function [x, options, flog, pointlog] = graddesc(f, x, options, gradf, ...
                            varargin)
%GRADDESC Gradient descent optimization.
%
%	Description
%	[X, OPTIONS, FLOG, POINTLOG] = GRADDESC(F, X, OPTIONS, GRADF) uses
%	batch gradient descent to find a local minimum of the function F(X)
%	whose gradient is given by GRADF(X). A log of the function values
%	after each cycle is (optionally) returned in FLOG, and a log of the
%	points visited is (optionally) returned in POINTLOG.
%
%	Note that X is a row vector and F returns a scalar value. The point
%	at which F has a local minimum is returned as X. The function value
%	at that point is returned in OPTIONS(8).
%
%	GRADDESC(F, X, OPTIONS, GRADF, P1, P2, ...) allows additional
%	arguments to be passed to F() and GRADF().
%
%	The optional parameters have the following interpretations.
%
%	OPTIONS(1) is set to 1 to display error values; this also logs the
%	error values in the return argument FLOG, and the points visited in
%	the return argument POINTLOG. If OPTIONS(1) is set to 0, then only
%	warning messages are displayed. If OPTIONS(1) is -1, then nothing is
%	displayed.
%
%	OPTIONS(2) is the absolute precision required for the value of X at
%	the solution. If the absolute difference between the values of X
%	between two successive steps is less than OPTIONS(2), then this
%	condition is satisfied.
%
%	OPTIONS(3) is a measure of the precision required of the objective
%	function at the solution. If the absolute difference between the
%	objective function values between two successive steps is less than
%	OPTIONS(3), then this condition is satisfied. Both this and the
%	previous condition must be satisfied for termination.
%
%	OPTIONS(7) determines the line minimisation method used. If it is
%	set to 1 then a line minimiser is used (in the direction of the
%	negative gradient). If it is 0 (the default), then each parameter
%	update is a fixed multiple (the learning rate) of the negative
%	gradient added to a fixed multiple (the momentum) of the previous
%	parameter update.
%
%	OPTIONS(9) should be set to 1 to check the user defined gradient
%	function GRADF with GRADCHEK. This is carried out at the initial
%	parameter vector X.
%
%	OPTIONS(10) returns the total number of function evaluations
%	(including those in any line searches).
%
%	OPTIONS(11) returns the total number of gradient evaluations.
%
%	OPTIONS(14) is the maximum number of iterations; default 100.
%
%	OPTIONS(15) is the precision in parameter space of the line search;
%	default FOPTIONS(2).
%
%	OPTIONS(17) is the momentum; default 0.5. It should be scaled by the
%	inverse of the number of data points.
%
%	OPTIONS(18) is the learning rate; default 0.01. It should be scaled
%	by the inverse of the number of data points.
%
%	See also
%	CONJGRAD, LINEMIN, OLGD, MINBRACK, QUASINEW, SCG
%

%	Copyright (c) Ian T Nabney (1996-2001)

% Set up the options.
if length(options) < 18
  error('Options vector too short')
end

if (options(14))
  niters = options(14);
else
  niters = 100;
end

line_min_flag = 0;	% Flag for line minimisation option
if (round(options(7)) == 1)
  % Use line minimisation
  line_min_flag = 1;
  % Set options for line minimiser
  line_options = foptions;
  if options(15) > 0
    line_options(2) = options(15);
  end
else
  % Learning rate: must be positive
  if (options(18) > 0)
    eta = options(18);
  else
    eta = 0.01;
  end
  % Momentum term: allow zero momentum
  if (options(17) >= 0)
    mu = options(17);
  else
    mu = 0.5;
  end
end

% Check function string
f = fcnchk(f, length(varargin));
gradf = fcnchk(gradf, length(varargin));

% Display information if options(1) > 0
display = options(1) > 0;

% Work out if we need to compute f at each iteration.
% Needed if using line search, if displaying results, if the termination
% criterion requires it, or if the function value log is requested.
fcneval = (options(7) | display | options(3) | nargout > 2);

% Check gradients
if (options(9) > 0)
  feval('gradchek', x, f, gradf, varargin{:});
end

dxold = zeros(1, size(x, 2));
xold = x;
% Both must be initialised so that the termination test can be performed
fold = 0;
fnew = 0;
if fcneval
  fnew = feval(f, x, varargin{:});
  options(10) = options(10) + 1;
  fold = fnew;
end

% Main optimization loop.
for j = 1:niters
  xold = x;
  grad = feval(gradf, x, varargin{:});
  options(11) = options(11) + 1;	% Increment gradient evaluation counter
  if (line_min_flag ~= 1)
    dx = mu*dxold - eta*grad;
    x = x + dx;
    dxold = dx;
    if fcneval
      fold = fnew;
      fnew = feval(f, x, varargin{:});
      options(10) = options(10) + 1;
    end
  else
    % Do a line search: normalise search direction to have length 1
    sd = -grad./norm(grad);	% New search direction.
    fold = fnew;
    [lmin, line_options] = feval('linemin', f, x, sd, fold, ...
      line_options, varargin{:});
    options(10) = options(10) + line_options(10);
    x = xold + lmin*sd;
    fnew = line_options(8);
  end
  if nargout >= 3
    flog(j) = fnew;
    if nargout >= 4
      pointlog(j, :) = x;
    end
  end
  if display
    fprintf(1, 'Cycle %5d  Function %11.8f\n', j, fnew);
  end
  if (max(abs(x - xold)) < options(2) & abs(fnew - fold) < options(3))
    % Termination criteria are met
    options(8) = fnew;
    return;
  end
end

% Maximum number of iterations reached: return current function value
if fcneval
  options(8) = fnew;
else
  options(8) = feval(f, x, varargin{:});
  options(10) = options(10) + 1;
end
if (options(1) >= 0)
  disp(maxitmess);
end