annotate toolboxes/FullBNT-1.0.7/netlab3.3/quasinew.m @ 0:cc4b1211e677 tip

initial commit to HG from Changeset: 646 (e263d8a21543) added further path and more save "camirversion.m"
author Daniel Wolff
date Fri, 19 Aug 2016 13:07:06 +0200
function [x, options, flog, pointlog] = quasinew(f, x, options, gradf, ...
                                                 varargin)
%QUASINEW Quasi-Newton optimization.
%
% Description
% [X, OPTIONS, FLOG, POINTLOG] = QUASINEW(F, X, OPTIONS, GRADF) uses a
% quasi-Newton algorithm to find a local minimum of the function F(X)
% whose gradient is given by GRADF(X). Here X is a row vector and F
% returns a scalar value. The point at which F has a local minimum is
% returned as X. The function value at that point is returned in
% OPTIONS(8). A log of the function values after each cycle is
% (optionally) returned in FLOG, and a log of the points visited is
% (optionally) returned in POINTLOG.
%
% QUASINEW(F, X, OPTIONS, GRADF, P1, P2, ...) allows additional
% arguments to be passed to F() and GRADF().
%
% The optional parameters have the following interpretations.
%
% OPTIONS(1) is set to 1 to display error values; this also logs the
% function values in the return argument FLOG, and the points visited
% in the return argument POINTLOG. If OPTIONS(1) is set to 0, then only
% warning messages are displayed. If OPTIONS(1) is -1, then nothing is
% displayed.
%
% OPTIONS(2) is a measure of the absolute precision required for the
% value of X at the solution. If the absolute difference between the
% values of X between two successive steps is less than OPTIONS(2),
% then this condition is satisfied.
%
% OPTIONS(3) is a measure of the precision required of the objective
% function at the solution. If the absolute difference between the
% objective function values between two successive steps is less than
% OPTIONS(3), then this condition is satisfied. Both this and the
% previous condition must be satisfied for termination.
%
% OPTIONS(9) should be set to 1 to check the user defined gradient
% function.
%
% OPTIONS(10) returns the total number of function evaluations
% (including those in any line searches).
%
% OPTIONS(11) returns the total number of gradient evaluations.
%
% OPTIONS(14) is the maximum number of iterations; default 100.
%
% OPTIONS(15) is the precision in parameter space of the line search;
% default 1E-2.
%
% See also
% CONJGRAD, GRADDESC, LINEMIN, MINBRACK, SCG
%

% Copyright (c) Ian T Nabney (1996-2001)

% Set up the options.
if length(options) < 18
  error('Options vector too short')
end

if (options(14))
  niters = options(14);
else
  niters = 100;
end

% Set up options for line search
line_options = foptions;
% Don't need a very precise line search
if options(15) > 0
  line_options(2) = options(15);
else
  line_options(2) = 1e-2;  % Default
end
% Minimal fractional change in f from Newton step: otherwise do a line search
min_frac_change = 1e-4;

display = options(1);

% Next two lines allow quasinew to work with expression strings
f = fcnchk(f, length(varargin));
gradf = fcnchk(gradf, length(varargin));

% Check gradients
if (options(9))
  feval('gradchek', x, f, gradf, varargin{:});
end

nparams = length(x);
fnew = feval(f, x, varargin{:});
options(10) = options(10) + 1;
gradnew = feval(gradf, x, varargin{:});
options(11) = options(11) + 1;
p = -gradnew;            % Search direction
hessinv = eye(nparams);  % Initialise inverse Hessian to be identity matrix
j = 1;
if nargout >= 3
  flog(j, :) = fnew;
  if nargout == 4
    pointlog(j, :) = x;
  end
end

while (j <= niters)

  xold = x;
  fold = fnew;
  gradold = gradnew;

  x = xold + p;
  fnew = feval(f, x, varargin{:});
  options(10) = options(10) + 1;

  % This shouldn't occur, but the rest of the code depends on the search
  % direction p being downhill
  if (gradnew*p' >= 0)
    p = -p;
    if options(1) >= 0
      warning('search direction uphill in quasinew');
    end
  end

  % Does the Newton step reduce the function value sufficiently?
  if (fnew >= fold + min_frac_change * (gradnew*p'))
    % No it doesn't
    % Minimize along current search direction: must be less than Newton step
    [lmin, line_options] = feval('linemin', f, xold, p, fold, ...
      line_options, varargin{:});
    options(10) = options(10) + line_options(10);
    options(11) = options(11) + line_options(11);
    % Correct x and fnew to be the actual search point we have found
    x = xold + lmin * p;
    p = x - xold;
    fnew = line_options(8);
  end

  % Check for termination
  if (max(abs(x - xold)) < options(2) && max(abs(fnew - fold)) < options(3))
    options(8) = fnew;
    return;
  end
  gradnew = feval(gradf, x, varargin{:});
  options(11) = options(11) + 1;
  v = gradnew - gradold;
  vdotp = v*p';

  % Skip update to inverse Hessian if v*p' is not sufficiently positive
  if (vdotp*vdotp > eps*sum(v.^2)*sum(p.^2))
    Gv = (hessinv*v')';
    vGv = sum(v.*Gv);
    u = p./vdotp - Gv./vGv;
    % Use BFGS update rule
    hessinv = hessinv + (p'*p)/vdotp - (Gv'*Gv)/vGv + vGv*(u'*u);
  end
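  % For reference (annotation, not part of the original file): writing
  % G = hessinv and treating p and v as row vectors, the update above is
  %   G_new = G + (p'*p)/(v*p') - (G*v')*(v*G)/(v*G*v') + (v*G*v')*(u'*u),
  % with the correction vector
  %   u = p/(v*p') - (v*G)/(v*G*v'),
  % which is algebraically equivalent to the familiar product form of the
  % BFGS inverse-Hessian update,
  %   G_new = (I - p'*v/(v*p')) * G * (I - v'*p/(v*p')) + (p'*p)/(v*p').
  % The guard above skips the update when v*p' is too small, since both
  % forms divide by that curvature term.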

  p = -(hessinv * gradnew')';

  if (display > 0)
    fprintf(1, 'Cycle %4d  Function %11.6f\n', j, fnew);
  end

  j = j + 1;
  if nargout >= 3
    flog(j, :) = fnew;
    if nargout == 4
      pointlog(j, :) = x;
    end
  end
end

% If we get here, then we haven't terminated in the given number of
% iterations.

options(8) = fold;
if (options(1) >= 0)
  disp(maxitmess);
end
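
% Example usage (illustrative sketch, not part of the original file; it
% assumes the Netlab toolbox, including FOPTIONS and LINEMIN, is on the
% MATLAB path, and that anonymous function handles are available):
%
%   opts = foptions;          % default options vector
%   opts(1) = 1;              % print the function value each cycle
%   opts(14) = 50;            % allow at most 50 iterations
%   f = @(w) sum(w.^2);       % hypothetical quadratic objective
%   gradf = @(w) 2*w;         % its gradient (row vector in, row vector out)
%   [wmin, opts] = quasinew(f, [1 2], opts, gradf);
%   % opts(8) then holds the function value at wmin, and opts(10) and
%   % opts(11) the function and gradient evaluation counts.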