<html>
<head>
<title>
Netlab Reference Manual graddesc
</title>
</head>
<body>
<H1> graddesc
</H1>
<h2>
Purpose
</h2>
Gradient descent optimization.

<p><h2>
Description
</h2>
<CODE>[x, options, flog, pointlog] = graddesc(f, x, options, gradf)</CODE> uses
batch gradient descent to find a local minimum of the function
<CODE>f(x)</CODE> whose gradient is given by <CODE>gradf(x)</CODE>. A log of the function values
after each cycle is (optionally) returned in <CODE>flog</CODE>, and a log
of the points visited is (optionally) returned in <CODE>pointlog</CODE>.

<p>Note that <CODE>x</CODE> is a row vector
and <CODE>f</CODE> returns a scalar value.
The point at which <CODE>f</CODE> has a local minimum
is returned as <CODE>x</CODE>. The function value at that point is returned
in <CODE>options(8)</CODE>.

<p><CODE>graddesc(f, x, options, gradf, p1, p2, ...)</CODE> allows
additional arguments to be passed to <CODE>f()</CODE> and <CODE>gradf()</CODE>.

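<p>As an illustrative sketch (not taken from the Netlab distribution), the
following shows a direct call to <CODE>graddesc</CODE> on a simple quadratic
objective. The M-file names <CODE>quadobj</CODE> and <CODE>quadgrad</CODE> are
hypothetical and assumed to be on the MATLAB path:
<PRE>

% Assumed helper M-files (hypothetical, not part of Netlab):
%   function f = quadobj(x)    % returns sum(x.^2, 2)
%   function g = quadgrad(x)   % returns 2*x
options = zeros(1, 18);
options(1) = 1;                 % display function values at each cycle
options(14) = 50;               % at most 50 iterations
x0 = [1, -2];                   % starting point (row vector)
[x, options, flog, pointlog] = graddesc('quadobj', x0, options, 'quadgrad');
</PRE>

The function value at the returned minimum is then available in
<CODE>options(8)</CODE>, and <CODE>flog</CODE> and <CODE>pointlog</CODE> record the
progress of the descent.
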
<p>The optional parameters have the following interpretations.

<p><CODE>options(1)</CODE> is set to 1 to display error values; this also logs the error
values in the return argument <CODE>flog</CODE>, and the points visited
in the return argument <CODE>pointlog</CODE>. If <CODE>options(1)</CODE> is set to 0,
then only warning messages are displayed. If <CODE>options(1)</CODE> is -1,
then nothing is displayed.

<p><CODE>options(2)</CODE> is the absolute precision required for the value
of <CODE>x</CODE> at the solution. If the absolute difference between
the values of <CODE>x</CODE> between two successive steps is less than
<CODE>options(2)</CODE>, then this condition is satisfied.

<p><CODE>options(3)</CODE> is a measure of the precision required of the objective
function at the solution. If the absolute difference between the
objective function values between two successive steps is less than
<CODE>options(3)</CODE>, then this condition is satisfied.
Both this and the previous condition must be
satisfied for termination.

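<p>Schematically, the combined stopping test looks like the following (a
paraphrase of the termination criteria described above, not the exact Netlab
source):
<PRE>

% xold, fold are the parameters and function value from the previous cycle;
% x, fnew are the corresponding values for the current cycle.
if max(abs(x - xold)) < options(2) & abs(fnew - fold) < options(3)
  return;   % both the parameter and function-value tolerances are met
end
</PRE>
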
<p><CODE>options(7)</CODE> determines the line minimisation method used. If it
is set to 1 then a line minimiser is used (in the direction of the negative
gradient). If it is 0 (the default), then each parameter update
is a fixed multiple (the learning rate)
of the negative gradient added to a fixed multiple (the momentum) of
the previous parameter update.

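<p>In other words, with <CODE>options(7) = 0</CODE> each cycle applies an update of
the following form (a schematic sketch, not the exact Netlab source), where
<CODE>eta</CODE> is the learning rate <CODE>options(18)</CODE> and <CODE>mom</CODE> is
the momentum <CODE>options(17)</CODE>:
<PRE>

% dx holds the previous cycle's parameter update (initially zero).
grad = feval(gradf, x);    % gradient at the current point
dx = mom*dx - eta*grad;    % previous update blended with the new gradient step
x = x + dx;                % apply the parameter update
</PRE>
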
<p><CODE>options(9)</CODE> should be set to 1 to check the user defined gradient
function <CODE>gradf</CODE> with <CODE>gradchek</CODE>. This is carried out at
the initial parameter vector <CODE>x</CODE>.

<p><CODE>options(10)</CODE> returns the total number of function evaluations (including
those in any line searches).

<p><CODE>options(11)</CODE> returns the total number of gradient evaluations.

<p><CODE>options(14)</CODE> is the maximum number of iterations; default 100.

<p><CODE>options(15)</CODE> is the precision in parameter space of the line search;
default <CODE>foptions(2)</CODE>.

<p><CODE>options(17)</CODE> is the momentum; default 0.5. It should be scaled by the
inverse of the number of data points.

<p><CODE>options(18)</CODE> is the learning rate; default 0.01. It should be
scaled by the inverse of the number of data points.

<p><h2>
Examples
</h2>
An example of how this function can be used to train a neural network is:
<PRE>

options = zeros(1, 18);
options(18) = 0.1/size(x, 1);   % learning rate scaled by the number of data points
net = netopt(net, options, x, t, 'graddesc');
</PRE>

Note how the learning rate (<CODE>options(18)</CODE>) is scaled by the number of
data points.

<p><h2>
See Also
</h2>
<CODE><a href="conjgrad.htm">conjgrad</a></CODE>, <CODE><a href="linemin.htm">linemin</a></CODE>, <CODE><a href="olgd.htm">olgd</a></CODE>, <CODE><a href="minbrack.htm">minbrack</a></CODE>, <CODE><a href="quasinew.htm">quasinew</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>