comparison toolboxes/FullBNT-1.0.7/nethelp3.3/olgd.htm @ 0:e9a9cd732c1e tip

first hg version after svn
author wolffd
date Tue, 10 Feb 2015 15:05:51 +0000
<html>
<head>
<title>
Netlab Reference Manual olgd
</title>
</head>
<body>
<H1> olgd
</H1>
<h2>
Purpose
</h2>
On-line gradient descent optimization.

<p><h2>
Description
</h2>
<CODE>[net, options, errlog, pointlog] = olgd(net, options, x, t)</CODE> uses
on-line gradient descent to find a local minimum of the error function for the
network <CODE>net</CODE> computed on the input data <CODE>x</CODE> and target values
<CODE>t</CODE>. A log of the error values after each cycle is (optionally)
returned in <CODE>errlog</CODE>, and a log of the points visited is (optionally)
returned in <CODE>pointlog</CODE>. Because the gradient is computed on-line
(i.e., after each pattern), this can be quite inefficient in Matlab.

<p>The error function value at the final weight vector is returned
in <CODE>options(8)</CODE>.

<p>The optional parameters have the following interpretations.

<p><CODE>options(1)</CODE> is set to 1 to display error values; this also logs
the error values in the return argument <CODE>errlog</CODE>, and the points
visited in the return argument <CODE>pointlog</CODE>. If <CODE>options(1)</CODE>
is set to 0, then only warning messages are displayed. If <CODE>options(1)</CODE>
is -1, then nothing is displayed.

<p><CODE>options(2)</CODE> is the precision required for the solution (the
weight vector). If the absolute difference between the weight values at two
successive steps is less than <CODE>options(2)</CODE>, then this condition is
satisfied.

<p><CODE>options(3)</CODE> is the precision required of the objective
function at the solution. If the absolute difference between the error
function values at two successive steps is less than <CODE>options(3)</CODE>,
then this condition is satisfied. Both this and the previous condition must be
satisfied for termination. Note that testing the function value at each
iteration roughly halves the speed of the algorithm.

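As an illustration (not part of the original manual), the two termination
tolerances can be set together before calling <CODE>olgd</CODE>; here
<CODE>net</CODE>, <CODE>x</CODE> and <CODE>t</CODE> are assumed to exist
already, and the tolerance values are arbitrary:
<PRE>

options = foptions;   % Netlab default options vector
options(2) = 1e-4;    % tolerance on the change in the weights
options(3) = 1e-4;    % tolerance on the change in the error function
[net, options] = olgd(net, options, x, t);
</PRE>
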
<p><CODE>options(5)</CODE> determines whether the patterns are sampled randomly
with replacement. If it is 1, they are; if it is 0 (the default), then patterns
are sampled in order.

<p><CODE>options(6)</CODE> determines whether the learning rate decays. If it
is 1, then the learning rate decays at a rate of <CODE>1/t</CODE>. If it is 0
(the default), then the learning rate is constant.

<p><CODE>options(9)</CODE> should be set to 1 to check the user-defined
gradient function.

<p><CODE>options(10)</CODE> returns the total number of function evaluations
(including those in any line searches).

<p><CODE>options(11)</CODE> returns the total number of gradient evaluations.

<p><CODE>options(14)</CODE> is the maximum number of iterations (passes through
the complete pattern set); default 100.

<p><CODE>options(17)</CODE> is the momentum; default 0.5.

<p><CODE>options(18)</CODE> is the learning rate; default 0.01.

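Putting the options above together, a sketch of a logged run might look as
follows; <CODE>x</CODE> and <CODE>t</CODE> are assumed training inputs and
targets, and the option values shown are illustrative rather than
recommendations:
<PRE>

net = mlp(2, 4, 1, 'linear');   % hypothetical 2-input, 4-hidden-unit network
options = foptions;
options(1) = 1;       % display and log error values
options(6) = 1;       % decay the learning rate as 1/t
options(14) = 50;     % at most 50 passes through the pattern set
options(17) = 0.9;    % momentum
options(18) = 0.05;   % initial learning rate
[net, options, errlog] = olgd(net, options, x, t);
options(8)            % error at the final weight vector
</PRE>
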
<p><h2>
Examples
</h2>
The following example performs on-line gradient descent on an MLP with
random sampling from the pattern set. The inputs <CODE>x</CODE> and targets
<CODE>t</CODE> are assumed to have been defined already.
<PRE>

net = mlp(5, 3, 1, 'linear');   % 5 inputs, 3 hidden units, 1 linear output
options = foptions;             % default options vector
options(18) = 0.01;             % learning rate
options(5) = 1;                 % sample patterns randomly with replacement
net = olgd(net, options, x, t);
</PRE>

<p><h2>
See Also
</h2>
<CODE><a href="graddesc.htm">graddesc</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)

</body>
</html>