<html>
<head>
<title>
Netlab Reference Manual glmtrain
</title>
</head>
<body>
<H1> glmtrain
</H1>
<h2>
Purpose
</h2>
Specialised training of generalized linear model

<p><h2>
Description
</h2>
<CODE>net = glmtrain(net, options, x, t)</CODE> uses
the iterative reweighted least squares (IRLS)
algorithm to set the weights in the generalized linear model structure
<CODE>net</CODE>. This is a more efficient alternative to using <CODE>glmerr</CODE>
and <CODE>glmgrad</CODE> with a general-purpose non-linear optimisation routine via
<CODE>netopt</CODE>.
Note that for linear outputs a single pass through the
algorithm is all that is required, since the error function is quadratic in
the weights. The algorithm also handles scalar <CODE>alpha</CODE> and <CODE>beta</CODE>
terms. If you want to use more complicated priors, you should use
general-purpose non-linear optimisation algorithms.

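<p>To illustrate the IRLS iteration that <CODE>glmtrain</CODE> performs, here is a minimal Python/NumPy sketch for a logistic (binary) output with no prior. This is an illustration of the algorithm, not Netlab's MATLAB code; the function name, the small jitter added to the Hessian, and the tolerance are assumptions made for the sketch.

```python
import numpy as np

def irls_logistic(X, t, max_iter=100, tol=1e-8):
    """Illustrative IRLS for a logistic GLM (not Netlab code).

    X : (n, d) design matrix (include a column of ones for a bias).
    t : (n,) targets in {0, 1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iter):
        y = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        r = y * (1.0 - y)                  # IRLS reweighting terms
        # Newton step: solve (X' R X) dw = X' (t - y);
        # the tiny jitter keeps the system well-posed (an assumption).
        H = X.T @ (X * r[:, None]) + 1e-10 * np.eye(d)
        g = X.T @ (t - y)
        dw = np.linalg.solve(H, g)
        w = w + dw
        if np.max(np.abs(dw)) < tol:       # weight-precision stopping test
            break
    return w
```

For linear outputs the analogous step is a single least-squares solve, which is why one pass suffices in that case.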
<p>For logistic and softmax outputs, general priors can be handled, although
this requires the pseudo-inverse of the Hessian, which sacrifices the better
conditioning and some of the speed advantage of the normal equations.

<p>The error function value at the final set of weights is returned
in <CODE>options(8)</CODE>.
Each row of <CODE>x</CODE> corresponds to one
input vector and each row of <CODE>t</CODE> corresponds to one target vector.

<p>The optional parameters have the following interpretations.

<p><CODE>options(1)</CODE> is set to 1 to display error values during training.
If <CODE>options(1)</CODE> is set to 0,
then only warning messages are displayed. If <CODE>options(1)</CODE> is -1,
then nothing is displayed.

<p><CODE>options(2)</CODE> is a measure of the precision required for the value
of the weights <CODE>w</CODE> at the solution.

<p><CODE>options(3)</CODE> is a measure of the precision required of the objective
function at the solution. Both this and the previous condition must be
satisfied for termination.

<p><CODE>options(5)</CODE> is set to 1 if an approximation to the Hessian (which assumes
that all outputs are independent) is used for softmax outputs. With the default
value of 0, the exact Hessian (which is more expensive to compute) is used.

<p><CODE>options(14)</CODE> is the maximum number of iterations for the IRLS algorithm;
the default is 100.

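<p>The joint termination test implied by <CODE>options(2)</CODE> and <CODE>options(3)</CODE> can be expressed compactly. The following Python snippet is an illustrative paraphrase of that test (the names <CODE>tol_w</CODE> and <CODE>tol_f</CODE> are assumptions), not Netlab code.

```python
import numpy as np

def irls_converged(w_old, w_new, f_old, f_new, tol_w=1e-4, tol_f=1e-4):
    """Termination test in the spirit of options(2) and options(3):
    BOTH the weight change and the objective change must be small."""
    small_w = np.max(np.abs(w_new - w_old)) < tol_w   # options(2) analogue
    small_f = abs(f_new - f_old) < tol_f              # options(3) analogue
    return small_w and small_f
```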
<p><h2>
See Also
</h2>
<CODE><a href="glm.htm">glm</a></CODE>, <CODE><a href="glmerr.htm">glmerr</a></CODE>, <CODE><a href="glmgrad.htm">glmgrad</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>