<html>
<head>
<title>
Netlab Reference Manual glmtrain
</title>
</head>
<body>
<H1> glmtrain
</H1>
<h2>
Purpose
</h2>
Specialised training of generalized linear model

<p><h2>
Description
</h2>
<CODE>net = glmtrain(net, options, x, t)</CODE> uses
the iterative reweighted least squares (IRLS)
algorithm to set the weights in the generalized linear model structure
<CODE>net</CODE>. This is a more efficient alternative to using <CODE>glmerr</CODE>
and <CODE>glmgrad</CODE> together with a general-purpose non-linear optimisation
routine through <CODE>netopt</CODE>.
Note that for linear outputs, a single pass through the
algorithm is all that is required, since the error function is quadratic in
the weights. The algorithm also handles scalar <CODE>alpha</CODE> and <CODE>beta</CODE>
terms. If you want to use more complicated priors, you should use
general-purpose non-linear optimisation algorithms.

<p>For logistic and softmax outputs, general priors can be handled, although
this requires the pseudo-inverse of the Hessian, giving up the better
conditioning and some of the speed advantage of the normal form equations.

<p>The error function value at the final set of weights is returned
in <CODE>options(8)</CODE>.
Each row of <CODE>x</CODE> corresponds to one
input vector and each row of <CODE>t</CODE> corresponds to one target vector.

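<p>For example, a complete training run might look as follows. This is a hedged sketch: the data are invented for illustration, <CODE>glm</CODE> is assumed to create the network structure, and <CODE>foptions</CODE> is assumed to supply a default options vector.

```matlab
% Sketch: fit a linear-output GLM to invented 2-D data.
x = randn(100, 2);                        % each row is one input vector
t = x*[1.5; -0.7] + 0.1*randn(100, 1);    % each row is one target vector

net = glm(2, 1, 'linear');    % 2 inputs, 1 output, linear output function
options = foptions;           % default options vector (assumed helper)
net = glmtrain(net, options, x, t);
% With linear outputs a single IRLS pass suffices, since the
% error function is quadratic in the weights.
```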
<p>The optional parameters have the following interpretations.

<p><CODE>options(1)</CODE> is set to 1 to display error values during training.
If <CODE>options(1)</CODE> is set to 0,
then only warning messages are displayed. If <CODE>options(1)</CODE> is -1,
then nothing is displayed.

<p><CODE>options(2)</CODE> is a measure of the precision required for the value
of the weights <CODE>w</CODE> at the solution.

<p><CODE>options(3)</CODE> is a measure of the precision required of the objective
function at the solution. Both this and the previous condition must be
satisfied for termination.

<p><CODE>options(5)</CODE> is set to 1 if an approximation to the Hessian (which assumes
that all outputs are independent) is used for softmax outputs. With the default
value of 0, the exact Hessian (which is more expensive to compute) is used.

<p><CODE>options(14)</CODE> is the maximum number of iterations for the IRLS algorithm;
the default is 100.

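<p>Putting these together, a typical options setup might be sketched as below. The tolerance values are illustrative only, <CODE>foptions</CODE> is assumed to supply the default options vector, and the second output argument is an assumption suggested by the <CODE>options(8)</CODE> note above.

```matlab
% Sketch: configure the IRLS options described above.
options = foptions;      % start from the default options vector (assumed helper)
options(1) = 1;          % display error values during training
options(2) = 1e-6;       % precision required of the weights (illustrative)
options(3) = 1e-6;       % precision required of the error function (illustrative)
options(14) = 50;        % at most 50 IRLS iterations (default 100)
[net, options] = glmtrain(net, options, x, t);   % second output assumed
fprintf('final error: %g\n', options(8));
```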
<p><h2>
See Also
</h2>
<CODE><a href="glm.htm">glm</a></CODE>, <CODE><a href="glmerr.htm">glmerr</a></CODE>, <CODE><a href="glmgrad.htm">glmgrad</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>