net = glmtrain(net, options, x, t) uses the iterative reweighted least squares (IRLS) algorithm to set the weights in the generalised linear model structure net. This is a more efficient alternative to using glmerr and glmgrad and a non-linear optimisation routine through netopt. Note that for linear outputs, a single pass through the algorithm is all that is required, since the error function is quadratic in the weights. The algorithm also handles scalar alpha and beta terms. If you want to use more complicated priors, you should use general-purpose non-linear optimisation algorithms.
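As an illustration of the intended workflow, a minimal sketch might look like the following; the data and option settings are synthetic and illustrative, not taken from this page.

% Fit a linear-output GLM to synthetic data; x and t are made up here.
x = randn(100, 3);                        % 100 input vectors, 3 inputs each
t = x*[1; -2; 0.5] + 0.1*randn(100, 1);   % noisy linear targets

net = glm(3, 1, 'linear');                % create the GLM structure
options = zeros(1, 18);                   % options vector, defaults throughout
options(1) = 1;                           % display error values during training

net = glmtrain(net, options, x, t);       % for linear outputs one pass suffices
y = glmfwd(net, x);                       % forward propagate the trained model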
For logistic and softmax outputs, general priors can be handled, although this requires the pseudo-inverse of the Hessian, giving up the better conditioning and some of the speed advantage of the normal form equations.

The error function value at the final set of weights is returned in options(8). Each row of x corresponds to one input vector and each row of t corresponds to one target vector.
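As a sketch of this data layout for a softmax network, with synthetic inputs and 1-of-N target rows (and assuming, as the remark about options(8) implies, that glmtrain also returns the updated options vector):

% Three-class softmax GLM; each row of x is an input vector and each row
% of t a 1-of-N target vector. The data here is synthetic.
x = randn(150, 2);
labels = mod((1:150)', 3) + 1;                    % arbitrary class labels 1..3
t = zeros(150, 3);
t(sub2ind(size(t), (1:150)', labels)) = 1;        % one-hot target rows

net = glm(2, 3, 'softmax');
options = zeros(1, 18);
options(14) = 20;                                 % limit IRLS iterations

[net, options] = glmtrain(net, options, x, t);    % updated options(8) holds
disp(options(8));                                 % the final error value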
The optional parameters have the following interpretations.

options(1) is set to 1 to display error values during training. If options(1) is set to 0, then only warning messages are displayed. If options(1) is -1, then nothing is displayed.

options(2) is a measure of the precision required for the value of the weights w at the solution.

options(3) is a measure of the precision required of the objective function at the solution. Both this and the previous condition must be satisfied for termination.

options(5) is set to 1 if an approximation to the Hessian (which assumes that all outputs are independent) is used for softmax outputs. With the default value of 0 the exact Hessian (which is more expensive to compute) is used.

options(14) is the maximum number of iterations for the IRLS algorithm; default 100.
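Collecting the settings described above into an options vector might look like the following sketch; the tolerance values are illustrative rather than recommended defaults.

options = zeros(1, 18);    % unset entries take their default behaviour

options(1)  = 1;           %  1: show error values; 0: warnings only; -1: silent
options(2)  = 1e-4;        % precision required of the weights at the solution
options(3)  = 1e-4;        % precision required of the error function value
options(5)  = 0;           % 0: exact Hessian; 1: independent-output approximation (softmax)
options(14) = 100;         % maximum number of IRLS iterations (default 100)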
See also: glm, glmerr, glmgrad

Copyright (c) Ian T Nabney (1996-9)