Netlab Reference Manual

netopt


Purpose

Optimize the weights in a network model.

Synopsis

[net, options] = netopt(net, options, x, t, alg)
[net, options, varargout] = netopt(net, options, x, t, alg)

Description


netopt is a helper function which facilitates the training of networks using the general purpose optimizers, as well as sampling from the posterior distribution of parameters using general purpose Markov chain Monte Carlo sampling algorithms. It can be used with any function that searches in parameter space using error and gradient functions.

[net, options] = netopt(net, options, x, t, alg) takes a network data structure net, together with a vector options of parameters governing the behaviour of the optimization algorithm, a matrix x of input vectors and a matrix t of target vectors, and returns the trained network as well as an updated options vector. The string alg determines which optimization algorithm (conjgrad, quasinew, scg, etc.) or Monte Carlo algorithm (such as hmc) will be used.

[net, options, varargout] = netopt(net, options, x, t, alg) also returns any additional return values from the optimisation algorithm.

Examples

Suppose we create a 4-input, 3 hidden unit, 2-output feed-forward network using net = mlp(4, 3, 2, 'linear'). We can then train the network with the scaled conjugate gradient algorithm by using net = netopt(net, options, x, t, 'scg'), where x and t are the input and target data matrices respectively, and the options vector is set appropriately for scg.
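The whole training session can be sketched as follows. This is an illustrative example only: the data here is random, and the options settings shown (display flag in options(1), cycle limit in options(14)) follow the usual Netlab convention but should be checked against the scg documentation for your version.

```matlab
% Illustrative data: 100 cases of 4-dimensional inputs, 2-dimensional targets.
x = randn(100, 4);
t = randn(100, 2);

% Create a 4-3-2 feed-forward network with linear outputs.
net = mlp(4, 3, 2, 'linear');

% Set up the options vector (assumed foptions-style settings).
options = zeros(1, 18);
options(1) = 1;      % display error values during training
options(14) = 100;   % maximum number of training cycles

% Train the network with scaled conjugate gradients.
[net, options] = netopt(net, options, x, t, 'scg');
```

Because netopt only dispatches to the named algorithm, switching optimizer is just a matter of changing the final string argument, e.g. 'quasinew' or 'conjgrad'.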

If we also wish to plot the learning curve, we can use the additional return value errlog given by scg:

[net, options, errlog] = netopt(net, options, x, t, 'scg');
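Since errlog holds one error value per training cycle, the learning curve can then be plotted directly (a minimal sketch; axis labels are illustrative):

```matlab
plot(errlog);
xlabel('Training cycle');
ylabel('Error');
```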

See Also

netgrad, bfgs, conjgrad, graddesc, hmc, scg

Copyright (c) Ian T Nabney (1996-9)