Netlab Reference Manual

netopt


Purpose

Optimize the weights in a network model.

Synopsis

[net, options] = netopt(net, options, x, t, alg)
[net, options, varargout] = netopt(net, options, x, t, alg)

Description


netopt is a helper function which facilitates the training of networks using the general purpose optimizers, as well as sampling from the posterior distribution of parameters using general purpose Markov chain Monte Carlo sampling algorithms. It can be used with any function that searches in parameter space using error and gradient functions.

[net, options] = netopt(net, options, x, t, alg) takes a network data structure net, together with a vector options of parameters governing the behaviour of the optimization algorithm, a matrix x of input vectors and a matrix t of target vectors, and returns the trained network as well as an updated options vector. The string alg determines which optimization algorithm (conjgrad, quasinew, scg, etc.) or Monte Carlo algorithm (such as hmc) will be used.
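As a minimal sketch of this calling convention, the options vector can be initialised from the Netlab/foptions defaults and a few entries adjusted before the call. The indices used here follow the usual Netlab convention (options(1) controls the display of error values, options(14) the number of training cycles); check the documentation of the chosen algorithm for the entries it actually reads:

```matlab
% Set up the options vector for the chosen optimizer.
options = foptions;    % default 18-element options vector
options(1) = 1;        % display error values during training
options(14) = 100;     % maximum number of training cycles

% Train with the quasi-Newton algorithm; any optimizer name
% accepted by netopt could be passed as the final string.
[net, options] = netopt(net, options, x, t, 'quasinew');
```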

[net, options, varargout] = netopt(net, options, x, t, alg) also returns any additional return values from the optimization algorithm.

Examples

Suppose we create a 4-input, 3-hidden-unit, 2-output feed-forward network using net = mlp(4, 3, 2, 'linear'). We can then train the network with the scaled conjugate gradient algorithm by using net = netopt(net, options, x, t, 'scg'), where x and t are the input and target data matrices respectively, and the options vector is set appropriately for scg.
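Put together, the steps above can be sketched as follows. The random training data is purely illustrative, and the options settings again assume the standard Netlab convention (options(1) for verbosity, options(14) for the number of cycles):

```matlab
% Create a 4-input, 3-hidden-unit, 2-output MLP with linear outputs.
net = mlp(4, 3, 2, 'linear');

% Illustrative training data: 20 input vectors with matching targets.
x = randn(20, 4);
t = randn(20, 2);

% Options vector for the scaled conjugate gradient algorithm.
options = foptions;
options(1) = 1;     % show error values during training
options(14) = 50;   % number of training cycles

net = netopt(net, options, x, t, 'scg');
```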

If we also wish to plot the learning curve, we can use the additional return value errlog given by scg:

[net, options, errlog] = netopt(net, options, x, t, 'scg');

See Also

netgrad, bfgs, conjgrad, graddesc, hmc, scg

Copyright (c) Ian T Nabney (1996-9)