<html>
<head>
<title>
Netlab Reference Manual netopt
</title>
</head>
<body>
<H1> netopt
</H1>
<h2>
Purpose
</h2>
Optimize the weights in a network model.

<p><h2>
Synopsis
</h2>
<PRE>
[net, options] = netopt(net, options, x, t, alg)
[net, options, varargout] = netopt(net, options, x, t, alg)
</PRE>


<p><h2>
Description
</h2>

<p><CODE>netopt</CODE> is a helper function that facilitates the training of
networks with the general-purpose optimizers, as well as sampling from the
posterior distribution of the parameters with general-purpose Markov chain
Monte Carlo sampling algorithms. It can be used with any function that
searches in parameter space using error and gradient functions.

<p><CODE>[net, options] = netopt(net, options, x, t, alg)</CODE> takes a network
data structure <CODE>net</CODE>, together with a vector <CODE>options</CODE> of
parameters governing the behaviour of the optimization algorithm, a
matrix <CODE>x</CODE> of input vectors and a matrix <CODE>t</CODE> of target
vectors, and returns the trained network as well as an updated
<CODE>options</CODE> vector. The string <CODE>alg</CODE> determines which optimization
algorithm (<CODE>conjgrad</CODE>, <CODE>quasinew</CODE>, <CODE>scg</CODE>, etc.) or Monte
Carlo algorithm (such as <CODE>hmc</CODE>) will be used.

<p><CODE>[net, options, varargout] = netopt(net, options, x, t, alg)</CODE>
also returns any additional return values from the optimization algorithm.

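<p>As a minimal sketch of a typical call (assuming <CODE>net</CODE>, <CODE>x</CODE> and
<CODE>t</CODE> already exist, and that the options vector follows the usual Netlab
convention in which <CODE>options(1)</CODE> is the display flag and <CODE>options(14)</CODE>
the number of training cycles), the optimizer might be invoked as follows:
<PRE>

% Sketch only: option indices assumed to follow the usual Netlab convention.
options = zeros(1, 18);    % initialise the options vector
options(1) = 1;            % display error values during training
options(14) = 100;         % number of training cycles

% Train with the quasi-Newton optimizer; 'scg' or 'conjgrad' are used the same way.
[net, options] = netopt(net, options, x, t, 'quasinew');
</PRE>
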
<p><h2>
Examples
</h2>
Suppose we create a feed-forward network with 4 inputs, 3 hidden units and
2 outputs using <CODE>net = mlp(4, 3, 2, 'linear')</CODE>. We can then train
the network with the scaled conjugate gradient algorithm by calling
<CODE>net = netopt(net, options, x, t, 'scg')</CODE>, where <CODE>x</CODE> and
<CODE>t</CODE> are the input and target data matrices respectively, and the
options vector is set appropriately for <CODE>scg</CODE>.

<p>If we also wish to plot the learning curve, we can use the additional
return value <CODE>errlog</CODE> given by <CODE>scg</CODE>:
<PRE>

[net, options, errlog] = netopt(net, options, x, t, 'scg');
</PRE>
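<p>The recorded error values can then be plotted against the training cycle;
a minimal sketch (the axis labels are illustrative only):
<PRE>

% Plot the learning curve recorded by scg (sketch only).
plot(errlog);
xlabel('Training cycle');
ylabel('Error');
</PRE>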


<p><h2>
See Also
</h2>
<CODE><a href="netgrad.htm">netgrad</a></CODE>, <CODE><a href="bfgs.htm">bfgs</a></CODE>, <CODE><a href="conjgrad.htm">conjgrad</a></CODE>, <CODE><a href="graddesc.htm">graddesc</a></CODE>, <CODE><a href="hmc.htm">hmc</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>