diff toolboxes/FullBNT-1.0.7/nethelp3.3/netopt.htm @ 0:e9a9cd732c1e tip
first hg version after svn
author:   wolffd
date:     Tue, 10 Feb 2015 15:05:51 +0000
parents:
children:
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/toolboxes/FullBNT-1.0.7/nethelp3.3/netopt.htm	Tue Feb 10 15:05:51 2015 +0000
@@ -0,0 +1,75 @@
+<html>
+<head>
+<title>
+Netlab Reference Manual netopt
+</title>
+</head>
+<body>
+<H1> netopt
+</H1>
+<h2>
+Purpose
+</h2>
+Optimize the weights in a network model.
+
+<p><h2>
+Synopsis
+</h2>
+<PRE>
+[net, options] = netopt(net, options, x, t, alg)
+[net, options, varargout] = netopt(net, options, x, t, alg)
+</PRE>
+
+
+<p><h2>
+Description
+</h2>
+
+<p><CODE>netopt</CODE> is a helper function which facilitates the training of
+networks using the general-purpose optimizers, as well as sampling from the
+posterior distribution of parameters using general-purpose Markov chain
+Monte Carlo sampling algorithms. It can be used with any function that
+searches in parameter space using error and gradient functions.
+
+<p><CODE>[net, options] = netopt(net, options, x, t, alg)</CODE> takes a network
+data structure <CODE>net</CODE>, together with a vector <CODE>options</CODE> of
+parameters governing the behaviour of the optimization algorithm, a
+matrix <CODE>x</CODE> of input vectors and a matrix <CODE>t</CODE> of target
+vectors, and returns the trained network as well as an updated
+<CODE>options</CODE> vector. The string <CODE>alg</CODE> determines which optimization
+algorithm (<CODE>conjgrad</CODE>, <CODE>quasinew</CODE>, <CODE>scg</CODE>, etc.) or Monte
+Carlo algorithm (such as <CODE>hmc</CODE>) will be used.
+
+<p><CODE>[net, options, varargout] = netopt(net, options, x, t, alg)</CODE>
+also returns any additional return values from the optimization algorithm.
+
+<p><h2>
+Examples
+</h2>
+Suppose we create a 4-input, 3-hidden-unit, 2-output feed-forward
+network using <CODE>net = mlp(4, 3, 2, 'linear')</CODE>. We can then train
+the network with the scaled conjugate gradient algorithm by using
+<CODE>net = netopt(net, options, x, t, 'scg')</CODE>, where <CODE>x</CODE> and
+<CODE>t</CODE> are the input and target data matrices respectively, and the
+options vector is set appropriately for <CODE>scg</CODE>.
+
+<p>If we also wish to plot the learning curve, we can use the additional
+return value <CODE>errlog</CODE> given by <CODE>scg</CODE>:
+<PRE>
+
+[net, options, errlog] = netopt(net, options, x, t, 'scg');
+</PRE>
+
+
+<p><h2>
+See Also
+</h2>
+<CODE><a href="netgrad.htm">netgrad</a></CODE>, <CODE><a href="bfgs.htm">bfgs</a></CODE>, <CODE><a href="conjgrad.htm">conjgrad</a></CODE>, <CODE><a href="graddesc.htm">graddesc</a></CODE>, <CODE><a href="hmc.htm">hmc</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE><hr>
+<b>Pages:</b>
+<a href="index.htm">Index</a>
+<hr>
+<p>Copyright (c) Ian T Nabney (1996-9)
+
+
+</body>
+</html>
\ No newline at end of file
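
A minimal end-to-end MATLAB sketch of the example above (an illustration,
assuming Netlab is on the path; the data matrices x and t are random
placeholders, and the options settings follow the Netlab convention that
options(1) is the display flag and options(14) is the number of training
cycles):

<PRE>
x = randn(100, 4);               % placeholder inputs: 100 vectors, 4 dims
t = randn(100, 2);               % placeholder targets: 100 vectors, 2 dims

net = mlp(4, 3, 2, 'linear');    % 4-input, 3-hidden-unit, 2-output MLP

options = zeros(1, 18);          % Netlab-style options vector
options(1) = 1;                  % display error values during training
options(14) = 100;               % number of training cycles

% Train with scaled conjugate gradients, keeping the error log
[net, options, errlog] = netopt(net, options, x, t, 'scg');

plot(errlog);                    % learning curve: error per cycle
xlabel('Cycle');
ylabel('Error');
</PRE>

Swapping in another optimizer only changes the alg string, e.g.
netopt(net, options, x, t, 'quasinew').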