<html>
<head>
<title>
Netlab Reference Manual netopt
</title>
</head>
<body>
<H1> netopt
</H1>
<h2>
Purpose
</h2>
Optimize the weights in a network model.

<p><h2>
Synopsis
</h2>
<PRE>
[net, options] = netopt(net, options, x, t, alg)
[net, options, varargout] = netopt(net, options, x, t, alg)
</PRE>


<p><h2>
Description
</h2>

<p><CODE>netopt</CODE> is a helper function that facilitates the training of
networks using the general-purpose optimizers, as well as sampling from the
posterior distribution of parameters using general-purpose Markov chain
Monte Carlo sampling algorithms. It can be used with any function that
searches in parameter space using error and gradient functions.

<p><CODE>[net, options] = netopt(net, options, x, t, alg)</CODE> takes a network
data structure <CODE>net</CODE>, together with a vector <CODE>options</CODE> of
parameters governing the behaviour of the optimization algorithm, a
matrix <CODE>x</CODE> of input vectors and a matrix <CODE>t</CODE> of target
vectors, and returns the trained network as well as an updated
<CODE>options</CODE> vector. The string <CODE>alg</CODE> determines which optimization
algorithm (<CODE>conjgrad</CODE>, <CODE>quasinew</CODE>, <CODE>scg</CODE>, etc.) or Monte
Carlo algorithm (such as <CODE>hmc</CODE>) will be used.

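<p>The <CODE>options</CODE> vector follows the standard Netlab convention; as a
rough sketch (only the most commonly used entries are shown here, and the
remaining entries are algorithm-specific):
<PRE>

options = zeros(1, 18);    % default options vector
options(1) = 1;            % display error values during training
options(14) = 100;         % maximum number of training cycles
</PRE>
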
<p><CODE>[net, options, varargout] = netopt(net, options, x, t, alg)</CODE>
also returns any additional return values from the optimization algorithm.

<p><h2>
Examples
</h2>
Suppose we create a feed-forward network with 4 inputs, 3 hidden units and
2 outputs using <CODE>net = mlp(4, 3, 2, 'linear')</CODE>. We can then train
the network with the scaled conjugate gradient algorithm by calling
<CODE>net = netopt(net, options, x, t, 'scg')</CODE>, where <CODE>x</CODE> and
<CODE>t</CODE> are the input and target data matrices respectively, and the
options vector is set appropriately for <CODE>scg</CODE>.

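<p>Put together as a self-contained sketch (the data matrices here are random
stand-ins, purely for illustration):
<PRE>

x = randn(100, 4);               % 100 stand-in input vectors
t = randn(100, 2);               % corresponding stand-in targets
net = mlp(4, 3, 2, 'linear');    % 4 inputs, 3 hidden units, 2 linear outputs
options = zeros(1, 18);          % default options vector
options(1) = 1;                  % display error values during training
options(14) = 100;               % at most 100 training cycles
net = netopt(net, options, x, t, 'scg');
</PRE>
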
<p>If we also wish to plot the learning curve, we can use the additional
return value <CODE>errlog</CODE> given by <CODE>scg</CODE>:
<PRE>

[net, options, errlog] = netopt(net, options, x, t, 'scg');
</PRE>

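<p>The recorded learning curve can then be plotted with standard MATLAB
graphics commands, for example:
<PRE>

plot(errlog);                 % error value at each training cycle
xlabel('training cycle');
ylabel('error');
</PRE>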

<p><h2>
See Also
</h2>
<CODE><a href="netgrad.htm">netgrad</a></CODE>, <CODE><a href="bfgs.htm">bfgs</a></CODE>, <CODE><a href="conjgrad.htm">conjgrad</a></CODE>, <CODE><a href="graddesc.htm">graddesc</a></CODE>, <CODE><a href="hmc.htm">hmc</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>