annotate toolboxes/FullBNT-1.0.7/nethelp3.3/demmlp1.htm @ 0:e9a9cd732c1e tip
first hg version after svn
author: wolffd
date: Tue, 10 Feb 2015 15:05:51 +0000
<html>
<head>
<title>
Netlab Reference Manual demmlp1
</title>
</head>
<body>
<H1> demmlp1
</H1>
<h2>
Purpose
</h2>
Demonstrate simple regression using a multi-layer perceptron

<p><h2>
Synopsis
</h2>
<PRE>
demmlp1</PRE>


<p><h2>
Description
</h2>
The problem consists of one input variable <CODE>x</CODE> and one target variable
<CODE>t</CODE>. The data are generated by sampling <CODE>x</CODE> at equal intervals and
computing the targets as <CODE>sin(2*pi*x)</CODE> plus additive Gaussian noise. A
2-layer network with linear outputs is trained by minimizing a
sum-of-squares error function using the scaled conjugate gradient optimizer.

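<p>Netlab itself is MATLAB, and the demo builds and trains the network with the functions listed under See Also. As a language-neutral illustration of the same task, the following is a minimal NumPy sketch: it generates the noisy sine data, fits a 2-layer tanh network with a linear output by minimizing a sum-of-squares error, and uses plain batch gradient descent rather than Netlab's scaled conjugate gradient optimizer. All names and hyperparameters here are illustrative, not Netlab's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample x at equal intervals; targets are sin(2*pi*x) plus Gaussian noise.
n = 20
x = np.linspace(0.0, 1.0, n).reshape(-1, 1)
t = np.sin(2 * np.pi * x) + 0.20 * rng.standard_normal((n, 1))

# 2-layer network: 1 input, a few tanh hidden units, 1 linear output.
nhid = 3
w1 = 0.5 * rng.standard_normal((1, nhid)); b1 = np.zeros(nhid)
w2 = 0.5 * rng.standard_normal((nhid, 1)); b2 = np.zeros(1)

def forward(xv):
    h = np.tanh(xv @ w1 + b1)   # hidden activations, shape (n, nhid)
    return h, h @ w2 + b2       # linear output, shape (n, 1)

# Batch gradient descent on E = 0.5 * sum((y - t)**2)
# (a stand-in for Netlab's scaled conjugate gradient optimizer).
lr = 0.2
for _ in range(10000):
    h, y = forward(x)
    g = (y - t) / n                    # mean-scaled output error
    gw2 = h.T @ g;  gb2 = g.sum(0)     # output-layer gradients
    dh = (g @ w2.T) * (1 - h ** 2)     # backprop through tanh
    gw1 = x.T @ dh; gb1 = dh.sum(0)    # hidden-layer gradients
    w1 -= lr * gw1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

_, y = forward(x)
sse = 0.5 * float(((y - t) ** 2).sum())
print(f"final sum-of-squares error: {sse:.4f}")
```

With three hidden units the fitted curve tracks one period of the sine; the sum-of-squares error ends well below the no-learning baseline (roughly 0.5 * sum(t**2)).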
<p><h2>
See Also
</h2>
<CODE><a href="mlp.htm">mlp</a></CODE>, <CODE><a href="mlperr.htm">mlperr</a></CODE>, <CODE><a href="mlpgrad.htm">mlpgrad</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>