<html>
<head>
<title>
Netlab Reference Manual demev1
</title>
</head>
<body>
<H1> demev1
</H1>
<h2>
Purpose
</h2>
Demonstrate Bayesian regression for the MLP.

<p><h2>
Synopsis
</h2>
<PRE>
demev1</PRE>


<p><h2>
Description
</h2>
The problem consists of an input variable <CODE>x</CODE>, sampled from a
Gaussian distribution, and a target variable <CODE>t</CODE> generated by
computing <CODE>sin(2*pi*x)</CODE> and adding Gaussian noise. A 2-layer
network with linear outputs is trained by minimizing a sum-of-squares
error function with an isotropic Gaussian regularizer, using the scaled
conjugate gradient optimizer. The hyperparameters <CODE>alpha</CODE> and
<CODE>beta</CODE> are re-estimated using the function <CODE>evidence</CODE>. A graph
is plotted of the original function, the training data, the trained
network function, and the error bars.
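
<p>The procedure described above can be sketched roughly as follows. This is a
minimal outline using the Netlab functions <CODE>mlp</CODE>, <CODE>netopt</CODE>,
<CODE>evidence</CODE> and <CODE>mlpevfwd</CODE>; the data parameters, network
size, option settings and number of outer loops are illustrative assumptions
and need not match those used in <CODE>demev1</CODE> itself:
<PRE>
% Generate data: x Gaussian, t = sin(2*pi*x) plus Gaussian noise
% (sample sizes and noise level are assumptions, not the demo's values)
x = 0.25 + 0.07*randn(100, 1);
t = sin(2*pi*x) + 0.1*randn(size(x));

% 2-layer MLP with linear outputs; alpha is the weight-decay
% hyperparameter (isotropic Gaussian prior), beta the inverse noise variance
alpha = 0.01;
beta = 50.0;
net = mlp(1, 3, 1, 'linear', alpha, beta);

% Alternate between training with scaled conjugate gradients and
% re-estimating alpha and beta with the evidence procedure
options = zeros(1, 18);
options(1) = 1;          % display error values during training
options(14) = 100;       % training cycles per outer loop
for k = 1:3
  [net, options] = netopt(net, options, x, t, 'scg');
  [net, gamma] = evidence(net, x, t);
end

% Forward propagation with error bars from the evidence approximation
xtest = linspace(0, 1, 50)';
[y, sig2] = mlpevfwd(net, x, t, xtest);
sig = sqrt(sig2);        % predictive standard deviation for error bars
</PRE>
<p>The quantity <CODE>gamma</CODE> returned by <CODE>evidence</CODE> is the
estimated number of well-determined parameters, which the evidence
procedure uses when re-estimating <CODE>alpha</CODE> and <CODE>beta</CODE>.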

<p><h2>
See Also
</h2>
<CODE><a href="evidence.htm">evidence</a></CODE>, <CODE><a href="mlp.htm">mlp</a></CODE>, <CODE><a href="scg.htm">scg</a></CODE>, <CODE><a href="demard.htm">demard</a></CODE>, <CODE><a href="demmlp1.htm">demmlp1</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>