<html>
<head>
<title>
Netlab Reference Manual mlp
</title>
</head>
<body>
<H1> mlp
</H1>
<h2>
Purpose
</h2>
Create a 2-layer feedforward network.

<p><h2>
Synopsis
</h2>
<PRE>
net = mlp(nin, nhidden, nout, func)
net = mlp(nin, nhidden, nout, func, prior)
net = mlp(nin, nhidden, nout, func, prior, beta)
</PRE>

<p><h2>
Description
</h2>
<CODE>net = mlp(nin, nhidden, nout, func)</CODE> takes the number of inputs,
hidden units and output units for a 2-layer feed-forward network,
together with a string <CODE>func</CODE> which specifies the output unit
activation function, and returns a data structure <CODE>net</CODE>. The
weights are drawn from a zero mean, unit variance isotropic Gaussian,
with variance scaled by the fan-in of the hidden or output units as
appropriate. This makes use of the Matlab function
<CODE>randn</CODE> and so the seed for the random weight initialization can be
set using <CODE>randn('state', s)</CODE> where <CODE>s</CODE> is the seed value.
The hidden units use the <CODE>tanh</CODE> activation function.

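<p>As a minimal usage sketch (assuming the Netlab toolbox is on the Matlab
path, and with the layer sizes chosen purely for illustration), a network can
be created with a reproducible weight initialization as follows:
<PRE>

  % Fix the seed so that the random initial weights are reproducible
  randn('state', 42);

  % 4 inputs, 10 tanh hidden units, 2 outputs with logistic activations
  net = mlp(4, 10, 2, 'logistic');
</PRE>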
<p>The fields in <CODE>net</CODE> are
<PRE>

  type = 'mlp'
  nin = number of inputs
  nhidden = number of hidden units
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string describing the output unit activation function:
      'linear'
      'logistic'
      'softmax'
  w1 = first-layer weight matrix
  b1 = first-layer bias vector
  w2 = second-layer weight matrix
  b2 = second-layer bias vector
</PRE>

Here <CODE>w1</CODE> has dimensions <CODE>nin</CODE> times <CODE>nhidden</CODE>, <CODE>b1</CODE> has
dimensions <CODE>1</CODE> times <CODE>nhidden</CODE>, <CODE>w2</CODE> has
dimensions <CODE>nhidden</CODE> times <CODE>nout</CODE>, and <CODE>b2</CODE> has
dimensions <CODE>1</CODE> times <CODE>nout</CODE>.

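<p>The following sketch (with arbitrary layer sizes, chosen only for
illustration) shows how the field dimensions fit together:
<PRE>

  net = mlp(3, 5, 2, 'linear');
  size(net.w1)   % 3 x 5   (nin times nhidden)
  size(net.b1)   % 1 x 5
  size(net.w2)   % 5 x 2   (nhidden times nout)
  size(net.b2)   % 1 x 2
  net.nwts       % 32 = (3+1)*5 + (5+1)*2
</PRE>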
<p><CODE>net = mlp(nin, nhidden, nout, func, prior)</CODE>, in which <CODE>prior</CODE> is
a scalar, allows the field <CODE>net.alpha</CODE> in the data structure
<CODE>net</CODE> to be set, corresponding to a zero-mean isotropic Gaussian
prior whose inverse variance is <CODE>prior</CODE>. Alternatively,
<CODE>prior</CODE> can consist of a data structure with fields <CODE>alpha</CODE>
and <CODE>index</CODE>, allowing individual Gaussian priors to be set over
groups of weights in the network. Here <CODE>alpha</CODE> is a column vector
in which each element corresponds to a separate group of weights,
which need not be mutually exclusive. The membership of the groups is
defined by the matrix <CODE>index</CODE> in which the columns correspond to
the elements of <CODE>alpha</CODE>. Each column has one element for each
weight in the network, in the order defined by the function
<CODE>mlppak</CODE>, and each element is 1 or 0 according to whether the
weight is a member of the corresponding group or not. A utility
function <CODE>mlpprior</CODE> is provided to help in setting up the
<CODE>prior</CODE> data structure.

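<p>A brief sketch of the scalar form (the value 0.01 is arbitrary and chosen
only for illustration; see the <CODE>mlpprior</CODE> page for building the
grouped <CODE>alpha</CODE>/<CODE>index</CODE> structure):
<PRE>

  % Single zero-mean Gaussian prior with inverse variance 0.01 over all weights
  net = mlp(3, 5, 2, 'linear', 0.01);
  net.alpha      % 0.01
</PRE>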
<p><CODE>net = mlp(nin, nhidden, nout, func, prior, beta)</CODE> also sets the
additional field <CODE>net.beta</CODE> in the data structure <CODE>net</CODE>, where
<CODE>beta</CODE> corresponds to the inverse noise variance.

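<p>A sketch of the full call (again with arbitrary illustrative values):
<PRE>

  % Scalar weight prior (inverse variance 0.01) and inverse noise variance 50
  net = mlp(3, 5, 2, 'linear', 0.01, 50);
  net.beta       % 50
</PRE>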
<p><h2>
See Also
</h2>
<CODE><a href="mlpprior.htm">mlpprior</a></CODE>, <CODE><a href="mlppak.htm">mlppak</a></CODE>, <CODE><a href="mlpunpak.htm">mlpunpak</a></CODE>, <CODE><a href="mlpfwd.htm">mlpfwd</a></CODE>, <CODE><a href="mlperr.htm">mlperr</a></CODE>, <CODE><a href="mlpbkp.htm">mlpbkp</a></CODE>, <CODE><a href="mlpgrad.htm">mlpgrad</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)

</body>
</html>