<html>
<head>
<title>
Netlab Reference Manual mlp
</title>
</head>
<body>
<H1> mlp
</H1>
<h2>
Purpose
</h2>
Create a 2-layer feedforward network.

<p><h2>
Synopsis
</h2>
<PRE>
net = mlp(nin, nhidden, nout, func)
net = mlp(nin, nhidden, nout, func, prior)
net = mlp(nin, nhidden, nout, func, prior, beta)
</PRE>


<p><h2>
Description
</h2>
<CODE>net = mlp(nin, nhidden, nout, func)</CODE> takes the number of inputs,
hidden units and output units for a 2-layer feed-forward network,
together with a string <CODE>func</CODE> which specifies the output unit
activation function, and returns a data structure <CODE>net</CODE>. The
weights are drawn from a zero mean, unit variance isotropic Gaussian,
with variance scaled by the fan-in of the hidden or output units as
appropriate. This makes use of the Matlab function
<CODE>randn</CODE> and so the seed for the random weight initialization can be
set using <CODE>randn('state', s)</CODE> where <CODE>s</CODE> is the seed value.
The hidden units use the <CODE>tanh</CODE> activation function.

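<p>As a minimal sketch of the basic call (the layer sizes and seed value
here are illustrative, not part of the manual):
<PRE>

% Fix the seed so the random weight initialization is reproducible
randn('state', 42);

% 2-layer network: 4 inputs, 5 hidden units, 3 outputs, linear output units
net = mlp(4, 5, 3, 'linear');
</PRE>
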
<p>The fields in <CODE>net</CODE> are
<PRE>

  type = 'mlp'
  nin = number of inputs
  nhidden = number of hidden units
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string describing the output unit activation function:
      'linear'
      'logistic'
      'softmax'
  w1 = first-layer weight matrix
  b1 = first-layer bias vector
  w2 = second-layer weight matrix
  b2 = second-layer bias vector
</PRE>

Here <CODE>w1</CODE> has dimensions <CODE>nin</CODE> times <CODE>nhidden</CODE>, <CODE>b1</CODE> has
dimensions <CODE>1</CODE> times <CODE>nhidden</CODE>, <CODE>w2</CODE> has
dimensions <CODE>nhidden</CODE> times <CODE>nout</CODE>, and <CODE>b2</CODE> has
dimensions <CODE>1</CODE> times <CODE>nout</CODE>.

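<p>For instance, continuing the illustrative network above, the shapes can
be checked directly (this snippet is a sketch, not taken from the manual):
<PRE>

net = mlp(4, 5, 3, 'linear');
size(net.w1)   % 4 x 5   (nin times nhidden)
size(net.b1)   % 1 x 5   (1 times nhidden)
size(net.w2)   % 5 x 3   (nhidden times nout)
size(net.b2)   % 1 x 3   (1 times nout)
</PRE>
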
<p><CODE>net = mlp(nin, nhidden, nout, func, prior)</CODE>, in which <CODE>prior</CODE> is
a scalar, allows the field <CODE>net.alpha</CODE> in the data structure
<CODE>net</CODE> to be set, corresponding to a zero-mean isotropic Gaussian
prior whose inverse variance is given by <CODE>prior</CODE>. Alternatively,
<CODE>prior</CODE> can consist of a data structure with fields <CODE>alpha</CODE>
and <CODE>index</CODE>, allowing individual Gaussian priors to be set over
groups of weights in the network. Here <CODE>alpha</CODE> is a column vector
in which each element corresponds to a separate group of weights,
which need not be mutually exclusive. The membership of the groups is
defined by the matrix <CODE>index</CODE>, in which the columns correspond to
the elements of <CODE>alpha</CODE>. Each column has one element for each
weight in the network, in the order defined by the function
<CODE>mlppak</CODE>, and each element is 1 or 0 according to whether the
weight is a member of the corresponding group or not. A utility
function <CODE>mlpprior</CODE> is provided to help in setting up the
<CODE>prior</CODE> data structure.

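<p>As an illustration (the numeric values and the single weight group below
are made up for the example; in practice <CODE>mlpprior</CODE> helps build the
structured prior):
<PRE>

% Scalar prior: one inverse variance alpha = 0.01 shared by all weights
net = mlp(4, 5, 3, 'linear', 0.01);
net.alpha                      % 0.01

% Structured prior built by hand: a single group containing every weight,
% so index is a column of ones with one row per weight (net.nwts rows)
prior.alpha = 0.01;
prior.index = ones(net.nwts, 1);
net = mlp(4, 5, 3, 'linear', prior);
</PRE>
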
<p><CODE>net = mlp(nin, nhidden, nout, func, prior, beta)</CODE> also sets the
additional field <CODE>net.beta</CODE> in the data structure <CODE>net</CODE>, where
<CODE>beta</CODE> corresponds to the inverse noise variance.

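<p>A minimal sketch of the six-argument form (values again illustrative):
<PRE>

% Prior inverse variance 0.01, inverse noise variance 50
net = mlp(4, 5, 3, 'linear', 0.01, 50);
net.alpha                      % 0.01
net.beta                       % 50
</PRE>
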
<p><h2>
See Also
</h2>
<CODE><a href="mlpprior.htm">mlpprior</a></CODE>, <CODE><a href="mlppak.htm">mlppak</a></CODE>, <CODE><a href="mlpunpak.htm">mlpunpak</a></CODE>, <CODE><a href="mlpfwd.htm">mlpfwd</a></CODE>, <CODE><a href="mlperr.htm">mlperr</a></CODE>, <CODE><a href="mlpbkp.htm">mlpbkp</a></CODE>, <CODE><a href="mlpgrad.htm">mlpgrad</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>