comparison toolboxes/FullBNT-1.0.7/nethelp3.3/glm.htm @ 0:e9a9cd732c1e tip

first hg version after svn
author wolffd
date Tue, 10 Feb 2015 15:05:51 +0000
<html>
<head>
<title>
Netlab Reference Manual glm
</title>
</head>
<body>
<H1> glm
</H1>
<h2>
Purpose
</h2>
Create a generalized linear model.

<p><h2>
Synopsis
</h2>
<PRE>
net = glm(nin, nout, func)
net = glm(nin, nout, func, prior)
net = glm(nin, nout, func, prior, beta)
</PRE>


<p><h2>
Description
</h2>

<p><CODE>net = glm(nin, nout, func)</CODE> takes the number of inputs
and outputs for a generalized linear model, together
with a string <CODE>func</CODE> which specifies the output unit activation function,
and returns a data structure <CODE>net</CODE>. The weights are drawn from a zero-mean
isotropic Gaussian with variance scaled by the fan-in of the
output units. This makes use of the Matlab function
<CODE>randn</CODE>, so the seed for the random weight initialization can be
set using <CODE>randn('state', s)</CODE>, where <CODE>s</CODE> is the seed value. The optional
argument <CODE>prior</CODE> sets the inverse variance for the weight
initialization.

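<p>For example, a softmax model with two inputs and three output classes
might be created and evaluated as follows (the data values here are
purely illustrative; <CODE>glmfwd</CODE> computes the forward propagation):
<PRE>
% Fix the random seed so the weight initialization is reproducible
randn('state', 42);

net = glm(2, 3, 'softmax');   % 2 inputs, 3 outputs, softmax activation
x = randn(5, 2);              % 5 illustrative input vectors
y = glmfwd(net, x);           % 5x3 matrix; each row sums to 1
</PRE>
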
<p>The fields in <CODE>net</CODE> are
<PRE>
  type = 'glm'
  nin = number of inputs
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string describing the output unit activation function:
      'linear'
      'logistic'
      'softmax'
  w1 = first-layer weight matrix
  b1 = first-layer bias vector
</PRE>


<p><CODE>net = glm(nin, nout, func, prior)</CODE>, in which <CODE>prior</CODE> is
a scalar, sets the field
<CODE>net.alpha</CODE> in the data structure <CODE>net</CODE>, corresponding
to a zero-mean isotropic Gaussian prior with inverse variance
<CODE>prior</CODE>. Alternatively, <CODE>prior</CODE> can be a data
structure with fields <CODE>alpha</CODE> and <CODE>index</CODE>, allowing individual
Gaussian priors to be set over groups of weights in the network. Here
<CODE>alpha</CODE> is a column vector in which each element corresponds to a
separate group of weights; the groups need not be mutually exclusive. The
membership of the groups is defined by the matrix <CODE>index</CODE>, whose
columns correspond to the elements of <CODE>alpha</CODE>. Each column has
one element for each weight in the network, in the order defined by the
function <CODE>glmpak</CODE>, and each element is 1 or 0 according to whether
the weight is a member of the corresponding group or not.

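<p>As an illustrative sketch (the inverse variances are arbitrary), a
grouped prior for a model with 2 inputs and 1 output, so that there are
3 parameters in <CODE>glmpak</CODE> order (two weights followed by one
bias), could place separate priors on the weights and the bias:
<PRE>
% Group 1: the two input weights; group 2: the bias
prior.alpha = [0.01; 0.1];     % one inverse variance per group
prior.index = [1 0;            % w1(1,1) belongs to group 1
               1 0;            % w1(2,1) belongs to group 1
               0 1];           % b1(1) belongs to group 2
net = glm(2, 1, 'linear', prior);
</PRE>
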
<p><CODE>net = glm(nin, nout, func, prior, beta)</CODE> also sets the
additional field <CODE>net.beta</CODE> in the data structure <CODE>net</CODE>, where
<CODE>beta</CODE> corresponds to the inverse noise variance.

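<p>For instance, a regression model whose outputs are assumed to carry
additive Gaussian noise of variance 0.01 (inverse variance 100) could be
created as follows; the prior and noise values are illustrative only:
<PRE>
alpha = 0.1;                       % inverse variance of the weight prior
beta = 100;                        % inverse noise variance (variance 0.01)
net = glm(3, 1, 'linear', alpha, beta);
</PRE>
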
<p><h2>
See Also
</h2>
<CODE><a href="glmpak.htm">glmpak</a></CODE>, <CODE><a href="glmunpak.htm">glmunpak</a></CODE>, <CODE><a href="glmfwd.htm">glmfwd</a></CODE>, <CODE><a href="glmerr.htm">glmerr</a></CODE>, <CODE><a href="glmgrad.htm">glmgrad</a></CODE>, <CODE><a href="glmtrain.htm">glmtrain</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>