Mercurial > hg > camir-aes2014
comparison: toolboxes/FullBNT-1.0.7/netlab3.3/glm.m @ 0:e9a9cd732c1e (tip)
first hg version after svn
author: wolffd
date:   Tue, 10 Feb 2015 15:05:51 +0000
parents: (none)
children: (none)
compared revisions: -1:000000000000 -> 0:e9a9cd732c1e
function net = glm(nin, nout, outfunc, prior, beta)
%GLM Create a generalized linear model.
%
% Description
%
% NET = GLM(NIN, NOUT, FUNC) takes the number of inputs and outputs for
% a generalized linear model, together with a string FUNC which
% specifies the output unit activation function, and returns a data
% structure NET. The weights are drawn from a zero-mean, isotropic
% Gaussian, with variance scaled by the fan-in of the output units.
% This makes use of the Matlab function RANDN and so the seed for the
% random weight initialization can be set using RANDN('STATE', S)
% where S is the seed value. The optional argument PRIOR sets the
% inverse variance for the weight initialization.
%
% The fields in NET are
%   type = 'glm'
%   nin = number of inputs
%   nout = number of outputs
%   nwts = total number of weights and biases
%   outfn = string describing the output unit activation function:
%     'linear'
%     'logistic'
%     'softmax'
%   w1 = first-layer weight matrix
%   b1 = first-layer bias vector
%
% NET = GLM(NIN, NOUT, FUNC, PRIOR), in which PRIOR is a scalar, allows
% the field NET.ALPHA in the data structure NET to be set,
% corresponding to a zero-mean isotropic Gaussian prior with inverse
% variance with value PRIOR. Alternatively, PRIOR can consist of a data
% structure with fields ALPHA and INDEX, allowing individual Gaussian
% priors to be set over groups of weights in the network. Here ALPHA is
% a column vector in which each element corresponds to a separate
% group of weights, which need not be mutually exclusive. The
% membership of the groups is defined by the matrix INDEX in which the
% columns correspond to the elements of ALPHA. Each column has one
% element for each weight in the matrix, in the order defined by the
% function GLMPAK, and each element is 1 or 0 according to whether the
% weight is a member of the corresponding group or not.
%
% NET = GLM(NIN, NOUT, FUNC, PRIOR, BETA) also sets the additional
% field NET.BETA in the data structure NET, where BETA corresponds to
% the inverse noise variance.
%
% See also
% GLMPAK, GLMUNPAK, GLMFWD, GLMERR, GLMGRAD, GLMTRAIN
%

% Copyright (c) Ian T Nabney (1996-2001)

net.type = 'glm';
net.nin = nin;
net.nout = nout;
net.nwts = (nin + 1)*nout;

% Valid output unit activation functions
outfns = {'linear', 'logistic', 'softmax'};

if ~any(strcmp(outfunc, outfns))
  error('Undefined activation function. Exiting.');
else
  net.outfn = outfunc;
end

if nargin > 3
  if isstruct(prior)
    net.alpha = prior.alpha;
    net.index = prior.index;
  elseif isscalar(prior)
    net.alpha = prior;
  else
    error('prior must be a scalar or structure');
  end
end

% Zero-mean Gaussian weights, variance scaled by fan-in (nin + 1 for the bias)
net.w1 = randn(nin, nout)/sqrt(nin + 1);
net.b1 = randn(1, nout)/sqrt(nin + 1);

if nargin == 5
  net.beta = beta;
end
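For readers outside MATLAB, the construction above can be sketched in NumPy. This is a rough, illustrative translation of the MATLAB function in this file, not part of Netlab: the dict stands in for the MATLAB struct, and the function name and keyword arguments are chosen here for clarity.

```python
import numpy as np

def glm(nin, nout, outfunc, prior=None, beta=None):
    """Illustrative NumPy sketch of Netlab's glm.m (not the original API)."""
    if outfunc not in ('linear', 'logistic', 'softmax'):
        raise ValueError('Undefined activation function.')
    net = {'type': 'glm', 'nin': nin, 'nout': nout,
           'nwts': (nin + 1) * nout,   # weights plus one bias per output
           'outfn': outfunc}
    if prior is not None:
        if isinstance(prior, dict):
            # Grouped priors: 'alpha' column vector, 'index' membership matrix
            net['alpha'] = prior['alpha']
            net['index'] = prior['index']
        elif np.isscalar(prior):
            # Single zero-mean isotropic Gaussian prior with inverse variance PRIOR
            net['alpha'] = prior
        else:
            raise ValueError('prior must be a scalar or structure')
    # Zero-mean Gaussian weights, variance scaled by fan-in (nin + 1 for the bias)
    net['w1'] = np.random.randn(nin, nout) / np.sqrt(nin + 1)
    net['b1'] = np.random.randn(1, nout) / np.sqrt(nin + 1)
    if beta is not None:
        net['beta'] = beta   # inverse noise variance
    return net
```

For example, `glm(3, 2, 'logistic', prior=0.1, beta=5.0)` mirrors the five-argument MATLAB call: it yields `nwts = (3 + 1) * 2 = 8`, a 3x2 weight matrix `w1`, a 1x2 bias row `b1`, and sets both `alpha` and `beta`.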