annotate toolboxes/FullBNT-1.0.7/netlab3.3/glm.m @ 0:cc4b1211e677 tip

initial commit to HG from Changeset: 646 (e263d8a21543) added further path and more save "camirversion.m"
author Daniel Wolff
date Fri, 19 Aug 2016 13:07:06 +0200
function net = glm(nin, nout, outfunc, prior, beta)
%GLM Create a generalized linear model.
%
% Description
%
% NET = GLM(NIN, NOUT, FUNC) takes the number of inputs and outputs for
% a generalized linear model, together with a string FUNC which
% specifies the output unit activation function, and returns a data
% structure NET. The weights are drawn from a zero-mean, isotropic
% Gaussian whose variance is scaled by the fan-in of the output units
% (the number of inputs plus the bias). This makes use of the Matlab
% function RANDN, so the seed for the random weight initialization can
% be set using RANDN('STATE', S) where S is the seed value.
%
% The fields in NET are
% type = 'glm'
% nin = number of inputs
% nout = number of outputs
% nwts = total number of weights and biases
% outfn = string describing the output unit activation function:
% 'linear'
% 'logistic'
% 'softmax'
% w1 = first-layer weight matrix
% b1 = first-layer bias vector
%
% NET = GLM(NIN, NOUT, FUNC, PRIOR), in which PRIOR is a scalar, allows
% the field NET.ALPHA in the data structure NET to be set,
% corresponding to a zero-mean isotropic Gaussian prior with inverse
% variance PRIOR. Alternatively, PRIOR can be a data structure with
% fields ALPHA and INDEX, allowing individual Gaussian priors to be set
% over groups of weights in the network. Here ALPHA is a column vector
% in which each element corresponds to a separate group of weights,
% and the groups need not be mutually exclusive. The membership of the
% groups is defined by the matrix INDEX, in which the columns
% correspond to the elements of ALPHA. Each column has one element for
% each weight in the network, in the order defined by the function
% GLMPAK, and each element is 1 or 0 according to whether the weight is
% a member of the corresponding group or not.
%
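% For example (hypothetical values), a structured prior for a model
% with one input and one output (so NWTS = 2), placing separate
% priors on the weight and on the bias, could be built as:
%
%	prior.alpha = [0.1; 1.0];	% one inverse variance per group
%	prior.index = [1 0; 0 1];	% column k flags the weights in group k
%	net = glm(1, 1, 'linear', prior);
%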
% NET = GLM(NIN, NOUT, FUNC, PRIOR, BETA) also sets the additional
% field NET.BETA in the data structure NET, where BETA corresponds to
% the inverse noise variance.
%
% See also
% GLMPAK, GLMUNPAK, GLMFWD, GLMERR, GLMGRAD, GLMTRAIN
%
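% A minimal usage sketch (assuming the other Netlab functions, such
% as GLMFWD, are on the path):
%
%	net = glm(2, 1, 'logistic');	% 2 inputs, 1 logistic output
%	x = randn(10, 2);	% 10 random input vectors
%	y = glmfwd(net, x);	% 10x1 outputs, each in (0, 1)
%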

% Copyright (c) Ian T Nabney (1996-2001)

net.type = 'glm';
net.nin = nin;
net.nout = nout;
% One weight per input plus one bias, for each output unit
net.nwts = (nin + 1)*nout;

outfns = {'linear', 'logistic', 'softmax'};

if ~any(strcmp(outfunc, outfns))
  error(['Undefined output activation function: ', outfunc]);
else
  net.outfn = outfunc;
end

if nargin > 3
  if isstruct(prior)
    net.alpha = prior.alpha;
    net.index = prior.index;
  elseif isscalar(prior)
    net.alpha = prior;
  else
    error('prior must be a scalar or a structure');
  end
end

% Draw weights from a zero-mean Gaussian with variance 1/(nin + 1),
% i.e. scaled by the fan-in (nin inputs plus a bias) of each output unit
net.w1 = randn(nin, nout)/sqrt(nin + 1);
net.b1 = randn(1, nout)/sqrt(nin + 1);

if nargin == 5
  net.beta = beta;
end