Netlab Reference Manual: mlp

mlp


Purpose

Create a 2-layer feedforward network.

Synopsis

net = mlp(nin, nhidden, nout, func)
net = mlp(nin, nhidden, nout, func, prior)
net = mlp(nin, nhidden, nout, func, prior, beta)

Description

net = mlp(nin, nhidden, nout, func) takes the number of inputs, hidden units and output units for a 2-layer feed-forward network, together with a string func which specifies the output unit activation function, and returns a data structure net. The weights are drawn from a zero-mean isotropic Gaussian, with variance scaled by the fan-in of the hidden or output units as appropriate. This makes use of the Matlab function randn, so the seed for the random weight initialization can be set using randn('state', s), where s is the seed value. The hidden units use the tanh activation function.
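For example, a minimal call might look like the following sketch (it assumes Netlab is on the Matlab path; the layer sizes and seed are arbitrary illustrations):

  % Fix the random seed so the weight initialization is reproducible
  randn('state', 42);

  % 2-layer network: 4 inputs, 5 hidden units, 3 outputs, linear output units
  net = mlp(4, 5, 3, 'linear');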

The fields in net are

  type = 'mlp'
  nin = number of inputs
  nhidden = number of hidden units
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string describing the output unit activation function:
      'linear'
      'logistic'
      'softmax'
  w1 = first-layer weight matrix
  b1 = first-layer bias vector
  w2 = second-layer weight matrix
  b2 = second-layer bias vector
Here w1 has dimensions nin times nhidden, b1 has dimensions 1 times nhidden, w2 has dimensions nhidden times nout, and b2 has dimensions 1 times nout.
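As an illustrative sketch of how these fields relate (the sizes below are hypothetical), the dimensions and the total weight count can be inspected directly:

  net = mlp(4, 5, 3, 'logistic');
  size(net.w1)    % 4 x 5   (nin x nhidden)
  size(net.b1)    % 1 x 5
  size(net.w2)    % 5 x 3   (nhidden x nout)
  size(net.b2)    % 1 x 3
  net.nwts        % 4*5 + 5 + 5*3 + 3 = 43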

net = mlp(nin, nhidden, nout, func, prior), in which prior is a scalar, allows the field net.alpha in the data structure net to be set, corresponding to a zero-mean isotropic Gaussian prior with inverse variance given by prior. Alternatively, prior can consist of a data structure with fields alpha and index, allowing individual Gaussian priors to be set over groups of weights in the network. Here alpha is a column vector in which each element corresponds to a separate group of weights, which need not be mutually exclusive. The membership of the groups is defined by the matrix index, in which the columns correspond to the elements of alpha. Each column has one element for each weight in the matrix, in the order defined by the function mlppak, and each element is 1 or 0 according to whether the weight is a member of the corresponding group or not. A utility function mlpprior is provided to help in setting up the prior data structure.
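For instance, the scalar form of the prior can be set as follows (the network sizes and the value 0.01 are arbitrary illustrations); for group-wise priors, a structure built with mlpprior can be passed in place of the scalar:

  % Isotropic Gaussian prior with inverse variance alpha = 0.01
  net = mlp(4, 5, 3, 'linear', 0.01);
  net.alpha       % 0.01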

net = mlp(nin, nhidden, nout, func, prior, beta) also sets the additional field net.beta in the data structure net, where beta corresponds to the inverse noise variance.
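Continuing the same hypothetical sizes, the prior and the inverse noise variance can be supplied together:

  % Scalar prior plus inverse noise variance beta
  net = mlp(4, 5, 3, 'linear', 0.01, 50);
  net.beta        % 50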

See Also

mlpprior, mlppak, mlpunpak, mlpfwd, mlperr, mlpbkp, mlpgrad

Copyright (c) Ian T Nabney (1996-9)