function net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)
%RBF Creates an RBF network with specified architecture
%
% Description
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC) constructs and initialises a
% radial basis function network returning a data structure NET. The
% weights are all initialised with a zero mean, unit variance normal
% distribution, with the exception of the variances, which are set to
% one. This makes use of the Matlab function RANDN and so the seed for
% the random weight initialisation can be set using RANDN('STATE', S)
% where S is the seed value. The activation functions are defined in
% terms of the distance between the data point and the corresponding
% centre. Note that the functions are computed to a convenient
% constant multiple: for example, the Gaussian is not normalised.
% (Normalisation is not needed as the function outputs are linearly
% combined in the next layer.)
%
% The fields in NET are
%   type = 'rbf'
%   nin = number of inputs
%   nhidden = number of hidden units
%   nout = number of outputs
%   nwts = total number of weights and biases
%   actfn = string defining hidden unit activation function:
%     'gaussian' for a radially symmetric Gaussian function.
%     'tps' for r^2 log r, the thin plate spline function.
%     'r4logr' for r^4 log r.
%   outfn = string defining output error function:
%     'linear' for linear outputs (default) and SoS error.
%     'neuroscale' for Sammon stress measure.
%   c = centres
%   wi = squared widths (null for tps and r4logr)
%   w2 = second layer weight matrix
%   b2 = second layer bias vector
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC) allows the user to
% specify the type of error function to be used. The field OUTFN is
% set to the value of this string. Linear outputs (for regression
% problems) and Neuroscale outputs (for topographic mappings) are
% supported.
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC, PRIOR, BETA), in which
% PRIOR is a scalar, allows the field NET.ALPHA in the data structure
% NET to be set, corresponding to a zero-mean isotropic Gaussian prior
% whose inverse variance has the value PRIOR. Alternatively, PRIOR can
% be a data structure with fields ALPHA and INDEX, allowing
% individual Gaussian priors to be set over groups of weights in the
% network. Here ALPHA is a column vector in which each element
% corresponds to a separate group of weights, which need not be
% mutually exclusive. The membership of the groups is defined by the
% matrix INDEX, in which the columns correspond to the elements of ALPHA.
% Each column has one element for each weight in the network, in the
% order defined by the function RBFPAK, and each element is 1 or 0
% according to whether the weight is a member of the corresponding
% group or not. A utility function RBFPRIOR is provided to help in
% setting up the PRIOR data structure.
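%
% As an illustrative sketch (the architecture, ALPHA values and group
% split below are arbitrary choices for demonstration, assuming the 15
% first-layer parameters precede the 6 second-layer parameters in the
% RBFPAK ordering), a two-group prior separating the first-layer
% parameters (centres and widths) from the second-layer parameters
% (weights and bias) of a Gaussian network with NIN=2, NHIDDEN=5 and
% NOUT=1 (so NWTS=21) could be built by hand as follows:
%   prior.alpha = [0.1; 0.01];
%   prior.index = [[ones(15,1); zeros(6,1)], [zeros(15,1); ones(6,1)]];
%   net = rbf(2, 5, 1, 'gaussian', 'linear', prior);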
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC, PRIOR, BETA) also sets
% the additional field NET.BETA in the data structure NET, where BETA
% corresponds to the inverse noise variance.
%
% See also
% RBFERR, RBFFWD, RBFGRAD, RBFPAK, RBFTRAIN, RBFUNPAK
%

% Copyright (c) Ian T Nabney (1996-2001)

net.type = 'rbf';
net.nin = nin;
net.nhidden = nhidden;
net.nout = nout;

% Check that the activation and output functions are allowed types
actfns = {'gaussian', 'tps', 'r4logr'};
outfns = {'linear', 'neuroscale'};
if ~any(strcmp(rbfunc, actfns))
  error('Undefined activation function.')
else
  net.actfn = rbfunc;
end
if nargin <= 4
  % Default to linear outputs if no output function is given
  net.outfn = outfns{1};
elseif ~any(strcmp(outfunc, outfns))
  error('Undefined output function.')
else
  net.outfn = outfunc;
end

% Assume each function has a centre and a single width parameter, and that
% hidden layer to output weights include a bias. Only the Gaussian function
% requires a width.
net.nwts = nin*nhidden + (nhidden + 1)*nout;
if strcmp(rbfunc, 'gaussian')
  % Extra weights for width parameters
  net.nwts = net.nwts + nhidden;
end

if nargin > 5
  if isstruct(prior)
    % Grouped priors: one ALPHA per group, membership given by INDEX
    net.alpha = prior.alpha;
    net.index = prior.index;
  elseif isscalar(prior)
    % Single isotropic prior over all weights
    net.alpha = prior;
  else
    error('prior must be a scalar or a structure');
  end
  if nargin > 6
    net.beta = beta;
  end
end

% Draw all parameters from a zero-mean, unit-variance Gaussian ...
w = randn(1, net.nwts);
net = rbfunpak(net, w);

% ... then make the squared widths equal to one
if strcmp(rbfunc, 'gaussian')
  net.wi = ones(1, nhidden);
end

if strcmp(net.outfn, 'neuroscale')
  net.mask = rbfprior(rbfunc, nin, nhidden, nout);
end
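
% Example (an illustrative sketch appended here, not part of the original
% NETLAB help; the architecture, data size and random inputs are
% assumptions chosen only for demonstration):
%
%   x = randn(20, 2);                % 20 random 2-dimensional inputs
%   net = rbf(2, 7, 1, 'gaussian');  % 7 Gaussian basis functions, 1 output
%   y = rbffwd(net, x);              % forward propagate: y is 20 x 1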