toolboxes/FullBNT-1.0.7/netlab3.3/rbf.m
function net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)
%RBF Creates an RBF network with specified architecture
%
% Description
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC) constructs and initialises a
% radial basis function network returning a data structure NET. The
% weights are all initialised with a zero mean, unit variance normal
% distribution, with the exception of the variances, which are set to
% one. This makes use of the Matlab function RANDN and so the seed for
% the random weight initialization can be set using RANDN('STATE', S)
% where S is the seed value. The activation functions are defined in
% terms of the distance between the data point and the corresponding
% centre. Note that the functions are computed to a convenient
% constant multiple: for example, the Gaussian is not normalised.
% (Normalisation is not needed as the function outputs are linearly
% combined in the next layer.)
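%
% For example (a usage sketch), a network with 2 inputs, 5 Gaussian
% hidden units and a single linear output, with the seed fixed for
% reproducible weights, can be created by
%    randn('state', 42);
%    net = rbf(2, 5, 1, 'gaussian');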
%
% The fields in NET are
% type = 'rbf'
% nin = number of inputs
% nhidden = number of hidden units
% nout = number of outputs
% nwts = total number of weights and biases
% actfn = string defining hidden unit activation function (see the
% note after this list):
% 'gaussian' for a radially symmetric Gaussian function.
% 'tps' for r^2 log r, the thin plate spline function.
% 'r4logr' for r^4 log r.
% outfn = string defining output error function:
% 'linear' for linear outputs (default) and SoS error.
% 'neuroscale' for Sammon stress measure.
% c = centres
% wi = squared widths (null for r4logr and tps)
% w2 = second layer weight matrix
% b2 = second layer bias vector
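%
% (Note: as stated above, the activations are computed only up to a
% convenient constant multiple. As a sketch, with R2 the squared
% distance from a data point to a centre and WI the squared width, the
% Gaussian may be taken as exp(-R2/(2*WI)), while 'tps' and 'r4logr'
% may be evaluated as R2*log(R2) and R2^2*log(R2), which equal
% 2*r^2*log(r) and 2*r^4*log(r) respectively.)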
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC) allows the user to
% specify the type of error function to be used. The field OUTFN is
% set to the value of this string. Linear outputs (for regression
% problems) and Neuroscale outputs (for topographic mappings) are
% supported.
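%
% For example (a usage sketch), a thin plate spline network with
% Neuroscale outputs that maps 10-dimensional data to 2 dimensions
% could be created by
%    net = rbf(10, 7, 2, 'tps', 'neuroscale');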
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC, PRIOR, BETA), in which
% PRIOR is a scalar, allows the field NET.ALPHA in the data structure
% NET to be set, corresponding to a zero-mean isotropic Gaussian prior
% with inverse variance with value PRIOR. Alternatively, PRIOR can
% consist of a data structure with fields ALPHA and INDEX, allowing
% individual Gaussian priors to be set over groups of weights in the
% network. Here ALPHA is a column vector in which each element
% corresponds to a separate group of weights, which need not be
% mutually exclusive. The membership of the groups is defined by the
% matrix INDEX in which the columns correspond to the elements of
% ALPHA. Each column has one element for each weight in the matrix, in
% the order defined by the function RBFPAK, and each element is 1 or 0
% according to whether the weight is a member of the corresponding
% group or not. A utility function RBFPRIOR is provided to help in
% setting up the PRIOR data structure.
%
% NET = RBF(NIN, NHIDDEN, NOUT, RBFUNC, OUTFUNC, PRIOR, BETA) also sets
% the additional field NET.BETA in the data structure NET, where BETA
% corresponds to the inverse noise variance.
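%
% For example (a sketch with illustrative hyperparameter values), a
% scalar weight prior with inverse variance 0.01 together with an
% inverse noise variance of 50 can be set by
%    net = rbf(10, 7, 2, 'gaussian', 'linear', 0.01, 50);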
%
% See also
% RBFERR, RBFFWD, RBFGRAD, RBFPAK, RBFTRAIN, RBFUNPAK
%

% Copyright (c) Ian T Nabney (1996-2001)

net.type = 'rbf';
net.nin = nin;
net.nhidden = nhidden;
net.nout = nout;

% Check that function is an allowed type
actfns = {'gaussian', 'tps', 'r4logr'};
outfns = {'linear', 'neuroscale'};
if ~any(strcmp(rbfunc, actfns))
  error('Undefined activation function.')
else
  net.actfn = rbfunc;
end
if nargin <= 4
  net.outfn = outfns{1};
elseif ~any(strcmp(outfunc, outfns))
  error('Undefined output function.')
else
  net.outfn = outfunc;
end
% Assume each function has a centre and a single width parameter, and that
% hidden layer to output weights include a bias. Only the Gaussian function
% requires a width.
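% (For example, nin=2, nhidden=5 and nout=1 with 'gaussian' gives
% 2*5 + (5+1)*1 + 5 = 21 parameters.)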
net.nwts = nin*nhidden + (nhidden + 1)*nout;
if strcmp(rbfunc, 'gaussian')
  % Extra weights for width parameters
  net.nwts = net.nwts + nhidden;
end

if nargin > 5
  if isstruct(prior)
    net.alpha = prior.alpha;
    net.index = prior.index;
  elseif isscalar(prior)
    net.alpha = prior;
  else
    error('prior must be a scalar or a structure');
  end
  if nargin > 6
    net.beta = beta;
  end
end
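% Draw all parameters from a zero mean, unit variance Gaussian and let
% rbfunpak distribute them into the network's weight structures.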
w = randn(1, net.nwts);
net = rbfunpak(net, w);

% Make widths equal to one
if strcmp(rbfunc, 'gaussian')
  net.wi = ones(1, nhidden);
end

if strcmp(net.outfn, 'neuroscale')
  net.mask = rbfprior(rbfunc, nin, nhidden, nout);
end