<html>
<head>
<title>
Netlab Reference Manual rbf
</title>
</head>
<body>
<H1> rbf
</H1>
<h2>
Purpose
</h2>
Creates an RBF network with specified architecture

<p><h2>
Synopsis
</h2>
<PRE>

net = rbf(nin, nhidden, nout, rbfunc)
net = rbf(nin, nhidden, nout, rbfunc, outfunc)
net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)
</PRE>


<p><h2>
Description
</h2>
<CODE>net = rbf(nin, nhidden, nout, rbfunc)</CODE> constructs and initialises
a radial basis function network, returning a data structure <CODE>net</CODE>.
The weights are all initialised from a zero mean, unit variance normal
distribution, with the exception of the variances (the squared widths
<CODE>wi</CODE>), which are set to one.
This makes use of the Matlab function
<CODE>randn</CODE>, and so the seed for the random weight initialisation can be
set using <CODE>randn('state', s)</CODE>, where <CODE>s</CODE> is the seed value. The
activation functions are defined in terms of the distance between
the data point and the corresponding centre. Note that the functions are
only computed up to a convenient constant multiple: for example, the Gaussian
is not normalised. (Normalisation is not needed, as the function outputs
are linearly combined in the next layer.)
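
<p>For example, the following sketch (the seed value and network sizes are
chosen arbitrarily) fixes the random seed so that repeated construction
gives identical initial weights:
<PRE>

randn('state', 42);               % fix the seed for reproducible initialisation
net1 = rbf(2, 4, 1, 'gaussian');
randn('state', 42);               % reset to the same seed
net2 = rbf(2, 4, 1, 'gaussian');
% net1 and net2 now contain identical centres, weights and biases
</PRE>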

<p>The fields in <CODE>net</CODE> are
<PRE>

  type = 'rbf'
  nin = number of inputs
  nhidden = number of hidden units
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string defining hidden unit activation function:
      'gaussian' for a radially symmetric Gaussian function.
      'tps' for r^2 log r, the thin plate spline function.
      'r4logr' for r^4 log r.
  outfn = string defining output error function:
      'linear' for linear outputs (default) and SoS error.
      'neuroscale' for Sammon stress measure.
  c = centres
  wi = squared widths (null for 'r4logr' and 'tps')
  w2 = second layer weight matrix
  b2 = second layer bias vector
</PRE>
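
<p>A quick way to check the architecture is to inspect these fields directly.
The sizes shown in the comments below are what is expected from the field
definitions above, assuming the usual Netlab layout of one centre per row
(the network dimensions are arbitrary):
<PRE>

net = rbf(3, 10, 2, 'gaussian');
net.type                 % 'rbf'
size(net.c)              % [10 3]  one centre per hidden unit
size(net.w2)             % [10 2]  second layer weight matrix
size(net.b2)             % [1 2]   second layer bias vector
</PRE>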


<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc)</CODE> allows the user to
specify the type of error function to be used. The field <CODE>outfn</CODE>
is set to the value of this string. Linear outputs (for regression problems)
and Neuroscale outputs (for topographic mappings) are supported.
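
<p>For instance, a sketch of a Neuroscale network mapping 10-dimensional data
down to 2 dimensions (the dimensions are illustrative only):
<PRE>

net = rbf(10, 7, 2, 'gaussian', 'neuroscale');
net.outfn                % 'neuroscale'
</PRE>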

<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)</CODE>,
in which <CODE>prior</CODE> is
a scalar, allows the field <CODE>net.alpha</CODE> in the data structure
<CODE>net</CODE> to be set, corresponding to a zero-mean isotropic Gaussian
prior whose inverse variance is given by <CODE>prior</CODE>. Alternatively,
<CODE>prior</CODE> can consist of a data structure with fields <CODE>alpha</CODE>
and <CODE>index</CODE>, allowing individual Gaussian priors to be set over
groups of weights in the network. Here <CODE>alpha</CODE> is a column vector
in which each element corresponds to a separate group of weights,
which need not be mutually exclusive. The membership of the groups is
defined by the matrix <CODE>index</CODE>, in which the columns correspond to
the elements of <CODE>alpha</CODE>. Each column has one element for each
weight in the network, in the order defined by the function
<CODE>rbfpak</CODE>, and each element is 1 or 0 according to whether the
weight is a member of the corresponding group or not. A utility
function <CODE>rbfprior</CODE> is provided to help in setting up the
<CODE>prior</CODE> data structure.
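
<p>A minimal sketch of the structured form is given below. The network size,
coefficient values and grouping are illustrative only, and the assumption that
the <CODE>nin*nhidden</CODE> centre parameters occupy the first slots of the
<CODE>rbfpak</CODE> ordering is just that, an assumption; <CODE>rbfprior</CODE>
is the recommended way to construct such a structure.
<PRE>

net = rbf(2, 4, 1, 'gaussian');
prior.alpha = [0.1; 0.5];             % one regularisation coefficient per group
prior.index = zeros(net.nwts, 2);     % one column per group, one row per weight
prior.index(1:2*4, 1) = 1;            % group 1: centre parameters (assumed to come
                                      % first in the rbfpak ordering)
prior.index(2*4+1:net.nwts, 2) = 1;   % group 2: all remaining parameters
net = rbf(2, 4, 1, 'gaussian', 'linear', prior, 50);
</PRE>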

<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)</CODE> also sets the
additional field <CODE>net.beta</CODE> in the data structure <CODE>net</CODE>, where
<CODE>beta</CODE> corresponds to the inverse noise variance.
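
<p>For example, with a scalar prior (the values below are arbitrary):
<PRE>

net = rbf(1, 5, 1, 'gaussian', 'linear', 0.1, 50);
net.alpha                % 0.1, the inverse prior variance
net.beta                 % 50, the inverse noise variance
</PRE>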

<p><h2>
Example
</h2>
The following code constructs an RBF network with 1 input node, 1 output node
and 5 hidden nodes, and then propagates some data <CODE>x</CODE> through it.
<PRE>

x = linspace(-1, 1, 20)';        % example input data: one column per input dimension
net = rbf(1, 5, 1, 'tps');
[y, act] = rbffwd(net, x);
</PRE>


<p><h2>
See Also
</h2>
<CODE><a href="rbferr.htm">rbferr</a></CODE>, <CODE><a href="rbffwd.htm">rbffwd</a></CODE>, <CODE><a href="rbfgrad.htm">rbfgrad</a></CODE>, <CODE><a href="rbfpak.htm">rbfpak</a></CODE>, <CODE><a href="rbftrain.htm">rbftrain</a></CODE>, <CODE><a href="rbfunpak.htm">rbfunpak</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>