comparison toolboxes/FullBNT-1.0.7/nethelp3.3/rbf.htm @ 0:e9a9cd732c1e tip
first hg version after svn

author | wolffd
---|---
date | Tue, 10 Feb 2015 15:05:51 +0000
parents |
children |
<html>
<head>
<title>
Netlab Reference Manual rbf
</title>
</head>
<body>
<H1> rbf
</H1>
<h2>
Purpose
</h2>
Creates an RBF network with specified architecture

<p><h2>
Synopsis
</h2>
<PRE>

net = rbf(nin, nhidden, nout, rbfunc)
net = rbf(nin, nhidden, nout, rbfunc, outfunc)
net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)
</PRE>


<p><h2>
Description
</h2>
<CODE>net = rbf(nin, nhidden, nout, rbfunc)</CODE> constructs and initialises
a radial basis function network, returning a data structure <CODE>net</CODE>.
The weights are all initialised from a zero mean, unit variance normal
distribution, with the exception of the squared widths <CODE>wi</CODE>, which are set to one.
This makes use of the Matlab function
<CODE>randn</CODE>, so the seed for the random weight initialisation can be
set using <CODE>randn('state', s)</CODE> where <CODE>s</CODE> is the seed value. The
activation functions are defined in terms of the distance between
the data point and the corresponding centre. Note that the functions are
only computed up to a convenient constant multiple: for example, the Gaussian
is not normalised. (Normalisation is not needed as the function outputs
are linearly combined in the next layer.)
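
<p>For example, a minimal sketch of a reproducible construction (the network
sizes here are arbitrary illustrative values):
<PRE>

% Fix the random seed so the weight initialisation is repeatable
randn('state', 42);

% 2 inputs, 10 Gaussian hidden units, 1 output
net = rbf(2, 10, 1, 'gaussian');

% Inspect the randomly initialised parameters
disp(net.nwts);   % total number of weights and biases
disp(net.wi);     % squared widths, all set to one
</PRE>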

<p>The fields in <CODE>net</CODE> are
<PRE>

  type = 'rbf'
  nin = number of inputs
  nhidden = number of hidden units
  nout = number of outputs
  nwts = total number of weights and biases
  actfn = string defining hidden unit activation function:
      'gaussian' for a radially symmetric Gaussian function.
      'tps' for r^2 log r, the thin plate spline function.
      'r4logr' for r^4 log r.
  outfn = string defining output error function:
      'linear' for linear outputs (default) and sum-of-squares (SoS) error.
      'neuroscale' for Sammon stress measure.
  c = centres
  wi = squared widths (null for r4logr and tps)
  w2 = second layer weight matrix
  b2 = second layer bias vector
</PRE>
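
<p>As an illustrative sketch (not part of the original reference), the fields
listed above can be inspected directly on the returned structure; the exact
array shapes follow from <CODE>nin</CODE>, <CODE>nhidden</CODE> and <CODE>nout</CODE>:
<PRE>

net = rbf(1, 3, 1, 'gaussian');

net.type       % 'rbf'
net.nhidden    % 3
size(net.c)    % centres, one row per hidden unit (assumed layout)
size(net.w2)   % second layer weight matrix
net.wi         % squared widths, initialised to ones
</PRE>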


<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc)</CODE> allows the user to
specify the type of error function to be used. The field <CODE>outfn</CODE>
is set to the value of this string. Linear outputs (for regression problems)
and Neuroscale outputs (for topographic mappings) are supported.

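<p>For example (a sketch; the network sizes are arbitrary):
<PRE>

% Regression network with linear outputs and sum-of-squares error
net_reg = rbf(3, 8, 2, 'gaussian', 'linear');

% Topographic mapping from 10 dimensions down to 2 with the Neuroscale error
net_map = rbf(10, 8, 2, 'tps', 'neuroscale');
</PRE>
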
<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)</CODE>,
in which <CODE>prior</CODE> is
a scalar, allows the field <CODE>net.alpha</CODE> in the data structure
<CODE>net</CODE> to be set, corresponding to a zero-mean isotropic Gaussian
prior with inverse variance <CODE>prior</CODE>. Alternatively,
<CODE>prior</CODE> can consist of a data structure with fields <CODE>alpha</CODE>
and <CODE>index</CODE>, allowing individual Gaussian priors to be set over
groups of weights in the network. Here <CODE>alpha</CODE> is a column vector
in which each element corresponds to a separate group of weights,
which need not be mutually exclusive. The membership of the groups is
defined by the matrix <CODE>index</CODE>, in which the columns correspond to
the elements of <CODE>alpha</CODE>. Each column has one element for each
weight in the network, in the order defined by the function
<CODE>rbfpak</CODE>, and each element is 1 or 0 according to whether the
weight is a member of the corresponding group or not. A utility
function <CODE>rbfprior</CODE> is provided to help in setting up the
<CODE>prior</CODE> data structure.
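
<p>A sketch of building such a <CODE>prior</CODE> structure by hand, following
the description above (the grouping shown is purely illustrative; in practice
the <CODE>rbfprior</CODE> utility helps construct it):
<PRE>

net = rbf(1, 2, 1, 'gaussian');

% One hyperparameter per group of weights, as a column vector
prior.alpha = [0.1; 1.0];

% One column per group, one row per weight (in rbfpak order);
% entries are 1 where a weight belongs to the group, 0 otherwise
prior.index = zeros(net.nwts, 2);
prior.index(1:net.nwts-1, 1) = 1;   % illustrative grouping: all but the last weight
prior.index(net.nwts, 2) = 1;       % illustrative grouping: the last weight alone
</PRE>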

<p><CODE>net = rbf(nin, nhidden, nout, rbfunc, outfunc, prior, beta)</CODE> also sets the
additional field <CODE>net.beta</CODE> in the data structure <CODE>net</CODE>, where
<CODE>beta</CODE> corresponds to the inverse noise variance.

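<p>For example (a sketch), with a single scalar weight-prior hyperparameter
and a noise hyperparameter (the numerical values are arbitrary):
<PRE>

aw = 0.01;    % inverse variance of the isotropic weight prior
beta = 50.0;  % inverse noise variance

net = rbf(1, 5, 1, 'gaussian', 'linear', aw, beta);

% net.alpha and net.beta now hold the two hyperparameters
</PRE>
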
<p><h2>
Example
</h2>
The following code constructs an RBF network with 1 input node, 5 hidden
nodes and 1 output node, and then propagates some data <CODE>x</CODE> through it.
<PRE>

x = linspace(-1, 1, 20)';   % illustrative input data: one column per input
net = rbf(1, 5, 1, 'tps');
[y, act] = rbffwd(net, x);
</PRE>


<p><h2>
See Also
</h2>
<CODE><a href="rbferr.htm">rbferr</a></CODE>, <CODE><a href="rbffwd.htm">rbffwd</a></CODE>, <CODE><a href="rbfgrad.htm">rbfgrad</a></CODE>, <CODE><a href="rbfpak.htm">rbfpak</a></CODE>, <CODE><a href="rbftrain.htm">rbftrain</a></CODE>, <CODE><a href="rbfunpak.htm">rbfunpak</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>