<html>
<head>
<title>
Netlab Reference Manual rbftrain
</title>
</head>
<body>
<H1> rbftrain
</H1>
<h2>
Purpose
</h2>
Two stage training of RBF network.

<p><h2>
Description
</h2>
<CODE>net = rbftrain(net, options, x, t)</CODE> uses a
two stage training
algorithm to set the weights in the RBF model structure <CODE>net</CODE>.
Each row of <CODE>x</CODE> corresponds to one
input vector and each row of <CODE>t</CODE> contains the corresponding target vector.
The centres are determined by fitting a Gaussian mixture model
with circular covariances using the EM algorithm through a call to
<CODE>rbfsetbf</CODE>. (The mixture model is
initialised using a small number of iterations of the K-means algorithm.)
If the activation functions are Gaussians, the basis function widths
are then set to the maximum inter-centre squared distance.

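<p>As an illustrative sketch (not the Netlab source), the basis function stage for
Gaussian activations could be written as follows, assuming the Netlab conventions
that <CODE>dist2</CODE> returns squared distances between two sets of row vectors and
that <CODE>net.c</CODE> and <CODE>net.wi</CODE> hold the centres and squared widths:
<PRE>

% Sketch only: set the centres by EM, then apply the width rule.
net = rbfsetbf(net, options, x);    % fit centres with a circular-covariance GMM
d2 = dist2(net.c, net.c);           % squared distances between all pairs of centres
net.wi = max(d2(:)) * ones(1, net.nhidden);  % widths = maximum inter-centre squared distance
</PRE>
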
<p>For linear outputs,
the hidden to output
weights that give rise to the least squares solution
can then be determined using the pseudo-inverse. For neuroscale outputs,
the hidden to output weights are determined using the iterative shadow
targets algorithm.
Although this two stage
procedure may not give solutions with as low an error as using general
purpose non-linear optimisers, it is much faster.

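<p>For linear outputs the second stage is a linear least squares problem. A minimal
sketch of the idea (not the Netlab source), assuming that the basis function
activations are available as the second output of <CODE>rbffwd</CODE>:
<PRE>

% Sketch only: least squares output weights via the pseudo-inverse.
[y, act] = rbffwd(net, x);          % act holds the basis function activations
Phi = [act ones(size(x, 1), 1)];    % design matrix with a bias column
W = pinv(Phi) * t;                  % least squares solution
net.w2 = W(1:net.nhidden, :);       % hidden to output weights
net.b2 = W(net.nhidden + 1, :);     % output biases
</PRE>
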
<p>The options vector may have two rows: if this is the case, then the second row
is passed to <CODE>rbfsetbf</CODE>, which allows the user to specify a different
number of iterations for RBF and GMM training.
The optional parameters to <CODE>rbftrain</CODE> have the following interpretations.

<p><CODE>options(1)</CODE> is set to 1 to display error values during EM training.

<p><CODE>options(2)</CODE> is a measure of the precision required for the value
of the weights <CODE>w</CODE> at the solution.

<p><CODE>options(3)</CODE> is a measure of the precision required of the objective
function at the solution. Both this and the previous condition must be
satisfied for termination.

<p><CODE>options(5)</CODE> is set to 1 if the basis function parameters should remain
unchanged; default 0.

<p><CODE>options(6)</CODE> is set to 1 if the output layer weights should be
set using PCA. This is only relevant for Neuroscale outputs; default 0.

<p><CODE>options(14)</CODE> is the maximum number of iterations for the shadow
targets algorithm; default 100.

<p><h2>
Example
</h2>
The following example creates an RBF network and then trains it:
<PRE>

net = rbf(1, 4, 1, 'gaussian');
options(1, :) = foptions;
options(2, :) = foptions;
options(2, 14) = 10;   % 10 iterations of EM
options(2, 5) = 1;     % Check for covariance collapse in EM
net = rbftrain(net, options, x, t);
</PRE>
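
<p>Once trained, the network can be evaluated on new inputs with <CODE>rbffwd</CODE>;
for example, assuming a test matrix <CODE>xtest</CODE> with one input vector per row:
<PRE>

ytest = rbffwd(net, xtest);   % forward propagate the trained network
</PRE>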

<p><h2>
See Also
</h2>
<CODE><a href="rbf.htm">rbf</a></CODE>, <CODE><a href="rbferr.htm">rbferr</a></CODE>, <CODE><a href="rbffwd.htm">rbffwd</a></CODE>, <CODE><a href="rbfgrad.htm">rbfgrad</a></CODE>, <CODE><a href="rbfpak.htm">rbfpak</a></CODE>, <CODE><a href="rbfunpak.htm">rbfunpak</a></CODE>, <CODE><a href="rbfsetbf.htm">rbfsetbf</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)

</body>
</html>