Netlab Reference Manual rbftrain

rbftrain


Purpose

Two stage training of RBF network.

Description

net = rbftrain(net, options, x, t) uses a two stage training algorithm to set the weights in the RBF model structure net. Each row of x corresponds to one input vector and each row of t contains the corresponding target vector. The centres are determined by fitting a Gaussian mixture model with circular covariances using the EM algorithm through a call to rbfsetbf. (The mixture model is initialised using a small number of iterations of the K-means algorithm.) If the activation functions are Gaussians, the basis function widths are then set to the maximum inter-centre squared distance.
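The width-setting step for Gaussian activations can be sketched as follows. This is an illustration of the idea, not the exact Netlab implementation; it assumes net is an RBF structure with centres in net.c, widths in net.wi, and nhidden hidden units, and uses the Netlab function dist2 for pairwise squared distances.

```
% Sketch (assumed, not the exact rbftrain code): set every basis function
% width to the maximum squared distance between any pair of centres.
cdist = dist2(net.c, net.c);             % pairwise squared inter-centre distances
maxdist = max(cdist(:));                 % maximum inter-centre squared distance
net.wi = maxdist * ones(1, net.nhidden); % one common width per hidden unit
```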

For linear outputs, the hidden to output weights that give rise to the least squares solution can then be determined using the pseudo-inverse. For neuroscale outputs, the hidden to output weights are determined using the iterative shadow targets algorithm. Although this two stage procedure may not give solutions with as low an error as using general purpose non-linear optimisers, it is much faster.
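For linear outputs, the second stage can be sketched as below. This is a rough illustration under stated assumptions rather than the exact rbftrain code: it assumes the basis functions have already been set, and relies on rbffwd returning the hidden unit activations as its second output.

```
% Sketch of the least-squares stage (assumed form, for illustration).
[y, act] = rbffwd(net, x);     % second return value: hidden unit activations
ndata = size(x, 1);
A = [act ones(ndata, 1)];      % append a column of ones for the output biases
W = pinv(A) * t;               % least squares solution via the pseudo-inverse
% The first net.nhidden rows of W are the hidden to output weights;
% the final row contains the output unit biases.
```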

The options vector may have two rows: if this is the case, then the second row is passed to rbfsetbf, which allows the user to specify a different number of iterations for RBF and GMM training. The optional parameters to rbftrain have the following interpretations.

options(1) is set to 1 to display error values during EM training.

options(2) is a measure of the precision required for the value of the weights w at the solution.

options(3) is a measure of the precision required of the objective function at the solution. Both this and the previous condition must be satisfied for termination.

options(5) is set to 1 if the basis function parameters should remain unchanged; default 0.

options(6) is set to 1 if the output layer weights should be set using PCA. This is only relevant for Neuroscale outputs; default 0.

options(14) is the maximum number of iterations for the shadow targets algorithm; default 100.

Example

The following example creates an RBF network and then trains it:

net = rbf(1, 4, 1, 'gaussian');
options(1, :) = foptions;
options(2, :) = foptions;
options(2, 14) = 10;  % 10 iterations of EM
options(2, 5)  = 1;   % Check for covariance collapse in EM
net = rbftrain(net, options, x, t);


See Also

rbf, rbferr, rbffwd, rbfgrad, rbfpak, rbfunpak, rbfsetbf

Copyright (c) Ian T Nabney (1996-9)