Netlab Reference Manual rbftrain

rbftrain


Purpose

Two-stage training of an RBF network.

Description

net = rbftrain(net, options, x, t) uses a two-stage training algorithm to set the weights in the RBF model structure net. Each row of x corresponds to one input vector, and each row of t contains the corresponding target vector. The centres are determined by fitting a Gaussian mixture model with circular covariances using the EM algorithm through a call to rbfsetbf. (The mixture model is initialised using a small number of iterations of the K-means algorithm.) If the activation functions are Gaussians, then the basis function widths are set to the maximum inter-centre squared distance.
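For Gaussian activation functions, the width-setting step can be sketched as follows. This is illustrative only (rbftrain performs it internally); the variable names c and ncentres are assumptions, while dist2 is Netlab's pairwise squared-distance function.

% c is the ncentres x nin matrix of centres found by the GMM fit.
d2 = dist2(c, c);                     % pairwise squared distances between centres
wi = max(d2(:)) * ones(ncentres, 1);  % common width: maximum inter-centre squared distance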

For linear outputs, the hidden-to-output weights that give rise to the least-squares solution can then be determined using the pseudo-inverse. For neuroscale outputs, the hidden-to-output weights are determined using the iterative shadow targets algorithm. Although this two-stage procedure may not give solutions with as low an error as a general-purpose non-linear optimiser, it is much faster.
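The least-squares stage for linear outputs can be sketched as follows. This is not the internal Netlab code; it assumes the second return value of rbffwd is the matrix of hidden-unit activations.

% Given trained basis functions, compute the hidden activations and
% solve for the output weights with the pseudo-inverse.
[y, z] = rbffwd(net, x);          % z: hidden unit activations, one row per pattern
Phi = [z ones(size(x, 1), 1)];    % append a column of ones for the output biases
W = pinv(Phi) * t;                % hidden-to-output weights minimising the sum-of-squares error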

The options vector may have two rows: if this is the case, then the second row is passed to rbfsetbf, which allows the user to specify a different number of iterations for RBF and GMM training. The optional parameters to rbftrain have the following interpretations.

options(1) is set to 1 to display error values during EM training.

options(2) is a measure of the precision required for the value of the weights w at the solution.

options(3) is a measure of the precision required of the objective function at the solution. Both this and the previous condition must be satisfied for termination.

options(5) is set to 1 if the basis function parameters should remain unchanged; default 0.

options(6) is set to 1 if the output layer weights should be set using PCA. This is only relevant for Neuroscale outputs; default 0.

options(14) is the maximum number of iterations for the shadow targets algorithm; default 100.

Example

The following example creates an RBF network and then trains it:
net = rbf(1, 4, 1, 'gaussian');
options(1, :) = foptions;
options(2, :) = foptions;
options(2, 14) = 10;  % 10 iterations of EM
options(2, 5)  = 1;   % Check for covariance collapse in EM
net = rbftrain(net, options, x, t);

See Also

rbf, rbferr, rbffwd, rbfgrad, rbfpak, rbfunpak, rbfsetbf

Copyright (c) Ian T Nabney (1996-9)