Netlab Reference Manual rbffwd

rbffwd
Purpose

Forward propagation through RBF network with linear outputs.

Synopsis

a = rbffwd(net, x)
[a, z, n2] = rbffwd(net, x)

Description

a = rbffwd(net, x) takes a network data structure net and a matrix x of input vectors and forward propagates the inputs through the network to generate a matrix a of output vectors. Each row of x corresponds to one input vector and each row of a contains the corresponding output vector. The activation function that is used is determined by net.actfn.

[a, z, n2] = rbffwd(net, x) also generates a matrix z of the hidden unit activations, where each row corresponds to one pattern. These hidden unit activations represent the design matrix for the RBF. The matrix n2 contains the squared distances between each basis function centre and each pattern, where each row corresponds to a data point.
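The computation described above can be sketched outside Netlab as well. The following NumPy sketch (not Netlab code) assumes Gaussian basis functions with net.wi holding squared widths, one common convention, and shows how n2, z and a relate to each other:

```python
import numpy as np

def rbf_forward(x, centres, wi, w2, b2):
    """Sketch of an RBF forward pass with Gaussian units.

    x: (ndata, nin) input patterns, centres: (nhidden, nin) basis centres,
    wi: (nhidden,) squared widths, w2: (nhidden, nout) output weights,
    b2: (nout,) output biases.
    """
    # n2: squared distance between every input pattern and every centre
    n2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    # z: hidden unit activations -- the design matrix of the RBF
    z = np.exp(-n2 / (2.0 * wi[None, :]))
    # a: linear output layer applied to the activations
    a = z @ w2 + b2
    return a, z, n2
```

Each row of the returned a, z and n2 corresponds to one row of x, matching the conventions of rbffwd.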

Examples

[a, z] = rbffwd(net, x);

temp = pinv([z ones(size(x, 1), 1)]) * t;
net.w2 = temp(1: net.nhidden, :);
net.b2 = temp(net.nhidden + 1, :);

Here x is the input data, t are the target values, and we use the pseudo-inverse to find the output weights and biases.
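The same pseudo-inverse fit can be written compactly in NumPy. This is a sketch mirroring the example above, not Netlab code; fit_output_layer is a hypothetical helper name:

```python
import numpy as np

def fit_output_layer(z, t):
    """Least-squares fit of the output weights and biases of an RBF,
    given the design matrix z (ndata x nhidden) and targets t (ndata x nout).
    """
    ndata = z.shape[0]
    # Append a column of ones so the biases are fitted alongside the weights
    phi = np.hstack([z, np.ones((ndata, 1))])
    # Moore-Penrose pseudo-inverse gives the least-squares solution
    temp = np.linalg.pinv(phi) @ t
    w2 = temp[:-1, :]   # hidden-to-output weights
    b2 = temp[-1:, :]   # output biases (last row of temp)
    return w2, b2
```

Because the output layer is linear, this single pseudo-inverse solve finds the globally optimal output weights for fixed basis functions, which is why no iterative optimisation is needed for this layer.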

See Also

rbf, rbferr, rbfgrad, rbfpak, rbftrain, rbfunpak

Copyright (c) Ian T Nabney (1996-9)