Netlab Reference Manual: rbffwd

rbffwd


Purpose

Forward propagation through RBF network with linear outputs.

Synopsis

a = rbffwd(net, x)
[a, z, n2] = rbffwd(net, x)

Description

a = rbffwd(net, x) takes a network data structure net and a matrix x of input vectors and forward propagates the inputs through the network to generate a matrix a of output vectors. Each row of x corresponds to one input vector and each row of a contains the corresponding output vector. The activation function that is used is determined by net.actfn.
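As a minimal sketch (the network sizes and data below are illustrative, not taken from the manual), the two-argument form can be used like this:

% Create a Gaussian RBF network: 2 inputs, 5 hidden units, 1 linear output
net = rbf(2, 5, 1, 'gaussian');
x = randn(10, 2);        % 10 input vectors, one per row
a = rbffwd(net, x);      % a is 10 x 1: one output row per input row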

[a, z, n2] = rbffwd(net, x) also generates a matrix z of the hidden unit activations, where each row corresponds to one pattern. These hidden unit activations represent the design matrix for the RBF. The matrix n2 contains the squared distances between each basis function centre and each pattern; each row corresponds to a data point.
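With the same illustrative sizes as above, the three-output form exposes these intermediate quantities:

[a, z, n2] = rbffwd(net, x);
% z  is ndata x nhidden: the design matrix of hidden unit activations
% n2 is ndata x nhidden: squared distance from each pattern to each centre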

Examples

[a, z] = rbffwd(net, x);
nd = size(z);                             % nd(2) is the number of hidden units
temp = pinv([z ones(size(x, 1), 1)]) * t;
net.w2 = temp(1:nd(2), :);
net.b2 = temp(nd(2) + 1, :);

Here x is the input data, t contains the target values, and the pseudo-inverse is used to find the least-squares solution for the output weights and biases.
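As a quick sanity check (a sketch, not part of the original example), re-running the forward pass with the newly fitted weights gives the least-squares outputs:

a = rbffwd(net, x);                 % outputs using the fitted w2 and b2
sse = sum(sum((a - t).^2));         % sum-of-squares error of the linear fit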

See Also

Daniel@0: rbf, rbferr, rbfgrad, rbfpak, rbftrain, rbfunpak

Copyright (c) Ian T Nabney (1996-9)