rbffwd

Synopsis

a = rbffwd(net, x)
[a, z, n2] = rbffwd(net, x)

Description
a = rbffwd(net, x) takes a network data structure net and a matrix x of input vectors and forward propagates the inputs through the network to generate a matrix a of output vectors. Each row of x corresponds to one input vector and each row of a contains the corresponding output vector. The activation function that is used is determined by net.actfn.
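As a minimal usage sketch (the network sizes and input data below are illustrative, not taken from the original text), a Gaussian RBF network created with Netlab's rbf function can be propagated like this:

net = rbf(2, 5, 1, 'gaussian');   % 2 inputs, 5 hidden units, 1 output
x = randn(10, 2);                 % 10 input vectors, one per row
a = rbffwd(net, x);               % a is 10-by-1: one output row per input row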
[a, z, n2] = rbffwd(net, x) also generates a matrix z of the hidden unit activations, where each row corresponds to one pattern. These hidden unit activations form the design matrix for the RBF. The matrix n2 contains the squared distances between each basis function centre and each pattern; each row corresponds to a data point.
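As a sketch of how the outputs relate (this assumes the basis function centres are stored in the field net.c, as in Netlab's rbf data structure), n2 can be reproduced with Netlab's dist2 function:

[a, z, n2] = rbffwd(net, x);
% n2(i, j) is the squared distance from pattern i to centre j,
% which matches dist2 applied to the inputs and the centres.
n2check = dist2(x, net.c);
max(abs(n2(:) - n2check(:)))      % should be zero up to rounding error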
Example

The following fragment uses the design matrix z and the pseudo-inverse to set the output layer weights and biases:

[a, z] = rbffwd(net, x);
nd = size(z);
temp = pinv([z ones(size(x, 1), 1)]) * t;
net.w2 = temp(1:nd(2), :);
net.b2 = temp(nd(2) + 1, :);
Here x is the input data and t the matrix of target values; the pseudo-inverse gives the least-squares solution for the output weights and biases.
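As an alternative sketch, the same least-squares fit can be written with MATLAB's backslash operator; pinv differs only in also returning the minimum-norm solution when the design matrix is rank-deficient:

Phi = [z ones(size(x, 1), 1)];    % design matrix with a bias column
temp = Phi \ t;                   % solves min ||Phi*temp - t||^2
net.w2 = temp(1:size(z, 2), :);
net.b2 = temp(size(z, 2) + 1, :);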
See Also

rbf, rbferr, rbfgrad, rbfpak, rbftrain, rbfunpak
Copyright (c) Ian T Nabney (1996-9)