rbfgrad

Purpose
Evaluate gradient of error function for RBF network.

Synopsis
g = rbfgrad(net, x, t)
[g, gdata, gprior] = rbfgrad(net, x, t)

Description
g = rbfgrad(net, x, t)
takes a network data structure net together with a matrix x of input
vectors and a matrix t of target vectors, and evaluates the gradient g
of the error function with respect to the network weights (i.e.
including the hidden unit parameters). The error function is sum of
squares. Each row of x corresponds to one input vector and each row of
t contains the corresponding target vector. If the output function is
'neuroscale' then the gradient is only computed for the output layer
weights and biases.

[g, gdata, gprior] = rbfgrad(net, x, t) also returns separately the
data and prior contributions to the gradient. In the case of multiple
groups in the prior, gprior is a matrix with a row for each group and
a column for each weight parameter.
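To illustrate the shape of gprior with multiple groups, here is a hedged NumPy sketch of a grouped Gaussian (weight-decay) prior. The names alphas and masks are illustrative only and do not match Netlab's internal prior structure; for such a prior each group contributes alpha_k * w on the weights it covers:

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0, 0.25])           # packed weight vector
alphas = np.array([0.1, 0.01])                 # one hyperparameter per group
masks = np.array([[1, 1, 0, 0],                # group 1 covers first two weights
                  [0, 0, 1, 1]], dtype=float)  # group 2 covers last two weights

# gprior: one row per prior group, one column per weight parameter
gprior = alphas[:, None] * masks * w[None, :]

gdata = np.zeros_like(w)            # stand-in for the data gradient
g = gdata + gprior.sum(axis=0)      # total gradient sums the group rows
```

Summing gprior over its rows recovers the single prior contribution that is added to gdata to form g.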

See Also
rbf, rbffwd, rbferr, rbfpak, rbfunpak, rbfbkp
Copyright (c) Ian T Nabney (1996-9)