Netlab Reference Manual rbfgrad

rbfgrad


Purpose

Evaluate gradient of error function for RBF network.

Synopsis

g = rbfgrad(net, x, t)
[g, gdata, gprior] = rbfgrad(net, x, t)

Description

g = rbfgrad(net, x, t) takes a network data structure net together with a matrix x of input vectors and a matrix t of target vectors, and evaluates the gradient g of the error function with respect to the network weights (i.e. including the hidden unit parameters). The error function is the sum of squares. Each row of x corresponds to one input vector and each row of t contains the corresponding target vector. If the output function is 'neuroscale', then the gradient is computed only for the output layer weights and biases.

[g, gdata, gprior] = rbfgrad(net, x, t) also returns separately the data and prior contributions to the gradient. In the case of multiple groups in the prior, gprior is a matrix with a row for each group and a column for each weight parameter.
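A minimal usage sketch in MATLAB, assuming a Gaussian RBF network created with the Netlab rbf function; the network dimensions and random data below are purely illustrative.

```matlab
% Illustrative network: 2 inputs, 5 hidden units, 1 output (assumed sizes)
net = rbf(2, 5, 1, 'gaussian');

x = randn(20, 2);   % 20 input vectors, one per row
t = randn(20, 1);   % corresponding target vectors, one per row

% Gradient of the sum-of-squares error with respect to all network weights
g = rbfgrad(net, x, t);

% Data and prior contributions to the gradient, returned separately
[g, gdata, gprior] = rbfgrad(net, x, t);
```

A gradient vector such as g is typically passed to one of the Netlab optimisers (e.g. via netopt) rather than used directly.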

See Also

rbf, rbffwd, rbferr, rbfpak, rbfunpak, rbfbkp

Copyright (c) Ian T Nabney (1996-9)