Netlab Reference Manual rbfgrad

rbfgrad


Purpose

Evaluate gradient of error function for RBF network.

Synopsis

g = rbfgrad(net, x, t)
[g, gdata, gprior] = rbfgrad(net, x, t)

Description

g = rbfgrad(net, x, t) takes a network data structure net together with a matrix x of input vectors and a matrix t of target vectors, and evaluates the gradient g of the error function with respect to the network weights (i.e. including the hidden unit parameters). The error function is the sum of squares. Each row of x corresponds to one input vector, and each row of t contains the corresponding target vector. If the output function is 'neuroscale', the gradient is computed only for the output layer weights and biases.

[g, gdata, gprior] = rbfgrad(net, x, t) also returns separately the data and prior contributions to the gradient. In the case of multiple groups in the prior, gprior is a matrix with a row for each group and a column for each weight parameter.
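The description above can be sketched as a short usage example. This is a minimal, hypothetical workflow: the data and network sizes are made up for illustration, but the calls to rbf and rbfgrad follow their standard Netlab signatures.

```matlab
% Small synthetic dataset: each row of x is an input vector,
% each row of t the corresponding target.
x = randn(20, 2);              % 20 input vectors in 2 dimensions
t = sin(x(:,1)) + x(:,2);      % 20 scalar targets

% Create an RBF network: 2 inputs, 5 hidden units, 1 output,
% Gaussian basis functions.
net = rbf(2, 5, 1, 'gaussian');

% Gradient of the sum-of-squares error w.r.t. all network weights,
% including the hidden unit parameters.
g = rbfgrad(net, x, t);

% Data and prior contributions returned separately; gprior is zero
% when no prior has been set on the network.
[g, gdata, gprior] = rbfgrad(net, x, t);
```

Such gradients are typically passed to a Netlab optimiser (e.g. via netopt) rather than used directly.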

See Also

rbf, rbffwd, rbferr, rbfpak, rbfunpak, rbfbkp

Copyright (c) Ian T Nabney (1996-9)