glmgrad

Synopsis

	g = glmgrad(net, x, t)
	[g, gdata, gprior] = glmgrad(net, x, t)

Description
g = glmgrad(net, x, t) takes a generalized linear model data structure
net together with a matrix x of input vectors and a matrix t of target
vectors, and evaluates the gradient g of the error function with
respect to the network weights. The error function corresponds to the
choice of output unit activation function. Each row of x corresponds
to one input vector and each row of t corresponds to one target vector.
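For example, a minimal sketch of a typical call; the network
architecture, output function and random data below are illustrative
assumptions rather than part of glmgrad itself:

	net = glm(2, 1, 'linear');	% GLM with 2 inputs, 1 linear output
	x = randn(20, 2);		% 20 input vectors, one per row
	t = randn(20, 1);		% 20 target vectors, one per row
	g = glmgrad(net, x, t);		% gradient of the error function
					% with respect to the weights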
[g, gdata, gprior] = glmgrad(net, x, t) also returns separately the
data and prior contributions to the gradient.
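A sketch of the three-output form, assuming the network was created
with a simple scalar weight-decay prior (the glm call and the value of
alpha are illustrative):

	alpha = 0.01;				% assumed weight-decay coefficient
	net = glm(2, 1, 'linear', alpha);	% GLM with a Gaussian weight prior
	x = randn(20, 2);
	t = randn(20, 1);
	[g, gdata, gprior] = glmgrad(net, x, t);
	% gdata is the data contribution and gprior the prior contribution;
	% g is the total gradient combining the two, weighted by the
	% hyperparameters stored in net.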
See also

	glm, glmpak, glmunpak, glmfwd, glmerr, glmtrain

Copyright (c) Ian T Nabney (1996-9)