glmhess

Purpose
Evaluate the Hessian matrix for a generalised linear model.

Synopsis
h = glmhess(net, x, t)
[h, hdata] = glmhess(net, x, t)
h = glmhess(net, x, t, hdata)

Description
h = glmhess(net, x, t) takes a GLM network data structure net,
a matrix x of input values, and a matrix t of target values,
and returns the full Hessian matrix h corresponding to the second
derivatives of the negative log posterior distribution, evaluated
for the current weight and bias values as defined by net. Note
that the target data is not required in the calculation, but is
included to make the interface uniform with nethess. For linear
and logistic outputs, the computation is very simple and is done
(in effect) in one line in glmtrain.
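
As a minimal sketch of a direct call (the network sizes and random
data below are illustrative assumptions, not part of the Netlab
distribution):

net = glm(2, 3, 'softmax');        % 2 inputs, 3 output classes
x = randn(10, 2);                  % 10 random input vectors
id = eye(3);
t = id(ceil(3*rand(10, 1)), :);    % random 1-of-N target coding
h = glmhess(net, x, t);            % full Hessian over all parameters
% nwts = (2 + 1)*3 = 9 weights and biases, so h is 9 x 9.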

[h, hdata] = glmhess(net, x, t) returns both the Hessian matrix
h and the contribution hdata arising from the data-dependent
term in the Hessian.

h = glmhess(net, x, t, hdata) takes a network data structure
net, a matrix x of input values, and a matrix t of target
values, together with the contribution hdata arising from the
data-dependent term in the Hessian, and returns the full Hessian
matrix h corresponding to the second derivatives of the negative
log posterior distribution. This version saves computation time if
hdata has already been evaluated for the current weight and bias
values.
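
A typical use is re-evaluating the Hessian after changing only the
prior term. A sketch, assuming the network was created with a
weight-decay prior (the hyperparameter values here are arbitrary):

net = glm(2, 3, 'softmax', 0.01);  % GLM with weight-decay prior alpha
[h, hdata] = glmhess(net, x, t);   % evaluate once, keep the data term
net.alpha = 0.1;                   % change only the prior hyperparameter
h = glmhess(net, x, t, hdata);     % full Hessian without re-summing over x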

Example
This function is used by glmtrain to take a Newton step for
softmax outputs:

Hessian = glmhess(net, x, t);
deltaw = -gradient*pinv(Hessian);
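
Here gradient stands for the row vector of error derivatives with
respect to the weights (as returned, for example, by glmgrad). The
pseudo-inverse pinv is used rather than inv because the softmax
parametrisation is redundant, so the Hessian is singular; pinv keeps
the Newton step well defined in that case.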

See Also
glm, glmtrain, hesschek, nethess

Copyright (c) Ian T Nabney (1996-9)