h = glmhess(net, x, t)
[h, hdata] = glmhess(net, x, t)
h = glmhess(net, x, t, hdata)

h = glmhess(net, x, t) takes a GLM network data structure net, a matrix x of input values, and a matrix t of target values, and returns the full Hessian matrix h corresponding to the second derivatives of the negative log posterior distribution, evaluated for the current weight and bias values defined by net. Note that the target data is not required in the calculation, but is included to make the interface uniform with nethess. For linear and logistic outputs the computation is very simple, and is done (in effect) in one line in glmtrain.
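For the linear-output case mentioned above, the structure is easy to see: the data term of the Hessian is the outer product of the bias-augmented inputs, and the Gaussian weight prior adds a diagonal term. A minimal numpy sketch under those assumptions (a single-output linear GLM; `alpha` and `beta` are illustrative names for the prior precision and inverse noise variance, not Netlab's API):

```python
import numpy as np

def glm_hessian_linear(x, alpha=0.01, beta=1.0):
    """Hessian of the negative log posterior for a linear-output GLM.

    For linear outputs the data term is Phi^T Phi, where Phi is the input
    matrix augmented with a column of ones for the bias; a Gaussian prior
    with precision alpha contributes alpha * I.
    """
    n = x.shape[0]
    phi = np.hstack([x, np.ones((n, 1))])  # append bias column
    hdata = phi.T @ phi                    # data-dependent term (unscaled)
    h = beta * hdata + alpha * np.eye(phi.shape[1])
    return h, hdata
```

Note that, as remarked above, the target values play no role in this computation.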

[h, hdata] = glmhess(net, x, t) returns both the Hessian matrix h and the contribution hdata arising from the data-dependent term in the Hessian.

h = glmhess(net, x, t, hdata) takes a network data structure net, a matrix x of input values, and a matrix t of target values, together with the contribution hdata arising from the data-dependent term in the Hessian, and returns the full Hessian matrix h corresponding to the second derivatives of the negative log posterior distribution. This version saves computation time if hdata has already been evaluated for the current weight and bias values.
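The time saving comes from splitting the Hessian into a fixed data-dependent term and a cheap prior term, so the expensive part can be computed once and reused. A hedged numpy sketch of that caching pattern, again for a hypothetical single-output linear GLM with bias (`alpha` and `beta` are illustrative names for the prior precision and inverse noise variance):

```python
import numpy as np

def hess_data_term(x):
    """Data-dependent Hessian contribution: Phi^T Phi with a bias column."""
    phi = np.hstack([x, np.ones((x.shape[0], 1))])
    return phi.T @ phi

def full_hessian(hdata, alpha, beta):
    """Combine a cached data term with the prior term alpha * I."""
    return beta * hdata + alpha * np.eye(hdata.shape[0])

# Evaluate hdata once, then rebuild the full Hessian cheaply while the
# regularisation parameter alpha is varied.
x = np.array([[0.0], [1.0], [2.0]])
hdata = hess_data_term(x)
hessians = [full_hessian(hdata, a, beta=1.0) for a in (0.01, 0.1, 1.0)]
```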

This function is used by glmtrain to take a Newton step for softmax outputs:

Hessian = glmhess(net, x, t);
deltaw = -gradient*pinv(Hessian);
See also: glm, glmtrain, hesschek, nethess

Copyright (c) Ian T Nabney (1996-9)