<!-- Mercurial annotate view of toolboxes/FullBNT-1.0.7/nethelp3.3/mlphess.htm @ 0:cc4b1211e677; author Daniel Wolff, Fri, 19 Aug 2016 13:07:06 +0200 -->
<html>
<head>
<title>
Netlab Reference Manual mlphess
</title>
</head>
<body>
<H1> mlphess
</H1>
<h2>
Purpose
</h2>
Evaluate the Hessian matrix for a multi-layer perceptron network.

<p><h2>
Synopsis
</h2>
<PRE>
h = mlphess(net, x, t)
[h, hdata] = mlphess(net, x, t)
h = mlphess(net, x, t, hdata)
</PRE>


<p><h2>
Description
</h2>
<CODE>h = mlphess(net, x, t)</CODE> takes an MLP network data structure <CODE>net</CODE>,
a matrix <CODE>x</CODE> of input values, and a matrix <CODE>t</CODE> of target
values and returns the full Hessian matrix <CODE>h</CODE> corresponding to
the second derivatives of the negative log posterior distribution,
evaluated for the current weight and bias values as defined by
<CODE>net</CODE>.

<p><CODE>[h, hdata] = mlphess(net, x, t)</CODE> returns both the Hessian matrix
<CODE>h</CODE> and the contribution <CODE>hdata</CODE> arising from the data dependent
term in the Hessian.

<p><CODE>h = mlphess(net, x, t, hdata)</CODE> takes a network data structure
<CODE>net</CODE>, a matrix <CODE>x</CODE> of input values, and a matrix <CODE>t</CODE> of
target values, together with the contribution <CODE>hdata</CODE> arising from
the data dependent term in the Hessian, and returns the full Hessian
matrix <CODE>h</CODE> corresponding to the second derivatives of the negative
log posterior distribution. This version saves computation time if
<CODE>hdata</CODE> has already been evaluated for the current weight and bias
values.
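
<p>As an illustrative sketch (not part of the original manual), the
three calling forms might be combined as follows; the network sizes
and the random data here are hypothetical:
<PRE>

% Hypothetical example: 2 inputs, 3 hidden units, 1 linear output
net = mlp(2, 3, 1, 'linear');
x = randn(10, 2);   % 10 input patterns
t = randn(10, 1);   % corresponding target values

% Full Hessian of the negative log posterior
h = mlphess(net, x, t);

% Also keep the data dependent contribution for later reuse
[h, hdata] = mlphess(net, x, t);

% Reuse hdata to avoid recomputing the data dependent term;
% valid while the weights and biases in net are unchanged
h2 = mlphess(net, x, t, hdata);
</PRE>
The fourth-argument form is the one used when only hyperparameters
change between calls, since the data dependent term depends only on
the weights and the data.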

<p><h2>
Example
</h2>
For the standard regression framework with a Gaussian conditional
distribution of target values given input values, and a simple
Gaussian prior over weights, the Hessian takes the form
<PRE>

h = beta*hd + alpha*I
</PRE>

where the contribution <CODE>hd</CODE> is evaluated by calls to <CODE>mlphdotv</CODE> and
<CODE>h</CODE> is the full Hessian.
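
<p>This decomposition can be checked along the following lines; this
is a sketch assuming a scalar weight prior <CODE>alpha</CODE> and noise
precision <CODE>beta</CODE> stored in the network structure, with the
values below chosen arbitrarily:
<PRE>

% Hypothetical network with alpha = 0.01 and beta = 10
net = mlp(2, 3, 1, 'linear', 0.01, 10);
x = randn(10, 2);
t = randn(10, 1);

[h, hdata] = mlphess(net, x, t);

% For a scalar Gaussian prior, the full Hessian should equal the
% beta-scaled data term plus alpha on the diagonal
hcheck = net.beta*hdata + net.alpha*eye(net.nwts);
</PRE>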

<p><h2>
See Also
</h2>
<CODE><a href="mlp.htm">mlp</a></CODE>, <CODE><a href="hesschek.htm">hesschek</a></CODE>, <CODE><a href="mlphdotv.htm">mlphdotv</a></CODE>, <CODE><a href="evidence.htm">evidence</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>