toolboxes/FullBNT-1.0.7/nethelp3.3/mlpbkp.htm @ 0:e9a9cd732c1e (tip)
first hg version after svn
author: wolffd
date: Tue, 10 Feb 2015 15:05:51 +0000
<html>
<head>
<title>Netlab Reference Manual mlpbkp</title>
</head>
<body>
<H1> mlpbkp </H1>
<h2>
Purpose
</h2>
Backpropagate the gradient of the error function for a 2-layer network.

<p><h2>
Synopsis
</h2>
<PRE>
g = mlpbkp(net, x, z, deltas)</PRE>

<p><h2>
Description
</h2>
<CODE>g = mlpbkp(net, x, z, deltas)</CODE> takes a network data structure
<CODE>net</CODE> together with a matrix <CODE>x</CODE> of input vectors, a
matrix <CODE>z</CODE> of hidden unit activations, and a matrix
<CODE>deltas</CODE> of the gradient of the error function with respect to
the values of the output units (that is, the summed inputs to the output
units, before the output activation function is applied). Each row of
<CODE>x</CODE> corresponds to one input vector. The return value is the
gradient <CODE>g</CODE> of the error function with respect to the network
weights.

<p>This function is provided so that the common backpropagation algorithm
can be reused: multi-layer perceptron models call it to compute gradients
both for standard error functions and for mixture density networks.

<p><h2>
See Also
</h2>
<CODE><a href="mlp.htm">mlp</a></CODE>, <CODE><a href="mlpgrad.htm">mlpgrad</a></CODE>, <CODE><a href="mlpderiv.htm">mlpderiv</a></CODE>, <CODE><a href="mdngrad.htm">mdngrad</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)
</body>
</html>
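The computation the Description outlines can be sketched as follows. This is a NumPy rendering, not the Netlab MATLAB source: it assumes tanh hidden units (Netlab's default for <CODE>mlp</CODE>), and the weight arrays <CODE>w1</CODE>, <CODE>b1</CODE>, <CODE>w2</CODE>, <CODE>b2</CODE> are illustrative stand-ins for the fields of the <CODE>net</CODE> structure; the actual function packs the gradients into a single vector <CODE>g</CODE>.

```python
import numpy as np

def mlpbkp_sketch(w1, b1, w2, b2, x, z, deltas):
    """Backpropagate output-unit deltas through a 2-layer MLP.

    x      : (n, nin)   input vectors, one per row
    z      : (n, nhid)  hidden unit activations (assumed tanh)
    deltas : (n, nout)  dE/d(summed inputs to the output units)
    Returns gradients w.r.t. first/second-layer weights and biases.
    """
    # Second-layer gradients: hidden activations times output deltas.
    gw2 = z.T @ deltas
    gb2 = deltas.sum(axis=0)
    # Propagate deltas back to the hidden layer; tanh'(a) = 1 - z**2.
    delhid = (deltas @ w2.T) * (1.0 - z**2)
    # First-layer gradients.
    gw1 = x.T @ delhid
    gb1 = delhid.sum(axis=0)
    return gw1, gb1, gw2, gb2
```

For a sum-of-squares error with linear outputs, <CODE>deltas</CODE> is simply the matrix of output errors <CODE>y - t</CODE>; a mixture density network supplies its own deltas, which is exactly why the backward pass is factored out this way.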