Netlab Reference Manual mlpbkp

mlpbkp


Purpose

Backpropagate gradient of error function for 2-layer network.

Synopsis

g = mlpbkp(net, x, z, deltas)

Description

g = mlpbkp(net, x, z, deltas) takes a network data structure net together with a matrix x of input vectors, a matrix z of hidden unit activations, and a matrix deltas of the gradient of the error function with respect to the values of the output units (i.e. the summed inputs to the output units, before the activation function is applied). The return value is the gradient g of the error function with respect to the network weights. Each row of x corresponds to one input vector.
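To make the computation concrete, here is a minimal NumPy sketch of the backpropagation pass described above, for a 2-layer network with tanh hidden units and linear outputs. This is an illustration, not Netlab's actual MATLAB code: the function name mlpbkp_sketch is hypothetical, and the assumed parameter packing order [w1, b1, w2, b2] mirrors what I believe Netlab's convention to be.

```python
import numpy as np

def mlpbkp_sketch(w1, b1, w2, b2, x, z, deltas):
    """Backpropagate output-unit deltas through a 2-layer MLP.

    x      : (n, d_in)   input vectors, one per row
    z      : (n, d_hid)  hidden unit activations, z = tanh(x @ w1 + b1)
    deltas : (n, d_out)  dE/d(summed inputs to the output units)

    Returns the error gradient w.r.t. all weights, packed into one
    vector in the (assumed) order [w1, b1, w2, b2].
    """
    # Second-layer gradients follow directly from the output deltas.
    gw2 = z.T @ deltas                        # (d_hid, d_out)
    gb2 = deltas.sum(axis=0)                  # (d_out,)
    # Backpropagate through the tanh hidden units: tanh'(a) = 1 - z**2.
    delhid = (deltas @ w2.T) * (1.0 - z**2)   # (n, d_hid)
    gw1 = x.T @ delhid                        # (d_in, d_hid)
    gb1 = delhid.sum(axis=0)                  # (d_hid,)
    return np.concatenate([gw1.ravel(), gb1, gw2.ravel(), gb2])
```

Because the routine only consumes deltas, any error function can be plugged in by supplying the appropriate output-unit derivatives; for a sum-of-squares error with linear outputs, deltas is simply y - t.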

This function is provided so that the common backpropagation algorithm can be used by multi-layer perceptron network models to compute gradients for mixture density networks as well as standard error functions.

See Also

mlp, mlpgrad, mlpderiv, mdngrad

Copyright (c) Ian T Nabney (1996-9)