g = mlpbkp(net, x, z, deltas)

g = mlpbkp(net, x, z, deltas) takes a network data structure net together with a matrix x of input vectors, a matrix z of hidden unit activations, and a matrix deltas of the gradient of the error function with respect to the values of the output units (i.e. the summed inputs to the output units, before the activation function is applied). The return value is the gradient g of the error function with respect to the network weights. Each row of x corresponds to one input vector.
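
As an illustration (not part of the original manual text), the sketch below shows how mlpbkp might be combined with a forward pass to compute a weight gradient by hand, which is essentially what mlpgrad does for the canonical pairing of a sum-of-squares error with linear outputs. The network sizes and random data are purely illustrative, and mlpfwd is assumed to return the hidden unit activations as its second output.

net = mlp(2, 3, 1, 'linear');   % 2 inputs, 3 hidden units, one linear output
x = randn(10, 2);               % 10 illustrative input vectors, one per row
t = randn(10, 1);               % matching target values

[y, z] = mlpfwd(net, x);        % forward pass: outputs y and hidden unit activations z
deltas = y - t;                 % error gradient w.r.t. the summed inputs to the output
                                % units (sum-of-squares error with linear outputs)
g = mlpbkp(net, x, z, deltas);  % gradient of the error w.r.t. all network weights

Supplying a different deltas matrix, computed from another error function, yields the corresponding weight gradient without changing the backpropagation step itself.
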
This function is provided so that the common backpropagation algorithm can be used by multi-layer perceptron network models to compute gradients for mixture density networks as well as standard error functions.

See also: mlp, mlpgrad, mlpderiv, mdngrad

Copyright (c) Ian T Nabney (1996-9)