annotate toolboxes/FullBNT-1.0.7/netlab3.3/mlpbkp.m @ 0:e9a9cd732c1e tip

first hg version after svn
author wolffd
date Tue, 10 Feb 2015 15:05:51 +0000
function g = mlpbkp(net, x, z, deltas)
%MLPBKP Backpropagate gradient of error function for 2-layer network.
%
% Description
% G = MLPBKP(NET, X, Z, DELTAS) takes a network data structure NET
% together with a matrix X of input vectors, a matrix Z of hidden unit
% activations, and a matrix DELTAS of the gradient of the error
% function with respect to the values of the output units (i.e. the
% summed inputs to the output units, before the activation function is
% applied). The return value is the gradient G of the error function
% with respect to the network weights. Each row of X corresponds to one
% input vector.
%
% This function is provided so that the common backpropagation
% algorithm can be used by multi-layer perceptron network models to
% compute gradients for mixture density networks as well as standard
% error functions.
%
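% Example
% A minimal sketch (not part of the original help): for a network with
% linear outputs trained on a sum-of-squares error, the output deltas
% are simply Y - T, so one gradient evaluation looks like
%
%	net = mlp(2, 3, 1, 'linear');
%	x = randn(5, 2);		% five 2-dimensional inputs
%	t = randn(5, 1);		% matching targets
%	[y, z] = mlpfwd(net, x);	% forward pass gives hidden activations Z
%	g = mlpbkp(net, x, z, y - t);
%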
% See also
% MLP, MLPGRAD, MLPDERIV, MDNGRAD
%

% Copyright (c) Ian T Nabney (1996-2001)

% Evaluate the second-layer gradients: the weight gradient is the outer
% product of the hidden activations and the output deltas, summed over
% patterns, and the bias gradient is the column sum of the deltas.
gw2 = z'*deltas;
gb2 = sum(deltas, 1);

% Now do the backpropagation: pass the deltas back through the
% second-layer weights, then multiply by the derivative of the
% hidden-unit activation function (the hidden units are tanh, so
% dz/da = 1 - z.^2).
delhid = deltas*net.w2';
delhid = delhid.*(1.0 - z.*z);

% Finally, evaluate the first-layer gradients.
gw1 = x'*delhid;
gb1 = sum(delhid, 1);

% Pack the gradients into a single row vector, ordered to match the
% weight vector produced by MLPPAK.
g = [gw1(:)', gb1, gw2(:)', gb2];
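
% A quick sanity check (a sketch, not part of the original file; it
% assumes the standard Netlab helpers MLPPAK, NETERR, NETGRAD and
% GRADCHEK are on the path): compare the analytic gradient against
% central differences with
%
%	net = mlp(2, 3, 1, 'linear');
%	x = randn(5, 2); t = randn(5, 1);
%	gradchek(mlppak(net), 'neterr', 'netgrad', net, x, t);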