function g = rbfbkp(net, x, z, n2, deltas)
%RBFBKP Backpropagate gradient of error function for RBF network.
%
% Description
% G = RBFBKP(NET, X, Z, N2, DELTAS) takes a network data structure NET
% together with a matrix X of input vectors, a matrix Z of hidden unit
% activations, a matrix N2 of the squared distances between centres and
% inputs, and a matrix DELTAS of the gradient of the error function
% with respect to the values of the output units (i.e. the summed
% inputs to the output units, before the activation function is
% applied). The return value is the gradient G of the error function
% with respect to the network weights. Each row of X corresponds to one
% input vector.
%
% This function is provided so that the common backpropagation
% algorithm can be used by RBF network models to compute gradients for
% the output values (in RBFDERIV) as well as standard error functions.
%
% See also
% RBF, RBFGRAD, RBFDERIV
%

% Copyright (c) Ian T Nabney (1996-2001)

% Evaluate second-layer gradients.
gw2 = z'*deltas;
gb2 = sum(deltas);

% Evaluate hidden unit gradients.
delhid = deltas*net.w2';

gc = zeros(net.nhidden, net.nin);
ndata = size(x, 1);
t1 = ones(ndata, 1);
t2 = ones(1, net.nin);
% Switch on activation function type
switch net.actfn

  case 'gaussian'   % Gaussian
    delhid = delhid.*z;
    % A loop seems essential, so do it with the shortest index vector
    if (net.nin < net.nhidden)
      for i = 1:net.nin
        gc(:,i) = (sum(((x(:,i)*ones(1, net.nhidden)) - ...
          (ones(ndata, 1)*(net.c(:,i)'))).*delhid, 1)./net.wi)';
      end
    else
      for i = 1:net.nhidden
        % The whole difference (x - c) is divided by the squared width,
        % consistent with the vectorised branch above.
        gc(i,:) = sum(((x - t1*net.c(i,:))./net.wi(i)).*(delhid(:,i)*t2), 1);
      end
    end
    gwi = sum((n2.*delhid)./(2.*(ones(ndata, 1)*(net.wi.^2))), 1);

  case 'tps'        % Thin plate spline activation function
    delhid = delhid.*(1 + log(n2 + (n2==0)));
    for i = 1:net.nhidden
      gc(i,:) = sum(2.*(t1*net.c(i,:) - x).*(delhid(:,i)*t2), 1);
    end
    % Widths are not adjustable in this model
    gwi = [];

  case 'r4logr'     % r^4 log r activation function
    delhid = delhid.*(n2.*(1 + 2.*log(n2 + (n2==0))));
    for i = 1:net.nhidden
      gc(i,:) = sum(2.*(t1*net.c(i,:) - x).*(delhid(:,i)*t2), 1);
    end
    % Widths are not adjustable in this model
    gwi = [];

  otherwise
    error('Unknown activation function in rbfbkp')
end

g = [gc(:)', gwi, gw2(:)', gb2];
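
% Usage sketch (not part of the original file): in normal use RBFGRAD
% computes DELTAS from the error function and calls this routine, but the
% gradient can also be driven by hand. The example below assumes the
% standard Netlab functions RBF and RBFFWD with their documented
% signatures; the data and targets are arbitrary illustrations.
%
%   net = rbf(2, 5, 1, 'gaussian');      % 2 inputs, 5 centres, 1 output
%   x = randn(10, 2);                    % 10 input vectors, one per row
%   t = randn(10, 1);                    % matching target values
%   [y, z, n2] = rbffwd(net, x);         % outputs, activations, sq. distances
%   deltas = y - t;                      % sum-of-squares error deltas
%   g = rbfbkp(net, x, z, n2, deltas);   % gradient w.r.t. all weights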