function [g, gdata, gprior] = rbfgrad(net, x, t)
%RBFGRAD Evaluate gradient of error function for RBF network.
%
%	Description
%	G = RBFGRAD(NET, X, T) takes a network data structure NET together
%	with a matrix X of input vectors and a matrix T of target vectors,
%	and evaluates the gradient G of the error function with respect to
%	the network weights (i.e. including the hidden unit parameters). The
%	error function is sum of squares. Each row of X corresponds to one
%	input vector and each row of T contains the corresponding target
%	vector. If the output function is 'NEUROSCALE' then the gradient is
%	only computed for the output layer weights and biases.
%
%	[G, GDATA, GPRIOR] = RBFGRAD(NET, X, T) also returns separately the
%	data and prior contributions to the gradient. In the case of multiple
%	groups in the prior, GPRIOR is a matrix with a row for each group and
%	a column for each weight parameter.
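%
%	Example
%	A minimal sketch of typical use (assumes the Netlab functions RBF
%	and RBFGRAD are on the path; the network sizes and data here are
%	illustrative, not taken from this file):
%
%	   net = rbf(2, 5, 1, 'gaussian');   % 2 inputs, 5 hidden units, 1 output
%	   x = randn(20, 2);                 % 20 input vectors
%	   t = randn(20, 1);                 % corresponding targets
%	   g = rbfgrad(net, x, t);           % gradient w.r.t. all weights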
%
%	See also
%	RBF, RBFFWD, RBFERR, RBFPAK, RBFUNPAK, RBFBKP
%

%	Copyright (c) Ian T Nabney (1996-2001)

% Check arguments for consistency
switch net.outfn
  case 'linear'
    errstring = consist(net, 'rbf', x, t);
  case 'neuroscale'
    errstring = consist(net, 'rbf', x);
  otherwise
    error(['Unknown output function ', net.outfn]);
end
if ~isempty(errstring)
  error(errstring);
end

ndata = size(x, 1);

% Forward propagate to get outputs Y, hidden unit activations Z and
% squared distances N2 to the RBF centres
[y, z, n2] = rbffwd(net, x);

switch net.outfn
  case 'linear'
    % Sum-of-squares error: delta at the output units
    delout = y - t;

    gdata = rbfbkp(net, x, z, n2, delout);
    [g, gdata, gprior] = gbayes(net, gdata);

  case 'neuroscale'
    % Gradient of the stress with respect to the network outputs.
    % T holds the target inter-point distances and Y_DIST the distances
    % between the projected points; the identity matrix added to the
    % denominator avoids division by zero on the diagonal.
    y_dist = sqrt(dist2(y, y));
    D = (t - y_dist)./(y_dist + diag(ones(ndata, 1)));
    temp = y';
    gradient = 2.*sum(kron(D, ones(1, net.nout)) .* ...
      (repmat(y, 1, ndata) - repmat((temp(:))', ndata, 1)), 1);
    gradient = (reshape(gradient, net.nout, ndata))';
    % Back-propagate the output-space gradient through the network
    gdata = rbfbkp(net, x, z, n2, gradient);
    [g, gdata, gprior] = gbayes(net, gdata);
  otherwise
    error(['Unknown output function ', net.outfn]);
end