toolboxes/FullBNT-1.0.7/netlab3.3/rbfgrad.m
function [g, gdata, gprior] = rbfgrad(net, x, t)
%RBFGRAD Evaluate gradient of error function for RBF network.
%
% Description
% G = RBFGRAD(NET, X, T) takes a network data structure NET together
% with a matrix X of input vectors and a matrix T of target vectors,
% and evaluates the gradient G of the error function with respect to
% the network weights (i.e. including the hidden unit parameters). The
% error function is the sum of squares. Each row of X corresponds to
% one input vector and each row of T contains the corresponding target
% vector. If the output function is 'NEUROSCALE' then the gradient is
% only computed for the output layer weights and biases.
%
% [G, GDATA, GPRIOR] = RBFGRAD(NET, X, T) also returns separately the
% data and prior contributions to the gradient. In the case of multiple
% groups in the prior, GPRIOR is a matrix with a row for each group and
% a column for each weight parameter.
%
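% Example
% An illustrative sketch (not part of the original Netlab help),
% assuming a small RBF network built with the standard Netlab RBF
% constructor and random data:
%
%   net = rbf(2, 5, 1, 'gaussian');  % 2 inputs, 5 hidden units, 1 output
%   x = randn(20, 2);                % 20 input vectors
%   t = randn(20, 1);                % corresponding targets
%   g = rbfgrad(net, x, t);          % gradient w.r.t. all weights
%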
% See also
% RBF, RBFFWD, RBFERR, RBFPAK, RBFUNPAK, RBFBKP
%

% Copyright (c) Ian T Nabney (1996-2001)

% Check arguments for consistency
switch net.outfn
  case 'linear'
    errstring = consist(net, 'rbf', x, t);
  case 'neuroscale'
    errstring = consist(net, 'rbf', x);
  otherwise
    error(['Unknown output function ', net.outfn]);
end
if ~isempty(errstring)
  error(errstring);
end

ndata = size(x, 1);

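% Forward propagate: y holds the network outputs, z the hidden unit
% activations (the design matrix for the RBF) and n2 the squared
% distances between inputs and centres, as returned by RBFFWD.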
[y, z, n2] = rbffwd(net, x);

switch net.outfn
  case 'linear'

    % Sum squared error at output units
    delout = y - t;

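    % Backpropagate the output deltas through the network to get the
    % data-dependent gradient; GBAYES then folds in the gradient of
    % any prior (regularisation) term.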
    gdata = rbfbkp(net, x, z, n2, delout);
    [g, gdata, gprior] = gbayes(net, gdata);

  case 'neuroscale'
    % Compute the error gradient with respect to outputs
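    % NEUROSCALE minimises a Sammon-like stress between the target
    % inter-point distances T and the distances between the network
    % outputs. Here D(i,j) = (t_ij - d_ij)/d_ij, where d_ij is the
    % distance between outputs i and j; adding the identity matrix to
    % the denominator avoids division by zero on the diagonal, where
    % the numerator is also zero (so D(i,i) = 0). The kron/repmat
    % expression below forms D(i,j)*(y_i - y_j) for every pair of
    % points and sums these terms to give the derivative of the
    % stress with respect to each output vector.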
    y_dist = sqrt(dist2(y, y));
    D = (t - y_dist)./(y_dist + diag(ones(ndata, 1)));
    temp = y';
    gradient = 2.*sum(kron(D, ones(1, net.nout)) .* ...
      (repmat(y, 1, ndata) - repmat((temp(:))', ndata, 1)), 1);
    gradient = (reshape(gradient, net.nout, ndata))';
    % Compute the error gradient
    gdata = rbfbkp(net, x, z, n2, gradient);
    [g, gdata, gprior] = gbayes(net, gdata);
  otherwise
    error(['Unknown output function ', net.outfn]);
end