function [extra, invhess] = fevbayes(net, y, a, x, t, x_test, invhess)
%FEVBAYES Evaluate Bayesian regularisation for network forward propagation.
%
% Description
% EXTRA = FEVBAYES(NET, Y, A, X, T, X_TEST) takes a network data
% structure NET, the network outputs Y and corresponding output unit
% activations A computed from the test inputs X_TEST, and the training
% inputs X and targets T, and returns a matrix of extra information
% EXTRA that consists of error bars (variance) for a regression
% problem or moderated outputs for a classification problem. The
% optional argument (and return value) INVHESS is the inverse of the
% network Hessian computed on the training data inputs and targets.
% Passing it in avoids recomputing it, which can be a significant
% saving for large training sets.
%
% This function is called by network-specific functions such as
% MLPEVFWD, which are needed because different network types return
% their forward-propagation values (predictions and unit activations)
% in different orders (for good reasons).
%
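% Example
% A minimal usage sketch (illustrative, not part of the original
% help): FEVBAYES is normally reached through MLPEVFWD rather than
% called directly. The data matrices X, T and XTEST and all network
% sizes and options below are assumed for illustration:
%
%   net = mlp(2, 8, 1, 'linear', 0.01, 50.0); % prior alpha, noise beta
%   options = zeros(1, 18);
%   options(14) = 100;                        % number of training cycles
%   [net, options] = netopt(net, options, x, t, 'scg');
%   [y, extra] = mlpevfwd(net, x, t, xtest);  % extra = error bars
%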
% See also
% MLPEVFWD, RBFEVFWD, GLMEVFWD
%

% Copyright (c) Ian T Nabney (1996-2001)

w = netpak(net);
g = netderiv(w, net, x_test);
if nargin < 7
  % Need to compute the inverse Hessian from the training data
  hess = nethess(w, net, x, t);
  invhess = inv(hess);
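  % Hedged aside (not in the original source): if HESS is badly
  % conditioned, a pseudo-inverse is a common substitute here, e.g.
  %   invhess = pinv(hess);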
end

ntest = size(x_test, 1);
var = zeros(ntest, net.nout);  % pre-allocate one column per output
for idx = 1:net.nout
  for n = 1:ntest
    grad = squeeze(g(n, :, idx));
    var(n, idx) = grad*invhess*grad';
  end
end
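
% Note (added, not in the original source): the loop above computes the
% Laplace-approximation variance contributed by weight uncertainty,
%   var(n, k) = g_k(x_n)' * inv(H) * g_k(x_n),
% where g_k = dy_k/dw are the output derivatives from NETDERIV and H is
% the Hessian of the regularised error from NETHESS.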

switch net.outfn
  case 'linear'
    % extra is the predictive variance: intrinsic noise (1/beta)
    % plus the variance due to weight uncertainty
    extra = ones(size(var))./net.beta + var;
  case 'logistic'
    % extra is the moderated output
    kappa = 1./(sqrt(ones(size(var)) + (pi.*var)./8));
    extra = 1./(1 + exp(-kappa.*a));
  case 'softmax'
    % Use the extended MacKay formula; beware that this may not
    % be very accurate
    kappa = 1./(sqrt(ones(size(var)) + (pi.*var)./8));
    temp = exp(kappa.*a);
    extra = temp./(sum(temp, 2)*ones(1, net.nout));
  otherwise
    error(['Unknown activation function ', net.outfn]);
end
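
% A further sketch (illustrative, assuming MLPEVFWD's documented
% calling convention): the returned INVHESS can be passed back in to
% avoid recomputing the Hessian across test sets, as the help above
% describes:
%
%   [y1, extra1, invhess] = mlpevfwd(net, x, t, xtest1);
%   [y2, extra2] = mlpevfwd(net, x, t, xtest2, invhess);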