Netlab Reference Manual fevbayes

fevbayes


Purpose

Evaluate Bayesian regularisation for network forward propagation.

Synopsis

extra = fevbayes(net, y, a, x, t, x_test)
[extra, invhess] = fevbayes(net, y, a, x, t, x_test, invhess)

Description

extra = fevbayes(net, y, a, x, t, x_test) takes a network data structure net together with the network outputs y and hidden unit activations a computed from the test inputs x_test, and the training inputs x and targets t. It returns a matrix of extra information extra that consists of error bars (variances) for a regression problem, or moderated outputs for a classification problem. The optional argument (and return value) invhess is the inverse of the network Hessian computed on the training inputs and targets. Passing it in avoids recomputing it, which can be a significant saving for large training sets.

This function is called by network-specific functions such as mlpevfwd, which are needed because the return values (predictions and hidden unit activations) come in different orders for different network types (for good reasons).
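As an illustration of the invhess caching described above, the inverse Hessian can be computed once and then passed back in on later calls. This is a minimal sketch only: the mlp and mlpfwd calls follow standard Netlab usage, the network dimensions are arbitrary, and the training data x, t and test inputs x_test are assumed to already be in scope.

```matlab
% Hypothetical setup: a small regression network with a Gaussian prior.
% (Arguments to mlp: nin, nhidden, nout, output function, alpha, beta.)
net = mlp(2, 3, 1, 'linear', 0.1, 1);
% ... train net on inputs x and targets t, e.g. with netopt ...

% Forward propagate the test inputs; mlpfwd returns the outputs y
% and the hidden unit activations z.
[y, z] = mlpfwd(net, x_test);

% First call: the inverse Hessian is computed on x and t and returned.
[extra, invhess] = fevbayes(net, y, z, x, t, x_test);

% Later calls pass invhess back in, avoiding the expensive recomputation.
extra2 = fevbayes(net, y, z, x, t, x_test, invhess);
```

In practice one would normally call the network-specific wrapper (e.g. mlpevfwd) rather than fevbayes directly; the wrapper handles the ordering of the forward-propagation return values.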

See Also

mlpevfwd, rbfevfwd, glmevfwd

Copyright (c) Ian T Nabney (1996-9)