% Netlab Toolbox
% Version 3.3.1 18-Jun-2004
%
% conffig - Display a confusion matrix.
% confmat - Compute a confusion matrix.
% conjgrad - Conjugate gradients optimization.
% consist - Check that arguments are consistent.
% convertoldnet - Convert pre-2.3 release MLP and MDN nets to new format.
% datread - Read data from an ASCII file.
% datwrite - Write data to an ASCII file.
% dem2ddat - Generates two-dimensional data for demos.
% demard - Automatic relevance determination using the MLP.
% demev1 - Demonstrate Bayesian regression for the MLP.
% demev2 - Demonstrate Bayesian classification for the MLP.
% demev3 - Demonstrate Bayesian regression for the RBF.
% demgauss - Demonstrate sampling from Gaussian distributions.
% demglm1 - Demonstrate simple classification using a generalized linear model.
% demglm2 - Demonstrate simple classification using a generalized linear model.
% demgmm1 - Demonstrate density modelling with a Gaussian mixture model.
% demgmm3 - Demonstrate density modelling with a Gaussian mixture model.
% demgmm4 - Demonstrate density modelling with a Gaussian mixture model.
% demgmm5 - Demonstrate density modelling with a PPCA mixture model.
% demgp - Demonstrate simple regression using a Gaussian Process.
% demgpard - Demonstrate ARD using a Gaussian Process.
% demgpot - Computes the gradient of the negative log likelihood for a mixture model.
% demgtm1 - Demonstrate EM for GTM.
% demgtm2 - Demonstrate GTM for visualisation.
% demhint - Demonstration of Hinton diagram for 2-layer feed-forward network.
% demhmc1 - Demonstrate Hybrid Monte Carlo sampling on mixture of two Gaussians.
% demhmc2 - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demhmc3 - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demkmean - Demonstrate simple clustering model trained with K-means.
% demknn1 - Demonstrate nearest neighbour classifier.
% demmdn1 - Demonstrate fitting a multi-valued function using a Mixture Density Network.
% demmet1 - Demonstrate Markov Chain Monte Carlo sampling on a Gaussian.
% demmlp1 - Demonstrate simple regression using a multi-layer perceptron.
% demmlp2 - Demonstrate simple classification using a multi-layer perceptron.
% demnlab - A front-end Graphical User Interface to the demos.
% demns1 - Demonstrate Neuroscale for visualisation.
% demolgd1 - Demonstrate simple MLP optimisation with on-line gradient descent.
% demopt1 - Demonstrate different optimisers on Rosenbrock's function.
% dempot - Computes the negative log likelihood for a mixture model.
% demprgp - Demonstrate sampling from a Gaussian Process prior.
% demprior - Demonstrate sampling from a multi-parameter Gaussian prior.
% demrbf1 - Demonstrate simple regression using a radial basis function network.
% demsom1 - Demonstrate SOM for visualisation.
% demtrain - Demonstrate training of MLP network.
% dist2 - Calculates squared distance between two sets of points.
% eigdec - Sorted eigendecomposition.
% errbayes - Evaluate Bayesian error function for network.
% evidence - Re-estimate hyperparameters using evidence approximation.
% fevbayes - Evaluate Bayesian regularisation for network forward propagation.
% gauss - Evaluate a Gaussian distribution.
% gbayes - Evaluate gradient of Bayesian error function for network.
% glm - Create a generalized linear model.
% glmderiv - Evaluate derivatives of GLM outputs with respect to weights.
% glmerr - Evaluate error function for generalized linear model.
% glmevfwd - Forward propagation with evidence for GLM.
% glmfwd - Forward propagation through generalized linear model.
% glmgrad - Evaluate gradient of error function for generalized linear model.
% glmhess - Evaluate the Hessian matrix for a generalised linear model.
% glminit - Initialise the weights in a generalized linear model.
% glmpak - Combines weights and biases into one weights vector.
% glmtrain - Specialised training of generalized linear model.
% glmunpak - Separates weights vector into weight and bias matrices.
% gmm - Creates a Gaussian mixture model with specified architecture.
% gmmactiv - Computes the activations of a Gaussian mixture model.
% gmmem - EM algorithm for Gaussian mixture model.
% gmminit - Initialises Gaussian mixture model from data.
% gmmpak - Combines all the parameters in a Gaussian mixture model into one vector.
% gmmpost - Computes the class posterior probabilities of a Gaussian mixture model.
% gmmprob - Computes the data probability for a Gaussian mixture model.
% gmmsamp - Sample from a Gaussian mixture distribution.
% gmmunpak - Separates a vector of Gaussian mixture model parameters into its components.
% gp - Create a Gaussian Process.
% gpcovar - Calculate the covariance for a Gaussian Process.
% gpcovarf - Calculate the covariance function for a Gaussian Process.
% gpcovarp - Calculate the prior covariance for a Gaussian Process.
% gperr - Evaluate error function for Gaussian Process.
% gpfwd - Forward propagation through Gaussian Process.
% gpgrad - Evaluate error gradient for Gaussian Process.
% gpinit - Initialise Gaussian Process model.
% gppak - Combines GP hyperparameters into one vector.
% gpunpak - Separates hyperparameter vector into components.
% gradchek - Checks a user-defined gradient function using finite differences.
% graddesc - Gradient descent optimization.
% gsamp - Sample from a Gaussian distribution.
% gtm - Create a Generative Topographic Map.
% gtmem - EM algorithm for Generative Topographic Mapping.
% gtmfwd - Forward propagation through GTM.
% gtminit - Initialise the weights and latent sample in a GTM.
% gtmlmean - Mean responsibility for data in a GTM.
% gtmlmode - Mode responsibility for data in a GTM.
% gtmmag - Magnification factors for a GTM.
% gtmpost - Latent space responsibility for data in a GTM.
% gtmprob - Probability for data under a GTM.
% hbayes - Evaluate Hessian of Bayesian error function for network.
% hesschek - Use central differences to confirm correct evaluation of Hessian matrix.
% hintmat - Evaluates the coordinates of the patches for a Hinton diagram.
% hinton - Plot Hinton diagram for a weight matrix.
% histp - Histogram estimate of 1-dimensional probability distribution.
% hmc - Hybrid Monte Carlo sampling.
% kmeans - Trains a k-means cluster model.
% knn - Creates a K-nearest-neighbour classifier.
% knnfwd - Forward propagation through a K-nearest-neighbour classifier.
% linef - Calculate function value along a line.
% linemin - One-dimensional minimization.
% maxitmess - Create a standard error message when training reaches max. iterations.
% mdn - Creates a Mixture Density Network with specified architecture.
% mdn2gmm - Converts an MDN mixture data structure to an array of GMMs.
% mdndist2 - Calculates squared distance between centres of Gaussian kernels and data.
% mdnerr - Evaluate error function for Mixture Density Network.
% mdnfwd - Forward propagation through Mixture Density Network.
% mdngrad - Evaluate gradient of error function for Mixture Density Network.
% mdninit - Initialise the weights in a Mixture Density Network.
% mdnpak - Combines weights and biases into one weights vector.
% mdnpost - Computes the posterior probability for each MDN mixture component.
% mdnprob - Computes the data probability for an MDN mixture structure.
% mdnunpak - Separates weights vector into weight and bias matrices.
% metrop - Markov Chain Monte Carlo sampling with the Metropolis algorithm.
% minbrack - Bracket a minimum of a function of one variable.
% mlp - Create a 2-layer feedforward network.
% mlpbkp - Backpropagate gradient of error function for 2-layer network.
% mlpderiv - Evaluate derivatives of network outputs with respect to weights.
% mlperr - Evaluate error function for 2-layer network.
% mlpevfwd - Forward propagation with evidence for MLP.
% mlpfwd - Forward propagation through 2-layer network.
% mlpgrad - Evaluate gradient of error function for 2-layer network.
% mlphdotv - Evaluate the product of the data Hessian with a vector.
% mlphess - Evaluate the Hessian matrix for a multi-layer perceptron network.
% mlphint - Plot Hinton diagram for 2-layer feed-forward network.
% mlpinit - Initialise the weights in a 2-layer feedforward network.
% mlppak - Combines weights and biases into one weights vector.
% mlpprior - Create Gaussian prior for mlp.
% mlptrain - Utility to train an MLP network for demtrain.
% mlpunpak - Separates weights vector into weight and bias matrices.
% netderiv - Evaluate derivatives of network outputs by weights generically.
% neterr - Evaluate network error function for generic optimizers.
% netevfwd - Generic forward propagation with evidence for network.
% netgrad - Evaluate network error gradient for generic optimizers.
% nethess - Evaluate network Hessian.
% netinit - Initialise the weights in a network.
% netopt - Optimize the weights in a network model.
% netpak - Combines weights and biases into one weights vector.
% netunpak - Separates weights vector into weight and bias matrices.
% olgd - On-line gradient descent optimization.
% pca - Principal Components Analysis.
% plotmat - Display a matrix.
% ppca - Probabilistic Principal Components Analysis.
% quasinew - Quasi-Newton optimization.
% rbf - Creates an RBF network with specified architecture.
% rbfbkp - Backpropagate gradient of error function for RBF network.
% rbfderiv - Evaluate derivatives of RBF network outputs with respect to weights.
% rbferr - Evaluate error function for RBF network.
% rbfevfwd - Forward propagation with evidence for RBF.
% rbffwd - Forward propagation through RBF network with linear outputs.
% rbfgrad - Evaluate gradient of error function for RBF network.
% rbfhess - Evaluate the Hessian matrix for RBF network.
% rbfjacob - Evaluate derivatives of RBF network outputs with respect to inputs.
% rbfpak - Combines all the parameters in an RBF network into one weights vector.
% rbfprior - Create Gaussian prior and output layer mask for RBF.
% rbfsetbf - Set basis functions of RBF from data.
% rbfsetfw - Set basis function widths of RBF.
% rbftrain - Two-stage training of RBF network.
% rbfunpak - Separates a vector of RBF weights into its components.
% rosegrad - Calculate gradient of Rosenbrock's function.
% rosen - Calculate Rosenbrock's function.
% scg - Scaled conjugate gradient optimization.
% som - Creates a Self-Organising Map.
% somfwd - Forward propagation through a Self-Organising Map.
% sompak - Combines node weights into one weights matrix.
% somtrain - Kohonen training algorithm for SOM.
% somunpak - Replaces node weights in SOM.
%
% Copyright (c) Ian T Nabney (1996-2001)
%
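Many of the functions above follow a common create/train/forward pattern (e.g. `mlp`/`netopt`/`mlpfwd`, or `gmm`/`gmmem`/`gmmprob`). The following is a minimal sketch of that pattern, not part of Contents.m itself; the data here is invented for illustration, and it assumes the Netlab toolbox is on the MATLAB path (see `demmlp1` for the full worked demo).

```matlab
% Toy 1-D regression data (hypothetical, for illustration only).
x = linspace(0, 1, 20)';              % inputs, one column per input dimension
t = sin(2*pi*x) + 0.1*randn(20, 1);   % noisy targets

% Create a 2-layer MLP: 1 input, 5 hidden units, 1 linear output.
net = mlp(1, 5, 1, 'linear');

% Options vector in the Netlab convention (see the demos):
options = zeros(1, 18);
options(1) = 1;                       % display error values during training
options(14) = 100;                    % maximum number of training iterations

% Train with a generic optimiser -- here scaled conjugate gradients (scg).
[net, options] = netopt(net, options, x, t, 'scg');

% Forward-propagate to get predictions.
y = mlpfwd(net, x);
```

The same `options`/`netopt` interface applies to the other network types in the list (`glm`, `rbf`, `gp`), with the matching `*fwd` function used for prediction.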