camir-aes2014: toolboxes/FullBNT-1.0.7/netlab3.3/Contents.m @ 0:e9a9cd732c1e (tip)

changeset:  first hg version after svn
author:     wolffd
date:       Tue, 10 Feb 2015 15:05:51 +0000
% Netlab Toolbox
% Version 3.3.1 18-Jun-2004
%
% conffig       - Display a confusion matrix.
% confmat       - Compute a confusion matrix.
% conjgrad      - Conjugate gradients optimization.
% consist       - Check that arguments are consistent.
% convertoldnet - Convert pre-2.3 release MLP and MDN nets to new format.
% datread       - Read data from an ascii file.
% datwrite      - Write data to ascii file.
% dem2ddat      - Generates two dimensional data for demos.
% demard        - Automatic relevance determination using the MLP.
% demev1        - Demonstrate Bayesian regression for the MLP.
% demev2        - Demonstrate Bayesian classification for the MLP.
% demev3        - Demonstrate Bayesian regression for the RBF.
% demgauss      - Demonstrate sampling from Gaussian distributions.
% demglm1       - Demonstrate simple classification using a generalized linear model.
% demglm2       - Demonstrate simple classification using a generalized linear model.
% demgmm1       - Demonstrate density modelling with a Gaussian mixture model.
% demgmm3       - Demonstrate density modelling with a Gaussian mixture model.
% demgmm4       - Demonstrate density modelling with a Gaussian mixture model.
% demgmm5       - Demonstrate density modelling with a PPCA mixture model.
% demgp         - Demonstrate simple regression using a Gaussian Process.
% demgpard      - Demonstrate ARD using a Gaussian Process.
% demgpot       - Computes the gradient of the negative log likelihood for a mixture model.
% demgtm1       - Demonstrate EM for GTM.
% demgtm2       - Demonstrate GTM for visualisation.
% demhint       - Demonstration of Hinton diagram for 2-layer feed-forward network.
% demhmc1       - Demonstrate Hybrid Monte Carlo sampling on mixture of two Gaussians.
% demhmc2       - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demhmc3       - Demonstrate Bayesian regression with Hybrid Monte Carlo sampling.
% demkmean      - Demonstrate simple clustering model trained with K-means.
% demknn1       - Demonstrate nearest neighbour classifier.
% demmdn1       - Demonstrate fitting a multi-valued function using a Mixture Density Network.
% demmet1       - Demonstrate Markov Chain Monte Carlo sampling on a Gaussian.
% demmlp1       - Demonstrate simple regression using a multi-layer perceptron.
% demmlp2       - Demonstrate simple classification using a multi-layer perceptron.
% demnlab       - A front-end Graphical User Interface to the demos.
% demns1        - Demonstrate Neuroscale for visualisation.
% demolgd1      - Demonstrate simple MLP optimisation with on-line gradient descent.
% demopt1       - Demonstrate different optimisers on Rosenbrock's function.
% dempot        - Computes the negative log likelihood for a mixture model.
% demprgp       - Demonstrate sampling from a Gaussian Process prior.
% demprior      - Demonstrate sampling from a multi-parameter Gaussian prior.
% demrbf1       - Demonstrate simple regression using a radial basis function network.
% demsom1       - Demonstrate SOM for visualisation.
% demtrain      - Demonstrate training of MLP network.
% dist2         - Calculates squared distance between two sets of points.
% eigdec        - Sorted eigendecomposition.
% errbayes      - Evaluate Bayesian error function for network.
% evidence      - Re-estimate hyperparameters using evidence approximation.
% fevbayes      - Evaluate Bayesian regularisation for network forward propagation.
% gauss         - Evaluate a Gaussian distribution.
% gbayes        - Evaluate gradient of Bayesian error function for network.
% glm           - Create a generalized linear model.
% glmderiv      - Evaluate derivatives of GLM outputs with respect to weights.
% glmerr        - Evaluate error function for generalized linear model.
% glmevfwd      - Forward propagation with evidence for GLM.
% glmfwd        - Forward propagation through generalized linear model.
% glmgrad       - Evaluate gradient of error function for generalized linear model.
% glmhess       - Evaluate the Hessian matrix for a generalised linear model.
% glminit       - Initialise the weights in a generalized linear model.
% glmpak        - Combines weights and biases into one weights vector.
% glmtrain      - Specialised training of generalized linear model.
% glmunpak      - Separates weights vector into weight and bias matrices.
% gmm           - Creates a Gaussian mixture model with specified architecture.
% gmmactiv      - Computes the activations of a Gaussian mixture model.
% gmmem         - EM algorithm for Gaussian mixture model.
% gmminit       - Initialises Gaussian mixture model from data.
% gmmpak        - Combines all the parameters in a Gaussian mixture model into one vector.
% gmmpost       - Computes the class posterior probabilities of a Gaussian mixture model.
% gmmprob       - Computes the data probability for a Gaussian mixture model.
% gmmsamp       - Sample from a Gaussian mixture distribution.
% gmmunpak      - Separates a vector of Gaussian mixture model parameters into its components.
% gp            - Create a Gaussian Process.
% gpcovar       - Calculate the covariance for a Gaussian Process.
% gpcovarf      - Calculate the covariance function for a Gaussian Process.
% gpcovarp      - Calculate the prior covariance for a Gaussian Process.
% gperr         - Evaluate error function for Gaussian Process.
% gpfwd         - Forward propagation through Gaussian Process.
% gpgrad        - Evaluate error gradient for Gaussian Process.
% gpinit        - Initialise Gaussian Process model.
% gppak         - Combines GP hyperparameters into one vector.
% gpunpak       - Separates hyperparameter vector into components.
% gradchek      - Checks a user-defined gradient function using finite differences.
% graddesc      - Gradient descent optimization.
% gsamp         - Sample from a Gaussian distribution.
% gtm           - Create a Generative Topographic Map.
% gtmem         - EM algorithm for Generative Topographic Mapping.
% gtmfwd        - Forward propagation through GTM.
% gtminit       - Initialise the weights and latent sample in a GTM.
% gtmlmean      - Mean responsibility for data in a GTM.
% gtmlmode      - Mode responsibility for data in a GTM.
% gtmmag        - Magnification factors for a GTM.
% gtmpost       - Latent space responsibility for data in a GTM.
% gtmprob       - Probability for data under a GTM.
% hbayes        - Evaluate Hessian of Bayesian error function for network.
% hesschek      - Use central differences to confirm correct evaluation of Hessian matrix.
% hintmat       - Evaluates the coordinates of the patches for a Hinton diagram.
% hinton        - Plot Hinton diagram for a weight matrix.
% histp         - Histogram estimate of 1-dimensional probability distribution.
% hmc           - Hybrid Monte Carlo sampling.
% kmeans        - Trains a k-means cluster model.
% knn           - Creates a K-nearest-neighbour classifier.
% knnfwd        - Forward propagation through a K-nearest-neighbour classifier.
% linef         - Calculate function value along a line.
% linemin       - One dimensional minimization.
% maxitmess     - Create a standard error message when training reaches max. iterations.
% mdn           - Creates a Mixture Density Network with specified architecture.
% mdn2gmm       - Converts an MDN mixture data structure to array of GMMs.
% mdndist2      - Calculates squared distance between centres of Gaussian kernels and data.
% mdnerr        - Evaluate error function for Mixture Density Network.
% mdnfwd        - Forward propagation through Mixture Density Network.
% mdngrad       - Evaluate gradient of error function for Mixture Density Network.
% mdninit       - Initialise the weights in a Mixture Density Network.
% mdnpak        - Combines weights and biases into one weights vector.
% mdnpost       - Computes the posterior probability for each MDN mixture component.
% mdnprob       - Computes the data probability likelihood for an MDN mixture structure.
% mdnunpak      - Separates weights vector into weight and bias matrices.
% metrop        - Markov Chain Monte Carlo sampling with Metropolis algorithm.
% minbrack      - Bracket a minimum of a function of one variable.
% mlp           - Create a 2-layer feedforward network.
% mlpbkp        - Backpropagate gradient of error function for 2-layer network.
% mlpderiv      - Evaluate derivatives of network outputs with respect to weights.
% mlperr        - Evaluate error function for 2-layer network.
% mlpevfwd      - Forward propagation with evidence for MLP.
% mlpfwd        - Forward propagation through 2-layer network.
% mlpgrad       - Evaluate gradient of error function for 2-layer network.
% mlphdotv      - Evaluate the product of the data Hessian with a vector.
% mlphess       - Evaluate the Hessian matrix for a multi-layer perceptron network.
% mlphint       - Plot Hinton diagram for 2-layer feed-forward network.
% mlpinit       - Initialise the weights in a 2-layer feedforward network.
% mlppak        - Combines weights and biases into one weights vector.
% mlpprior      - Create Gaussian prior for MLP.
% mlptrain      - Utility to train an MLP network for demtrain.
% mlpunpak      - Separates weights vector into weight and bias matrices.
% netderiv      - Evaluate derivatives of network outputs by weights generically.
% neterr        - Evaluate network error function for generic optimizers.
% netevfwd      - Generic forward propagation with evidence for network.
% netgrad       - Evaluate network error gradient for generic optimizers.
% nethess       - Evaluate network Hessian.
% netinit       - Initialise the weights in a network.
% netopt        - Optimize the weights in a network model.
% netpak        - Combines weights and biases into one weights vector.
% netunpak      - Separates weights vector into weight and bias matrices.
% olgd          - On-line gradient descent optimization.
% pca           - Principal Components Analysis.
% plotmat       - Display a matrix.
% ppca          - Probabilistic Principal Components Analysis.
% quasinew      - Quasi-Newton optimization.
% rbf           - Creates an RBF network with specified architecture.
% rbfbkp        - Backpropagate gradient of error function for RBF network.
% rbfderiv      - Evaluate derivatives of RBF network outputs with respect to weights.
% rbferr        - Evaluate error function for RBF network.
% rbfevfwd      - Forward propagation with evidence for RBF.
% rbffwd        - Forward propagation through RBF network with linear outputs.
% rbfgrad       - Evaluate gradient of error function for RBF network.
% rbfhess       - Evaluate the Hessian matrix for RBF network.
% rbfjacob      - Evaluate derivatives of RBF network outputs with respect to inputs.
% rbfpak        - Combines all the parameters in an RBF network into one weights vector.
% rbfprior      - Create Gaussian prior and output layer mask for RBF.
% rbfsetbf      - Set basis functions of RBF from data.
% rbfsetfw      - Set basis function widths of RBF.
% rbftrain      - Two-stage training of RBF network.
% rbfunpak      - Separates a vector of RBF weights into its components.
% rosegrad      - Calculate gradient of Rosenbrock's function.
% rosen         - Calculate Rosenbrock's function.
% scg           - Scaled conjugate gradient optimization.
% som           - Creates a Self-Organising Map.
% somfwd        - Forward propagation through a Self-Organising Map.
% sompak        - Combines node weights into one weights matrix.
% somtrain      - Kohonen training algorithm for SOM.
% somunpak      - Replaces node weights in SOM.
%
% Copyright (c) Ian T Nabney (1996-2001)
%
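
The listing above is only an index; the dem* scripts show complete workflows. As a quick orientation, here is a minimal sketch of the density-modelling path through gmm, gmminit, gmmem and gmmprob. The data, the number of centres and the option values are illustrative assumptions, not part of the file:

    data = randn(200, 2);               % assumed toy data set (placeholder)
    mix = gmm(2, 3, 'spherical');       % 2-D inputs, 3 mixture centres
    options = zeros(1, 18);             % Netlab-style options vector
    options(14) = 10;                   % k-means iterations inside gmminit
    mix = gmminit(mix, data, options);  % initialise centres from the data
    options(1) = 1;                     % print log likelihood each cycle
    options(14) = 50;                   % maximum number of EM iterations
    mix = gmmem(mix, data, options);    % run EM
    p = gmmprob(mix, data);             % density of each point under the model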
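The supervised models follow the same create/train/forward pattern: a constructor (here mlp), the generic optimiser netopt with an algorithm name such as 'scg', and a forward function (mlpfwd). A minimal regression sketch, again with assumed toy data and option settings:

    x = linspace(0, 1, 50)';                  % assumed 1-D inputs
    t = sin(2*pi*x) + 0.1*randn(50, 1);       % assumed noisy targets
    net = mlp(1, 5, 1, 'linear');             % 1 input, 5 hidden units, linear output
    options = zeros(1, 18);
    options(1) = 1;                           % display error values during training
    options(14) = 100;                        % maximum number of training cycles
    net = netopt(net, options, x, t, 'scg');  % scaled conjugate gradient training
    y = mlpfwd(net, x);                       % predictions of the trained network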
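rbftrain is described as two-stage training because it first sets the basis functions from the data (via rbfsetbf and rbfsetfw) and then solves for the output-layer weights. A sketch under the same toy-data assumptions as the MLP example:

    net = rbf(1, 7, 1, 'gaussian');      % 7 Gaussian basis functions
    options = zeros(1, 18);
    options(14) = 20;                    % iterations for setting the basis functions
    net = rbftrain(net, options, x, t);  % stage 1: basis functions; stage 2: output weights
    y = rbffwd(net, x);                  % forward propagation with linear outputs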
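The *pak/*unpak pairs exist so that generic routines can treat any model as a flat weight vector, and gradchek uses the same convention to compare an analytic gradient against finite differences. A sketch using the generic neterr/netgrad interface, reusing net, x and t from the MLP example above:

    w = netpak(net);                                      % flatten weights into one vector
    [g, delta] = gradchek(w, 'neterr', 'netgrad', net, x, t);  % analytic vs. numerical gradient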
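Not everything in the toolbox is gradient-based: knn and knnfwd provide a memory-based classifier. Targets use a 1-of-N encoding, as in demknn1; the two-class data below is an illustrative assumption:

    xtrain = [randn(20, 2) - 1; randn(20, 2) + 1];         % assumed two-class training data
    ttrain = [repmat([1 0], 20, 1); repmat([0 1], 20, 1)]; % 1-of-N encoded targets
    net = knn(2, 2, 3, xtrain, ttrain);                    % 3 nearest neighbours
    [votes, class] = knnfwd(net, randn(5, 2));             % class holds the winning label index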