toolboxes/FullBNT-1.0.7/netlab3.3/mdn.m @ 0:e9a9cd732c1e (tip)
first hg version after svn
author: wolffd
date:   Tue, 10 Feb 2015 15:05:51 +0000
function net = mdn(nin, nhidden, ncentres, dim_target, mix_type, ...
    prior, beta)
%MDN Creates a Mixture Density Network with specified architecture.
%
% Description
% NET = MDN(NIN, NHIDDEN, NCENTRES, DIMTARGET) takes the number of
% inputs, hidden units for a 2-layer feed-forward network and the
% number of centres and target dimension for the mixture model whose
% parameters are set from the outputs of the neural network. The fifth
% argument MIXTYPE is used to define the type of mixture model.
% (Currently there is only one type supported: a mixture of Gaussians
% with a single covariance parameter for each component.) For this
% model, the mixture coefficients are computed from a group of softmax
% outputs, the centres are equal to a group of linear outputs, and the
% variances are obtained by applying the exponential function to a
% third group of outputs.
%
% The network is initialised by a call to MLP, and the arguments PRIOR
% and BETA have the same role as for that function. Weight
% initialisation uses the Matlab function RANDN and so the seed for
% the random weight initialization can be set using RANDN('STATE', S)
% where S is the seed value. A specialised data structure (rather than
% GMM) is used for the mixture model outputs to improve the efficiency
% of error and gradient calculations in network training. The fields
% are described in MDNFWD where they are set up.
%
% The fields in NET are
%
%   type = 'mdn'
%   nin = number of input variables
%   nout = dimension of target space (not number of network outputs)
%   nwts = total number of weights and biases
%   mdnmixes = data structure for mixture model output
%   mlp = data structure for MLP network
%
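% Example (illustrative; see MDNFWD for the exact output layout).
% With NCENTRES = 3 and DIMTARGET = 2 the underlying MLP has
% 3 + 3*2 + 3 = 12 linear outputs Z per pattern, which MDNFWD
% converts to mixture parameters roughly as
%
%   mixcoeffs = softmax applied to Z(1:3)
%   centres   = Z(4:9), one group of 2 coordinates per centre
%   covars    = exp(Z(10:12))
%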
% See also
% MDNFWD, MDNERR, MDN2GMM, MDNGRAD, MDNPAK, MDNUNPAK, MLP
%

% Copyright (c) Ian T Nabney (1996-2001)
% David J Evans (1998)

% Currently ignore type argument: reserved for future use
net.type = 'mdn';

% Set up the mixture model part of the structure
% For efficiency we use a specialised data structure in place of GMM
mdnmixes.type = 'mdnmixes';
mdnmixes.ncentres = ncentres;
mdnmixes.dim_target = dim_target;

% This calculation depends on spherical variances
mdnmixes.nparams = ncentres + ncentres*dim_target + ncentres;
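% Equivalently nparams = ncentres*(dim_target + 2): one mixing
% coefficient and one spherical variance per centre, plus dim_target
% centre coordinates per centre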

% Make the weights in the mdnmixes structure null
mdnmixes.mixcoeffs = [];
mdnmixes.centres = [];
mdnmixes.covars = [];

% Number of output nodes = number of parameters in mixture model
nout = mdnmixes.nparams;

% Set up the MLP part of the network
% mix_type is currently ignored, so the documented four-argument call
% NET = MDN(NIN, NHIDDEN, NCENTRES, DIMTARGET) is accepted as well
if (nargin <= 5)
  mlpnet = mlp(nin, nhidden, nout, 'linear');
elseif (nargin == 6)
  mlpnet = mlp(nin, nhidden, nout, 'linear', prior);
elseif (nargin == 7)
  mlpnet = mlp(nin, nhidden, nout, 'linear', prior, beta);
end

% Create descriptor
net.mdnmixes = mdnmixes;
net.mlp = mlpnet;
net.nin = nin;
net.nout = dim_target;
net.nwts = mlpnet.nwts;
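
A minimal usage sketch, assuming the rest of Netlab (in particular MLP
and MDNFWD) is on the path; the sizes here are illustrative, and the
MIXTYPE string is ignored by the current code:

    nin = 1; nhidden = 5; ncentres = 3; dim_target = 1;
    net = mdn(nin, nhidden, ncentres, dim_target, 'spherical');
    net.nout        % 1: dimension of the target space
    net.mlp.nout    % 9: ncentres*(dim_target + 2) mixture parameters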