toolboxes/FullBNT-1.0.7/netlab3.3/mdninit.m @ 0:e9a9cd732c1e (tip)

author:   wolffd
date:     Tue, 10 Feb 2015 15:05:51 +0000
summary:  first hg version after svn
parents:
children:
function net = mdninit(net, prior, t, options)
%MDNINIT Initialise the weights in a Mixture Density Network.
%
%  Description
%
%  NET = MDNINIT(NET, PRIOR) takes a Mixture Density Network NET and
%  sets the weights and biases by sampling from a Gaussian distribution.
%  It calls MLPINIT for the MLP component of NET.
%
%  NET = MDNINIT(NET, PRIOR, T, OPTIONS) uses the target data T to
%  initialise the biases for the output units after initialising the
%  other weights as above.  It calls GMMINIT, with T and OPTIONS as
%  arguments, to obtain a model of the unconditional density of T.  The
%  biases are then set so that NET will output the values in the
%  Gaussian mixture model.
%
%  See also
%  MDN, MLP, MLPINIT, GMMINIT
%

%  Copyright (c) Ian T Nabney (1996-2001)
%  David J Evans (1998)

% Initialise network weights from prior: this gives noise around values
% determined later
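% (mlpinit samples all of the MLP weights and biases from a zero-mean
% isotropic Gaussian whose inverse variance is given by the scalar prior)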
net.mlp = mlpinit(net.mlp, prior);

if nargin > 2
  % Initialise priors, centres and variances from target data
  temp_mix = gmm(net.mdnmixes.dim_target, net.mdnmixes.ncentres, 'spherical');
  temp_mix = gmminit(temp_mix, t, options);

  ncentres = net.mdnmixes.ncentres;
  dim_target = net.mdnmixes.dim_target;

  % Now set parameters in MLP to yield the right values.
  % This involves setting the biases correctly.

  % Priors
  net.mlp.b2(1:ncentres) = temp_mix.priors;

  % Centres are arranged in mlp such that we have
  % u11, u12, u13, ..., u1c, ..., uj1, uj2, uj3, ..., ujc, ..., uM1, uM2,
  % ..., uMc
  % This is achieved by transposing temp_mix.centres before reshaping
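  % For example (illustrative values only): with ncentres = 2 and
  % dim_target = 3, temp_mix.centres = [a1 a2 a3; b1 b2 b3] is transposed
  % and reshaped to [a1 a2 a3 b1 b2 b3], so the biases hold all components
  % of centre 1 followed by all components of centre 2.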
  end_centres = ncentres*(dim_target+1);
  net.mlp.b2(ncentres+1:end_centres) = ...
    reshape(temp_mix.centres', 1, ncentres*dim_target);

  % Variances
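  % (the MDN forward pass, mdnfwd, exponentiates the variance outputs, so
  % storing log covariances here means the network initially reproduces
  % temp_mix.covars)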
  net.mlp.b2((end_centres+1):net.mlp.nout) = ...
    log(temp_mix.covars);
end
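
For reference, a minimal usage sketch (not part of the file above) showing both call forms described in the help text. The layer sizes, prior value, and target data are placeholder assumptions; options is a standard Netlab options vector passed through to gmminit.

% Hypothetical usage sketch: values below are illustrative assumptions.
nin = 2; nhidden = 5; ncentres = 3; dim_target = 1;
net = mdn(nin, nhidden, ncentres, dim_target);   % build the Mixture Density Network

% Form 1: sample all weights and biases from a Gaussian with inverse variance prior
prior = 0.01;
net = mdninit(net, prior);

% Form 2: additionally set the output biases from a GMM fitted to target data t
t = randn(100, dim_target);      % placeholder targets
options = zeros(1, 18);          % Netlab options vector (no display)
options(14) = 10;                % iterations for gmminit's k-means initialisation
net = mdninit(net, prior, t, options);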