toolboxes/FullBNT-1.0.7/nethelp3.3/mdn.htm @ 0:e9a9cd732c1e (tip)

author:  wolffd
date:    Tue, 10 Feb 2015 15:05:51 +0000
summary: first hg version after svn

<html>
<head>
<title>
Netlab Reference Manual mdn
</title>
</head>
<body>
<H1> mdn
</H1>
<h2>
Purpose
</h2>
Creates a Mixture Density Network with specified architecture.

<p><h2>
Synopsis
</h2>
<PRE>
net = mdn(nin, nhidden, ncentres, dimtarget)
net = mdn(nin, nhidden, ncentres, dimtarget, mixtype, ...
          prior, beta)
</PRE>


<p><h2>
Description
</h2>
<CODE>net = mdn(nin, nhidden, ncentres, dimtarget)</CODE> takes the number of
inputs and hidden units for a 2-layer feed-forward network, together with
the number of centres and the target dimension for the mixture model whose
parameters are set from the outputs of the neural network.
The fifth argument <CODE>mixtype</CODE> is used to define the type of mixture
model. (Currently there is only one type supported: a mixture of Gaussians with
a single covariance parameter for each component.) For this model,
the mixture coefficients are computed from a group of softmax outputs,
the centres are equal to a group of linear outputs, and the variances are
obtained by applying the exponential function to a third group of outputs.

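<p>As a rough sketch (not part of the original manual page), the three output
groups described above imply that the underlying network needs
<CODE>ncentres*(dimtarget + 2)</CODE> outputs in total; the variable name
<CODE>nmixout</CODE> below is illustrative only:
<PRE>

ncentres  = 3;                 % number of mixture components
dimtarget = 1;                 % dimension of the target space
% one softmax output per mixing coefficient, dimtarget linear outputs
% per centre, and one exponentiated output per variance
nmixout = ncentres + ncentres*dimtarget + ncentres;   % = ncentres*(dimtarget + 2)
</PRE>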
<p>The network is initialised by a call to <CODE>mlp</CODE>, and the arguments
<CODE>prior</CODE> and <CODE>beta</CODE> have the same role as for that function.
Weight initialisation uses the Matlab function <CODE>randn</CODE>
and so the seed for the random weight initialisation can be
set using <CODE>randn('state', s)</CODE> where <CODE>s</CODE> is the seed value.
A specialised data structure (rather than <CODE>gmm</CODE>)
is used for the mixture model outputs to improve
the efficiency of error and gradient calculations in network training.
The fields are described in <CODE>mdnfwd</CODE> where they are set up.

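<p>For example, to make the random weight initialisation reproducible
(the seed value here is arbitrary):
<PRE>

randn('state', 42);            % fix the seed used by randn
net = mdn(2, 4, 3, 1);         % weights are now initialised deterministically
</PRE>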
<p>The fields in <CODE>net</CODE> are
<PRE>

  type     = 'mdn'
  nin      = number of input variables
  nout     = dimension of target space (not number of network outputs)
  nwts     = total number of weights and biases
  mdnmixes = data structure for mixture model output
  mlp      = data structure for MLP network
</PRE>


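<p>A minimal sketch of inspecting these fields after construction (the
expected values in the comments assume the call shown):
<PRE>

net = mdn(2, 4, 3, 1);
disp(net.type)                 % 'mdn'
disp(net.nin)                  % 2
disp(net.nout)                 % 1, the dimension of the target space
disp(net.nwts)                 % total number of weights and biases
</PRE>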
<p><h2>
Example
</h2>
<PRE>

net = mdn(2, 4, 3, 1, 'spherical');
</PRE>

This creates a Mixture Density Network with 2 inputs and 4 hidden units.
The mixture model has 3 components and the target space has dimension 1.

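<p>The optional arguments from the synopsis can be supplied in the same way;
the <CODE>prior</CODE> and <CODE>beta</CODE> values below are purely
illustrative:
<PRE>

net = mdn(2, 4, 3, 1, 'spherical', 0.01, 1.0);   % scalar weight prior and beta
</PRE>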
<p><h2>
See Also
</h2>
<CODE><a href="mdnfwd.htm">mdnfwd</a></CODE>, <CODE><a href="mdnerr.htm">mdnerr</a></CODE>, <CODE><a href="mdn2gmm.htm">mdn2gmm</a></CODE>, <CODE><a href="mdngrad.htm">mdngrad</a></CODE>, <CODE><a href="mdnpak.htm">mdnpak</a></CODE>, <CODE><a href="mdnunpak.htm">mdnunpak</a></CODE>, <CODE><a href="mlp.htm">mlp</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)
<p>David J Evans (1998)

</body>
</html>