annotate toolboxes/FullBNT-1.0.7/nethelp3.3/mdn.htm @ 0:cc4b1211e677 tip

initial commit to HG from Changeset: 646 (e263d8a21543) added further path and more save "camirversion.m"
author Daniel Wolff
date Fri, 19 Aug 2016 13:07:06 +0200
<html>
<head>
<title>
Netlab Reference Manual mdn
</title>
</head>
<body>
<H1> mdn
</H1>
<h2>
Purpose
</h2>
Creates a Mixture Density Network with the specified architecture.

<p><h2>
Synopsis
</h2>
<PRE>
net = mdn(nin, nhidden, ncentres, dimtarget)
net = mdn(nin, nhidden, ncentres, dimtarget, mixtype, ...
prior, beta)
</PRE>


<p><h2>
Description
</h2>
<CODE>net = mdn(nin, nhidden, ncentres, dimtarget)</CODE> takes the number of
inputs and hidden units for a 2-layer feed-forward network, together with
the number of centres and the target dimension for the mixture model whose
parameters are set from the outputs of the neural network.
The fifth argument <CODE>mixtype</CODE> is used to define the type of mixture
model. (Currently only one type is supported: a mixture of Gaussians with
a single covariance parameter for each component.) For this model,
the mixing coefficients are computed from a group of softmax outputs,
the centres are equal to a group of linear outputs, and the variances are
obtained by applying the exponential function to a third group of outputs.

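<p>As an illustration of this mapping, the following sketch derives the three
groups of mixture parameters from a single row of raw network outputs. The
variable names and the assumed ordering of the output groups are for
illustration only; see <CODE>mdnfwd</CODE> for the actual layout.
<PRE>

ncentres = 3; dimtarget = 1;
y = randn(1, ncentres*(2 + dimtarget));   % one row of raw MLP outputs
zmix = y(1:ncentres);
mixcoeffs = exp(zmix)./sum(exp(zmix));    % softmax group: mixing coefficients
centres = y(ncentres+1:ncentres+ncentres*dimtarget);    % linear group: centres
variances = exp(y(ncentres+ncentres*dimtarget+1:end));  % exponential group: variances
</PRE>
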
<p>The network is initialised by a call to <CODE>mlp</CODE>, and the arguments
<CODE>prior</CODE> and <CODE>beta</CODE> have the same role as for that function.
Weight initialisation uses the Matlab function <CODE>randn</CODE>,
so the seed for the random weight initialisation can be
set using <CODE>randn('state', s)</CODE>, where <CODE>s</CODE> is the seed value.
A specialised data structure (rather than <CODE>gmm</CODE>)
is used for the mixture model outputs to improve
the efficiency of error and gradient calculations in network training.
The fields are described in <CODE>mdnfwd</CODE>, where they are set up.

<p>The fields in <CODE>net</CODE> are
<PRE>

type = 'mdn'
nin = number of input variables
nout = dimension of target space (not number of network outputs)
nwts = total number of weights and biases
mdnmixes = data structure for mixture model output
mlp = data structure for MLP network
</PRE>


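<p>For instance, the distinction between <CODE>nout</CODE> and the number of
network outputs can be inspected directly. This is a sketch; the output count
<CODE>ncentres*(2 + dimtarget)</CODE> follows from the three output groups
described above (mixing coefficients, centres and variances).
<PRE>

net = mdn(2, 4, 3, 1);
net.type        % 'mdn'
net.nout        % 1, the target dimension
net.mlp.nout    % 9 = 3*(2 + 1), the number of MLP outputs
</PRE>
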
<p><h2>
Example
</h2>
<PRE>

net = mdn(2, 4, 3, 1, 'spherical');
</PRE>

This creates a Mixture Density Network with 2 inputs and 4 hidden units.
The mixture model has 3 components and the target space has dimension 1.

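<p>A typical calling pattern combines <CODE>mdn</CODE> with the related
functions listed below. This is a sketch only: <CODE>x</CODE> and
<CODE>t</CODE> are illustrative random data standing in for real inputs and
targets.
<PRE>

net = mdn(2, 4, 3, 1);      % 2 inputs, 4 hidden units, 3 centres, 1-d target
x = randn(20, 2);           % 20 input patterns
t = randn(20, 1);           % 20 target values
e = mdnerr(net, x, t);      % negative log likelihood of t under the model
mixes = mdnfwd(net, x);     % forward propagation: mixture model parameters
gmms = mdn2gmm(mixes);      % convert outputs to an array of gmm structures
</PRE>
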
<p><h2>
See Also
</h2>
<CODE><a href="mdnfwd.htm">mdnfwd</a></CODE>, <CODE><a href="mdnerr.htm">mdnerr</a></CODE>, <CODE><a href="mdn2gmm.htm">mdn2gmm</a></CODE>, <CODE><a href="mdngrad.htm">mdngrad</a></CODE>, <CODE><a href="mdnpak.htm">mdnpak</a></CODE>, <CODE><a href="mdnunpak.htm">mdnunpak</a></CODE>, <CODE><a href="mlp.htm">mlp</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)
<p>David J Evans (1998)

</body>
</html>