annotate toolboxes/FullBNT-1.0.7/netlab3.3/mdn.m @ 0:cc4b1211e677 tip
commit: initial commit to HG from Changeset: 646 (e263d8a21543) added further path and more save "camirversion.m"
author: Daniel Wolff
date: Fri, 19 Aug 2016 13:07:06 +0200
function net = mdn(nin, nhidden, ncentres, dim_target, mix_type, ...
    prior, beta)
%MDN Creates a Mixture Density Network with specified architecture.
%
% Description
% NET = MDN(NIN, NHIDDEN, NCENTRES, DIMTARGET) takes the number of
% inputs and hidden units for a 2-layer feed-forward network, and the
% number of centres and target dimension for the mixture model whose
% parameters are set from the outputs of the neural network. The fifth
% argument MIXTYPE is used to define the type of mixture model.
% (Currently only one type is supported: a mixture of Gaussians with
% a single covariance parameter for each component.) For this
% model, the mixture coefficients are computed from a group of softmax
% outputs, the centres are equal to a group of linear outputs, and the
% variances are obtained by applying the exponential function to a
% third group of outputs.
%
% The network is initialised by a call to MLP, and the arguments PRIOR
% and BETA have the same role as for that function. Weight
% initialisation uses the Matlab function RANDN, so the seed for
% the random weight initialisation can be set using RANDN('STATE', S),
% where S is the seed value. A specialised data structure (rather than
% GMM) is used for the mixture model outputs to improve the efficiency
% of error and gradient calculations in network training. The fields
% are described in MDNFWD where they are set up.
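%
% As an illustration of the seeding behaviour described above (the
% argument values here are hypothetical, not from the original text;
% the MIXTYPE value is passed but currently ignored):
%
%   randn('state', 42);            % fix the seed for reproducibility
%   net = mdn(2, 5, 3, 1, 'gmm');  % weights are drawn via RANDN
%
% Repeating both lines yields a network with an identical NET.MLP
% weight vector.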
%
% The fields in NET are
%
%   type = 'mdn'
%   nin = number of input variables
%   nout = dimension of target space (not number of network outputs)
%   nwts = total number of weights and biases
%   mdnmixes = data structure for mixture model output
%   mlp = data structure for MLP network
%
% See also
% MDNFWD, MDNERR, MDN2GMM, MDNGRAD, MDNPAK, MDNUNPAK, MLP
%

% Copyright (c) Ian T Nabney (1996-2001)
% David J Evans (1998)

% Currently the MIX_TYPE argument is ignored: reserved for future use
net.type = 'mdn';

% Set up the mixture model part of the structure.
% For efficiency we use a specialised data structure in place of GMM
mdnmixes.type = 'mdnmixes';
mdnmixes.ncentres = ncentres;
mdnmixes.dim_target = dim_target;

% This calculation depends on spherical variances: ncentres mixing
% coefficients, ncentres*dim_target centre coordinates, and ncentres
% variance parameters
mdnmixes.nparams = ncentres + ncentres*dim_target + ncentres;
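% As a worked example of the count above (hypothetical sizes): for
% ncentres = 3 and dim_target = 2 there are 3 mixing coefficients,
% 3*2 = 6 centre coordinates and 3 spherical variances, so
% nparams = 3 + 6 + 3 = 12, and the MLP below is built with 12
% linear output units.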

% Make the weights in the mdnmixes structure null
mdnmixes.mixcoeffs = [];
mdnmixes.centres = [];
mdnmixes.covars = [];

% Number of output nodes = number of parameters in mixture model
nout = mdnmixes.nparams;

% Set up the MLP part of the network
if (nargin <= 5)
  % No prior or beta given (the mix_type argument is currently unused)
  mlpnet = mlp(nin, nhidden, nout, 'linear');
elseif (nargin == 6)
  mlpnet = mlp(nin, nhidden, nout, 'linear', prior);
else
  mlpnet = mlp(nin, nhidden, nout, 'linear', prior, beta);
end

% Create descriptor
net.mdnmixes = mdnmixes;
net.mlp = mlpnet;
net.nin = nin;
net.nout = dim_target;
net.nwts = mlpnet.nwts;
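
% Example usage (a sketch based on the Description above; MDNFWD and
% MDN2GMM are the companion Netlab functions listed under "See also",
% and all sizes here are hypothetical):
%
%   net = mdn(2, 5, 3, 1, 'gmm');  % 2 inputs, 5 hidden units,
%                                  % 3 Gaussian centres, 1-D target
%   x = randn(10, 2);              % 10 random input vectors
%   mixes = mdnfwd(net, x);        % mixture parameters for each input
%   gmms = mdn2gmm(mixes);         % convert to an array of GMM structures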