Documentation last updated on 7 June 2004

How to use BNT for DBNs


Note: you are recommended to read an introduction to DBNs first, such as this book chapter.
You may also want to consider using GMTk, which is an excellent C++ package for DBNs.

Model specification

Dynamic Bayesian Networks (DBNs) are directed graphical models of stochastic processes. They generalise hidden Markov models (HMMs) and linear dynamical systems (LDSs) by representing the hidden (and observed) state in terms of state variables, which can have complex interdependencies. The graphical structure provides an easy way to specify these conditional independencies, and hence to provide a compact parameterization of the model.

Note that "temporal Bayesian network" would be a better name than "dynamic Bayesian network", since it is assumed that the model structure does not change, but the term DBN has become entrenched. We also normally assume that the parameters do not change, i.e., the model is time-invariant. However, we can always add extra hidden nodes to represent the current "regime", thereby creating mixtures of models to capture periodic non-stationarities.

There are some cases where the size of the state space can change over time, e.g., tracking a variable, but unknown, number of objects. In this case, we need to change the model structure over time. BNT does not support this.

Hidden Markov Models (HMMs)

The simplest kind of DBN is a Hidden Markov Model (HMM), which has one discrete hidden node and one discrete or continuous observed node per slice. We illustrate this below. As before, circles denote continuous nodes, squares denote discrete nodes, clear means hidden, shaded means observed.

[Figure: an HMM unrolled for three time slices]

We have "unrolled" the model for three "time slices" -- the structure and parameters are assumed to repeat as the model is unrolled further. Hence to specify a DBN, we need to define the intra-slice topology (within a slice), the inter-slice topology (between two slices), as well as the parameters for the first two slices. (Such a two-slice temporal Bayes net is often called a 2TBN.)

We can specify the topology as follows.

intra = zeros(2);
intra(1,2) = 1; % node 1 in slice t connects to node 2 in slice t

inter = zeros(2);
inter(1,1) = 1; % node 1 in slice t-1 connects to node 1 in slice t
We can specify the parameters as follows, where for simplicity we assume the observed node is discrete.
Q = 2; % num hidden states
O = 2; % num observable symbols

ns = [Q O];
dnodes = 1:2;
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
for i=1:4
  bnet.CPD{i} = tabular_CPD(bnet, i);
end

We assume the distributions P(X(t) | X(t-1)) and P(Y(t) | X(t)) are independent of t for t > 1. Hence the CPD for nodes 5, 7, ... is the same as for node 3, so we say they are in the same equivalence class, with node 3 being the "representative" for this class. In other words, we have tied the parameters for nodes 3, 5, 7, ... Similarly, nodes 4, 6, 8, ... are tied. Note, however, that (the parameters for) nodes 1 and 2 are not tied to subsequent slices.

Above we assumed the observation model P(Y(t) | X(t)) is independent of t for t > 1, but it is conventional to assume this is true for all t. So we would like to put nodes 2, 4, 6, ... all in the same class. We can do this by explicitly defining the equivalence classes, as follows (see here for more details on parameter tying).

We define eclass1(i) to be the equivalence class that node i in slice 1 belongs to. Similarly, we define eclass2(i) to be the equivalence class that node i in slice 2, 3, ..., belongs to. For an HMM, we have

eclass1 = [1 2];
eclass2 = [3 2];
eclass = [eclass1 eclass2];
This ties the observation model across slices, since e.g., eclass(4) = eclass(2) = 2.

By default, eclass1 = 1:ss, and eclass2 = (1:ss)+ss, where ss = slice size = the number of nodes per slice. But by using the above tying pattern, we now only have 3 CPDs to specify, instead of 4:

bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
obsmat0 = mk_stochastic(rand(Q,O));
bnet.CPD{1} = tabular_CPD(bnet, 1, prior0);
bnet.CPD{2} = tabular_CPD(bnet, 2, obsmat0);
bnet.CPD{3} = tabular_CPD(bnet, 3, transmat0);
We discuss how to do inference and learning on this model below. (See also my HMM toolbox, which is included with BNT.)
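For comparison, here is a sketch of fitting the same model via the HMM toolbox's flat representation, reusing the prior0, transmat0 and obsmat0 matrices defined above. The dhmm_sample/dhmm_em argument conventions shown are assumptions; check the HMM toolbox documentation before relying on them.

% Sketch only: train the HMM above using the HMM toolbox.
T = 10; nseq = 5;
data = dhmm_sample(prior0, transmat0, obsmat0, nseq, T); % nseq x T matrix of symbols
[LL, prior1, transmat1, obsmat1] = ...
    dhmm_em(data, prior0, transmat0, obsmat0, 'max_iter', 10);
loglik = dhmm_logprob(data, prior1, transmat1, obsmat1);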

Some common variants on HMMs are shown below. BNT can handle all of these.

[Figures: some common HMM variants]

Linear Dynamical Systems (LDSs) and Kalman filters

A Linear Dynamical System (LDS) has the same topology as an HMM, but all the nodes are assumed to have linear-Gaussian distributions, i.e.,
   x(t+1) = A*x(t) + w(t),  w ~ N(0, Q),  x(0) ~ N(init_x, init_V)
   y(t)   = C*x(t) + v(t),  v ~ N(0, R)
Some simple variants are shown below.

[Figures: some simple LDS variants]

We can create a regular LDS in BNT as follows.

intra = zeros(2);
intra(1,2) = 1;
inter = zeros(2);
inter(1,1) = 1;
n = 2;

X = 2; % size of hidden state
Y = 2; % size of observable state

ns = [X Y];
dnodes = [];
onodes = [2];
eclass1 = [1 2];
eclass2 = [3 2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);

x0 = rand(X,1);
V0 = eye(X); % must be positive semi-definite!
C0 = rand(Y,X);
R0 = eye(Y);
A0 = rand(X,X);
Q0 = eye(X);

bnet.CPD{1} = gaussian_CPD(bnet, 1, 'mean', x0, 'cov', V0, 'cov_prior_weight', 0);
bnet.CPD{2} = gaussian_CPD(bnet, 2, 'mean', zeros(Y,1), 'cov', R0, 'weights', C0, ...
                           'clamp_mean', 1, 'cov_prior_weight', 0);
bnet.CPD{3} = gaussian_CPD(bnet, 3, 'mean', zeros(X,1), 'cov', Q0, 'weights', A0, ...
                           'clamp_mean', 1, 'cov_prior_weight', 0);
We discuss how to do inference and learning on this model below. (See also my Kalman filter toolbox, which is included with BNT.)
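As a quick illustration, the flat Kalman toolbox interface can be applied directly to the matrices defined above. This is a sketch: the observation sequence y is made up, and the kalman_filter/kalman_smoother signatures shown are assumptions to verify against the toolbox documentation.

y = rand(Y, 10); % hypothetical observation sequence, one column per time step
[xfilt, Vfilt, VVfilt, loglik] = kalman_filter(y, A0, C0, Q0, R0, x0, V0);
[xsmooth, Vsmooth] = kalman_smoother(y, A0, C0, Q0, R0, x0, V0);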


Coupled HMMs

Here is an example of a coupled HMM with N=5 chains, unrolled for T=3 slices. Each hidden discrete node has a private observed Gaussian child.

[Figure: a coupled HMM with 5 chains, unrolled for 3 slices]

We can make this using the mk_chmm function:

N = 5; % num chains
Q = 2; % binary hidden nodes
discrete_obs = 0; % observed nodes are continuous
Y = 1; % scalar observed nodes
bnet = mk_chmm(N, Q, Y, discrete_obs);
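For example, we can draw a random sequence from the resulting model using sample_dbn (demonstrated in the inference section below):

T = 3;                    % number of slices to sample
ev = sample_dbn(bnet, T); % a (2*N) x T cell array: N hidden + N observed nodes per slice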

Water network

Consider the following model of a water purification plant, developed by Finn V. Jensen, Uffe Kjærulff, Kristian G. Olesen, and Jan Pedersen.

[Figure: the water network DBN]
We now show how to specify this model in BNT.
ss = 12; % slice size
intra = zeros(ss);
intra(1,9) = 1;
intra(3,10) = 1;
intra(4,11) = 1;
intra(8,12) = 1;

inter = zeros(ss);
inter(1, [1 3]) = 1; % node 1 in slice 1 connects to nodes 1 and 3 in slice 2
inter(2, [2 3 7]) = 1;
inter(3, [3 4 5]) = 1;
inter(4, [3 4 6]) = 1;
inter(5, [3 5 6]) = 1;
inter(6, [4 5 6]) = 1;
inter(7, [7 8]) = 1;
inter(8, [6 7 8]) = 1;

onodes = 9:12; % observed
dnodes = 1:ss; % discrete
ns = 2*ones(1,ss); % binary nodes
eclass1 = 1:12;
eclass2 = [13:20 9:12];
eclass = [eclass1 eclass2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);
for e=1:max(eclass)
  bnet.CPD{e} = tabular_CPD(bnet, e);
end
We have tied the observation parameters across all slices. Click here for a more complex example of parameter tying.

BATnet

As an example of a more complicated DBN, consider the following model of a car's high-level state, as might be used by an automated car. (The model is from Forbes, Huang, Kanazawa and Russell, "The BATmobile: Towards a Bayesian Automated Taxi", IJCAI 95. The figure is from Boyen and Koller, "Tractable Inference for Complex Stochastic Processes", UAI 98. For simplicity, we only show the observed nodes for slice 2.)

[Figure: the BATnet DBN]

Since this topology is so complicated, it is useful to be able to refer to the nodes by name, instead of number.

names = {'LeftClr', 'RightClr', 'LatAct', ... 'Bclr', 'BYdotDiff'};
ss = length(names);
We can specify the intra-slice topology using a cell array as follows, where each row specifies a connection between two named nodes:
intrac = {...
  'LeftClr', 'LeftClrSens';
  'RightClr', 'RightClrSens';
  ...
  'BYdotDiff', 'BcloseFast'};
Finally, we can convert this cell array to an adjacency matrix using the following function:
[intra, names] = mk_adj_mat(intrac, names, 1);
This function also permutes the names so that they are in topological order. Given this ordering of the names, we can make the inter-slice connectivity matrix as follows:
interc = {...
  'LeftClr', 'LeftClr';
  'LeftClr', 'LatAct';
  ...
  'FBStatus', 'LatAct'};

inter = mk_adj_mat(interc, names, 0);
To refer to a node, we must know its number, which can be computed as in the following example:
obs = {'LeftClrSens', 'RightClrSens', 'TurnSignalSens', 'XdotSens', 'YdotSens', 'FYdotDiffSens', ...
       'FclrSens', 'BXdotSens', 'BclrSens', 'BYdotDiffSens'};
for i=1:length(obs)
  onodes(i) = strmatch(obs{i}, names);
end
onodes = sort(onodes);
(We sort the onodes since most BNT routines assume that set-valued arguments are in sorted order.) We can now make the DBN:
dnodes = 1:ss;
ns = 2*ones(1,ss); % binary nodes
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
To specify the parameters, we must know the order of the parents. See the function BNT/general/mk_named_CPT for a way to do this in the case of tabular nodes. For simplicity, we just generate random parameters:
for i=1:2*ss
  bnet.CPD{i} = tabular_CPD(bnet, i);
end
A complete version of this example is available in BNT/examples/dynamic/bat1.m.

Inference

The general inference problem for DBNs is to compute P(X(i,t0) | Y(:, t1:t2)), where X(i,t) represents the i'th hidden variable at time t and Y(:,t1:t2) represents all the evidence between times t1 and t2. There are several special cases of interest, illustrated below. The arrow indicates t0: it is X(t0) that we are trying to estimate. The shaded region denotes t1:t2, the available data.

[Figure: special cases of DBN inference, e.g., filtering, smoothing and prediction]

BNT can currently only handle offline smoothing. (The HMM engine handles filtering and, to a limited extent, prediction.) The usage is similar to static inference engines, except now the evidence is a 2D cell array of size ss*T, where ss is the number of nodes per slice (the slice size) and T is the number of slices. Also, 'marginal_nodes' takes two arguments, the nodes and the time-slice. For example, to compute P(X(i,t) | y(:,1:T)), we proceed as follows (where onodes are the indices of the observed nodes in each slice, which correspond to y):

ev = sample_dbn(bnet, T);
evidence = cell(ss,T);
evidence(onodes,:) = ev(onodes, :); % all cells besides onodes are empty
[engine, ll] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, i, t);
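Note that this fragment assumes an inference engine has already been created from the bnet; any of the DBN engines discussed below will work, e.g.,

engine = smoother_engine(jtree_2TBN_inf_engine(bnet));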

Discrete hidden nodes

If all the hidden nodes are discrete, we can use the junction tree algorithm to perform inference. The simplest approach, jtree_unrolled_dbn_inf_engine, unrolls the DBN into a static network and applies jtree; however, for long sequences, this can be very slow and can result in numerical underflow. A better approach is to apply the jtree algorithm to pairs of neighboring slices at a time; this is implemented in jtree_dbn_inf_engine.

A DBN can be converted to an HMM if all the hidden nodes are discrete. In this case, you can use hmm_inf_engine. This is faster than jtree for small models because the constant factors of the algorithm are lower, but can be exponentially slower for models with many variables (e.g., > 6 binary hidden nodes).

The use of both jtree_dbn_inf_engine and hmm_inf_engine is deprecated. A better approach is to construct a smoother engine out of lower-level engines, which implement forward/backward operators. You can create these engines as follows.

engine = smoother_engine(hmm_2TBN_inf_engine(bnet));
or
engine = smoother_engine(jtree_2TBN_inf_engine(bnet));
You then call them in the usual way:
engine = enter_evidence(engine, evidence);
m = marginal_nodes(engine, nodes, t);
Note: you must declare the observed nodes in the bnet before using hmm_2TBN_inf_engine.
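A sketch of such a declaration (assuming mk_dbn accepts an 'observed' argument, as mk_bnet does):

bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'observed', onodes);
engine = smoother_engine(hmm_2TBN_inf_engine(bnet));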

Unfortunately, when all the hidden nodes are discrete, exact inference takes O(2^n) time, where n is the number of hidden nodes per slice, even if the model is sparse. The basic reason for this is that two nodes become correlated, even if there is no direct connection between them in the 2TBN, by virtue of sharing common ancestors in the past. Hence we need to use approximations.

A popular approximate inference algorithm for discrete DBNs, known as BK, is described in Boyen and Koller, "Tractable Inference for Complex Stochastic Processes" (UAI 98).

This approximates the belief state with a product of marginals on a specified set of clusters. For example, in the water network, we might use the following clusters:
engine = bk_inf_engine(bnet, { [1 2], [3 4 5 6], [7 8] });
This engine can now be used just like the jtree engine. Two special cases of the BK algorithm are supported: 'ff' (fully factored) means each node has its own cluster, and 'exact' means there is 1 cluster that contains the whole slice. These can be created as follows:
engine = bk_inf_engine(bnet, 'ff');
engine = bk_inf_engine(bnet, 'exact');
For pedagogical purposes, an implementation of BK-FF that uses an HMM instead of junction tree is available at bk_ff_hmm_inf_engine.

Continuous hidden nodes

If all the hidden nodes are linear-Gaussian, and the observed nodes are linear-Gaussian, the model is a linear dynamical system (LDS). A DBN can be converted to an LDS if all the hidden nodes are linear-Gaussian and if they are all persistent. In this case, you can use kalman_inf_engine. For more general linear-Gaussian models, you can use jtree_dbn_inf_engine or jtree_unrolled_dbn_inf_engine.
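For instance, for the LDS constructed earlier, a sketch (assuming the usual single-argument engine constructor):

engine = kalman_inf_engine(bnet);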

For nonlinear systems with Gaussian noise, the unscented Kalman filter (UKF), due to Julier and Uhlmann, is far superior to the well-known extended Kalman filter (EKF), both in theory and practice. The key idea of the UKF is that it is easier to estimate a Gaussian distribution from a set of points than to approximate an arbitrary non-linear function. We start with points that are plus/minus sigma away from the mean along each dimension, then pipe them through the nonlinearity, and then fit a Gaussian to the transformed points. (No need to compute Jacobians, unlike the EKF!)
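To make the sigma-point idea concrete, here is a minimal, self-contained sketch of the unscented transform. It is not part of BNT; the function name and the kappa weighting scheme are standard but chosen here purely for illustration.

function [mu_y, Sigma_y] = unscented_transform(f, mu, Sigma, kappa)
% Propagate N(mu, Sigma) through a nonlinearity f by transforming
% 2n+1 sigma points and fitting a Gaussian to the results.
n = length(mu);
S = chol((n + kappa) * Sigma)';  % columns are the sigma-point offsets
X = [mu, repmat(mu,1,n) + S, repmat(mu,1,n) - S];
w = [kappa, 0.5*ones(1,2*n)] / (n + kappa); % weights sum to 1
Y = zeros(length(f(mu)), 2*n + 1);
for i = 1:2*n + 1
  Y(:,i) = f(X(:,i));            % no Jacobians needed, unlike the EKF
end
mu_y = Y * w';
D = Y - mu_y * ones(1, 2*n + 1);
Sigma_y = D * diag(w) * D';

For linear f this recovers the mean and covariance exactly; for nonlinear f it matches them to second order.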

For systems with non-Gaussian noise, I recommend Particle filtering (PF), which is a popular sequential Monte Carlo technique.
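As a flavour of how simple the bootstrap particle filter is (again, not part of BNT), here is a sketch for a generic scalar model x(t) = f(x(t-1)) + w, y(t) = g(x(t)) + v with Gaussian noises; f, g, Qn, Rn, y and T are all hypothetical placeholders.

% Bootstrap particle filter sketch (all model quantities are placeholders).
N = 1000;
x = randn(N,1);                         % initial particles from a standard prior
xest = zeros(1,T);
for t = 1:T
  x = f(x) + sqrt(Qn)*randn(N,1);       % propose by sampling the dynamics
  w = exp(-0.5*(y(t) - g(x)).^2 / Rn);  % weight by the (Gaussian) likelihood
  w = w / sum(w);
  c = cumsum(w);                        % multinomial resampling
  xnew = zeros(N,1);
  for i = 1:N
    xnew(i) = x(find(c >= rand, 1));
  end
  x = xnew;
  xest(t) = mean(x);                    % posterior mean estimate
end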

The EKF can be used as a proposal distribution for a PF. This method is better than either one alone. See "The Unscented Particle Filter", by R van der Merwe, A Doucet, JFG de Freitas and E Wan, May 2000. Matlab software for the UPF is also available.

Note: none of this software is part of BNT.

Learning

Learning in DBNs can be done online or offline. Currently only offline learning is implemented in BNT.

Parameter learning

Offline parameter learning is very similar to learning in static networks, except now the training data is a cell-array of 2D cell-arrays. For example, cases{l}{i,t} is the value of node i in slice t in sequence l, or [] if unobserved. Each sequence can be a different length, and may have missing values in arbitrary locations. Here is a typical code fragment for using EM.
ncases = 2;
cases = cell(1, ncases);
for i=1:ncases
  ev = sample_dbn(bnet, T);
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = ev(onodes, :);
end
[bnet2, LLtrace] = learn_params_dbn_em(engine, cases, 'max_iter', 10);
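Note that learn_params_dbn_em takes an inference engine, not a bnet, as its first argument, so the engine must be constructed beforehand, e.g. using one of the smoother engines from the inference section:

engine = smoother_engine(jtree_2TBN_inf_engine(bnet));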
If the observed node is vector-valued and stored in an OxT array, you need to assign each vector to a single cell, as in the following example.
data = [xpos(:)'; ypos(:)'];
ncases = 1;
cases = cell(1, ncases);
onodes = bnet.observed;
for i=1:ncases
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = num2cell(data(:,1:T), 1);
end

For a complete code listing of how to do EM in a simple DBN, click here.

Structure learning

There is currently only one structure learning algorithm for DBNs. This assumes all nodes are tabular and observed, and that there are no intra-slice connections. Hence we can find the optimal set of parents for each node separately, without worrying about directed cycles or node orderings. The function is called as follows:
inter = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty)
A full example is given in BNT/examples/dynamic/reveal1.m. Setting the penalty term to 0 gives the maximum likelihood model; this is equivalent to maximizing the mutual information between parents and child (in the bioinformatics community, this is known as the REVEAL algorithm). A non-zero penalty invokes the BIC criterion, which lessens the chance of overfitting.
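A usage sketch follows; the exact format of cases should be checked against reveal1.m, but here we assume each case is a fully observed ss x T matrix of discrete values.

% Sketch: structure learning with REVEAL/BIC (data format assumed).
max_fan_in = 2; % at most 2 parents per node (all in the previous slice)
penalty = 0.5;  % 0 = maximum likelihood (REVEAL); > 0 = BIC penalty
inter = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty);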

Dirk Husmeier has extended MCMC model selection to DBNs.