<HEAD>
<TITLE>How to use BNT for DBNs</TITLE>
</HEAD>

<BODY BGCOLOR="#FFFFFF">
<!-- white background is better for the pictures and equations -->

Documentation last updated on 13 November 2002

<h1>How to use BNT for DBNs</h1>

<p>
<ul>
<li> <a href="#spec">Model specification</a>
<ul>
<li> <a href="#hmm">HMMs</a>
<li> <a href="#lds">Kalman filters</a>
<li> <a href="#chmm">Coupled HMMs</a>
<li> <a href="#water">Water network</a>
<li> <a href="#bat">BAT network</a>
</ul>

<li> <a href="#inf">Inference</a>
<ul>
<li> <a href="#discrete">Discrete hidden nodes</a>
<li> <a href="#cts">Continuous hidden nodes</a>
</ul>

<li> <a href="#learn">Learning</a>
<ul>
<li> <a href="#param_learn">Parameter learning</a>
<li> <a href="#struct_learn">Structure learning</a>
</ul>

</ul>

Note:
we recommend that you first read an introduction
to DBNs, such as
<a href="http://www.ai.mit.edu/~murphyk/Papers/dbnchapter.pdf">
this book chapter</a>.


<h1><a name="spec">Model specification</a></h1>


Dynamic Bayesian Networks (DBNs) are directed graphical models of stochastic
processes.
They generalise <a href="#hmm">hidden Markov models (HMMs)</a>
and <a href="#lds">linear dynamical systems (LDSs)</a>
by representing the hidden (and observed) state in terms of state
variables, which can have complex interdependencies.
The graphical structure provides an easy way to specify these
conditional independencies, and hence to provide a compact
parameterization of the model.
<p>
Note that "temporal Bayesian network" would be a better name than
"dynamic Bayesian network", since
it is assumed that the model structure does not change, but
the term DBN has become entrenched.
We also normally assume that the parameters do not
change, i.e., the model is time-invariant.
However, we can always add extra
hidden nodes to represent the current "regime", thereby creating
mixtures of models to capture periodic non-stationarities.
<p>
There are some cases where the size of the state space can change over
time, e.g., tracking a variable, but unknown, number of objects.
In this case, we need to change the model structure over time.
BNT does not support this.


<h2><a name="hmm">Hidden Markov Models (HMMs)</a></h2>

The simplest kind of DBN is a Hidden Markov Model (HMM), which has
one discrete hidden node and one discrete or continuous
observed node per slice. We illustrate this below.
As before, circles denote continuous nodes, squares denote
discrete nodes, clear means hidden, shaded means observed.
<p>
<img src="Figures/hmm3.gif">
<p>
We have "unrolled" the model for three "time slices" -- the structure and parameters are
assumed to repeat as the model is unrolled further.
Hence to specify a DBN, we need to
define the intra-slice topology (within a slice),
the inter-slice topology (between two slices),
as well as the parameters for the first two slices.
(Such a two-slice temporal Bayes net is often called a 2TBN.)
<p>
We can specify the topology as follows.
<PRE>
intra = zeros(2);
intra(1,2) = 1; % node 1 in slice t connects to node 2 in slice t

inter = zeros(2);
inter(1,1) = 1; % node 1 in slice t-1 connects to node 1 in slice t
</pre>
We can specify the parameters as follows,
where for simplicity we assume the observed node is discrete.
<pre>
Q = 2; % num hidden states
O = 2; % num observable symbols

ns = [Q O];
dnodes = 1:2;
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
for i=1:4
  bnet.CPD{i} = tabular_CPD(bnet, i);
end
</pre>
<p>
We assume the distributions P(X(t) | X(t-1)) and
P(Y(t) | X(t)) are independent of t for t > 1.
Hence the CPD for nodes 5, 7, ... is the same as for node 3, so we say they
are in the same equivalence class, with node 3 being the "representative"
for this class. In other words, we have tied the parameters for nodes
3, 5, 7, ...
Similarly, nodes 4, 6, 8, ... are tied.
Note, however, that (the parameters for) nodes 1 and 2 are not tied to
subsequent slices.
<p>
Above we assumed the observation model P(Y(t) | X(t)) is independent of t for t>1, but
it is conventional to assume this is true for all t.
So we would like to put nodes 2, 4, 6, ... all in the same class.
We can do this by explicitly defining the equivalence classes, as
follows (see <a href="usage.html#tying">here</a> for more details on
parameter tying).
<p>
We define eclass1(i) to be the equivalence class that node i in slice
1 belongs to.
Similarly, we define eclass2(i) to be the equivalence class that node i in slice
2, 3, ..., belongs to.
For an HMM, we have
<pre>
eclass1 = [1 2];
eclass2 = [3 2];
eclass = [eclass1 eclass2];
</pre>
This ties the observation model across slices,
since e.g., eclass(4) = eclass(2) = 2.
<p>
By default,
eclass1 = 1:ss, and eclass2 = (1:ss)+ss, where ss = slice size = the
number of nodes per slice.
But by using the above tying pattern,
we now only have 3 CPDs to specify, instead of 4:
<pre>
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
obsmat0 = mk_stochastic(rand(Q,O));
bnet.CPD{1} = tabular_CPD(bnet, 1, prior0);
bnet.CPD{2} = tabular_CPD(bnet, 2, obsmat0);
bnet.CPD{3} = tabular_CPD(bnet, 3, transmat0);
</pre>
We discuss how to do <a href="#inf">inference</a> and <a href="#learn">learning</a> on this model
below.
(See also
my <a href="../HMM/hmm.html">HMM toolbox</a>, which is included with BNT.)
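<p>
Before doing inference or learning, it can be useful to sanity-check the
specification by sampling from the model. A minimal sketch, using the bnet
defined above (the sequence length T is our choice here):
<pre>
T = 10;                       % number of slices to unroll
ev = sample_dbn(bnet, T);     % ss-by-T cell array of sampled values
hidden = cell2num(ev(1,:))    % sampled hidden states, slices 1:T
obs    = cell2num(ev(2,:))    % sampled observations
</pre>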

<p>
Some common variants on HMMs are shown below.
BNT can handle all of these.
<p>
<center>
<table>
<tr>
<td><img src="Figures/hmm_gauss.gif">
<td><img src="Figures/hmm_mixgauss.gif">
<td><img src="Figures/hmm_ar.gif">
<tr>
<td><img src="Figures/hmm_factorial.gif">
<td><img src="Figures/hmm_coupled.gif">
<td><img src="Figures/hmm_io.gif">
</table>
</center>


<h2><a name="lds">Linear Dynamical Systems (LDSs) and Kalman filters</a></h2>

A Linear Dynamical System (LDS) has the same topology as an HMM, but
all the nodes are assumed to have linear-Gaussian distributions, i.e.,
<pre>
x(t+1) = A*x(t) + w(t),  w ~ N(0, Q),  x(0) ~ N(init_x, init_V)
y(t)   = C*x(t) + v(t),  v ~ N(0, R)
</pre>
Some simple variants are shown below.
<p>
<center>
<table>
<tr>
<td><img src="Figures/ar1.gif">
<td><img src="Figures/sar.gif">
<td><img src="Figures/kf.gif">
<td><img src="Figures/skf.gif">
</table>
</center>
<p>

We can create a regular LDS in BNT as follows.
<pre>
intra = zeros(2);
intra(1,2) = 1;
inter = zeros(2);
inter(1,1) = 1;
n = 2;

X = 2; % size of hidden state
Y = 2; % size of observable state

ns = [X Y];
dnodes = [];
onodes = [2];
eclass1 = [1 2];
eclass2 = [3 2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);

x0 = rand(X,1);
V0 = eye(X); % must be positive semi-definite!
C0 = rand(Y,X);
R0 = eye(Y);
A0 = rand(X,X);
Q0 = eye(X);

bnet.CPD{1} = gaussian_CPD(bnet, 1, 'mean', x0, 'cov', V0, 'cov_prior_weight', 0);
bnet.CPD{2} = gaussian_CPD(bnet, 2, 'mean', zeros(Y,1), 'cov', R0, 'weights', C0, ...
   'clamp_mean', 1, 'cov_prior_weight', 0);
bnet.CPD{3} = gaussian_CPD(bnet, 3, 'mean', zeros(X,1), 'cov', Q0, 'weights', A0, ...
   'clamp_mean', 1, 'cov_prior_weight', 0);
</pre>
We discuss how to do <a href="#inf">inference</a> and <a href="#learn">learning</a> on this model
below.
(See also
my <a href="../Kalman/kalman.html">Kalman filter toolbox</a>, which is included with BNT.)
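<p>
For example, exact smoothing in this model can be done with the
<tt>kalman_inf_engine</tt> discussed <a href="#cts">below</a>.
A minimal sketch, using the bnet just defined (the sequence length T
is our choice here):
<pre>
T = 10;
engine = kalman_inf_engine(bnet);
ev = sample_dbn(bnet, T);
evidence = cell(2, T);
evidence(2,:) = ev(2,:);              % observe node 2 (y) in every slice
[engine, ll] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, 1, T);  % P(x(T) | y(1:T)); see marg.mu, marg.Sigma
</pre>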
<p>


<h2><a name="chmm">Coupled HMMs</a></h2>

Here is an example of a coupled HMM with N=5 chains, unrolled for T=3
slices. Each hidden discrete node has a private observed Gaussian
child.
<p>
<img src="Figures/chmm5.gif">
<p>
We can make this using the function <tt>mk_chmm</tt>, as follows:
<pre>
N = 5; % num chains
Q = 2; % binary hidden nodes
discrete_obs = 0; % cts observed nodes
Y = 1; % scalar observed nodes
bnet = mk_chmm(N, Q, Y, discrete_obs);
</pre>
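<p>
Exact inference in this model becomes expensive as N grows, so coupled HMMs
are natural candidates for the approximate BK engine described in the
<a href="#discrete">inference section below</a>. A minimal sketch, using the
fully-factored special case:
<pre>
engine = bk_inf_engine(bnet, 'ff');
</pre>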



<h2><a name="water">Water network</a></h2>

Consider the following model
of a water purification plant, developed
by Finn V. Jensen, Uffe Kjærulff, Kristian G. Olesen, and Jan
Pedersen.
<p>
<center>
<IMG SRC="Figures/water3_75.gif">
</center>
We now show how to specify this model in BNT.
<pre>
ss = 12; % slice size
intra = zeros(ss);
intra(1,9) = 1;
intra(3,10) = 1;
intra(4,11) = 1;
intra(8,12) = 1;

inter = zeros(ss);
inter(1, [1 3]) = 1; % node 1 in slice 1 connects to nodes 1 and 3 in slice 2
inter(2, [2 3 7]) = 1;
inter(3, [3 4 5]) = 1;
inter(4, [3 4 6]) = 1;
inter(5, [3 5 6]) = 1;
inter(6, [4 5 6]) = 1;
inter(7, [7 8]) = 1;
inter(8, [6 7 8]) = 1;

onodes = 9:12; % observed
dnodes = 1:ss; % discrete
ns = 2*ones(1,ss); % binary nodes
eclass1 = 1:12;
eclass2 = [13:20 9:12];
eclass = [eclass1 eclass2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);
for e=1:max(eclass)
  bnet.CPD{e} = tabular_CPD(bnet, e);
end
</pre>
We have tied the observation parameters across all slices.
Click <a href="param_tieing.html">here</a> for a more complex example
of parameter tying.



<h2><a name="bat">BATnet</a></h2>

As an example of a more complicated DBN, consider the following
example,
which is a model of a car's high level state, as might be used by
an automated car.
(The model is from Forbes, Huang, Kanazawa and Russell, "The BATmobile: Towards a
Bayesian Automated Taxi", IJCAI 95. The figure is from
Boyen and Koller, "Tractable Inference for Complex Stochastic
Processes", UAI98.
For simplicity, we only show the observed nodes for slice 2.)
<p>
<center>
<IMG SRC="Figures/batnet.gif">
</center>
<p>
Since this topology is so complicated,
it is useful to be able to refer to the nodes by name, instead of
number.
<pre>
names = {'LeftClr', 'RightClr', 'LatAct', ... 'Bclr', 'BYdotDiff'};
ss = length(names);
</pre>
We can specify the intra-slice topology using a cell array as follows,
where each row specifies a connection between two named nodes:
<pre>
intrac = {...
  'LeftClr', 'LeftClrSens';
  'RightClr', 'RightClrSens';
  ...
  'BYdotDiff', 'BcloseFast'};
</pre>
Finally, we can convert this cell array to an adjacency matrix using
the following function:
<pre>
[intra, names] = mk_adj_mat(intrac, names, 1);
</pre>
This function also permutes the names so that they are in topological
order.
Given this ordering of the names, we can make the inter-slice
connectivity matrix as follows:
<pre>
interc = {...
  'LeftClr', 'LeftClr';
  'LeftClr', 'LatAct';
  ...
  'FBStatus', 'LatAct'};

inter = mk_adj_mat(interc, names, 0);
</pre>

To refer to a node, we must know its number, which can be computed as
in the following example:
<pre>
obs = {'LeftClrSens', 'RightClrSens', 'TurnSignalSens', 'XdotSens', 'YdotSens', 'FYdotDiffSens', ...
       'FclrSens', 'BXdotSens', 'BclrSens', 'BYdotDiffSens'};
for i=1:length(obs)
  onodes(i) = stringmatch(obs{i}, names);
end
onodes = sort(onodes);
</pre>
(We sort the onodes since most BNT routines assume that set-valued
arguments are in sorted order.)
We can now make the DBN:
<pre>
dnodes = 1:ss;
ns = 2*ones(1,ss); % binary nodes
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
</pre>
To specify the parameters, we must know the order of the parents.
See the function BNT/general/mk_named_CPT for a way to do this in the
case of tabular nodes. For simplicity, we just generate random
parameters:
<pre>
for i=1:2*ss
  bnet.CPD{i} = tabular_CPD(bnet, i);
end
</pre>
A complete version of this example is available in BNT/examples/dynamic/bat1.m.




<h1><a name="inf">Inference</a></h1>


The general inference problem for DBNs is to compute
P(X(i,t0) | Y(:, t1:t2)), where X(i,t) represents the i'th hidden
variable at time t and Y(:,t1:t2) represents all the evidence
between times t1 and t2.
There are several special cases of interest, illustrated below.
The arrow indicates t0: it is X(t0) that we are trying to estimate.
The shaded region denotes t1:t2, the available data.
<p>

<img src="Figures/filter.gif">

<p>
BNT can currently only handle offline smoothing.
(The HMM engine handles filtering and, to a limited extent, prediction.)
The usage is similar to static
inference engines, except now the evidence is a 2D cell array of
size ss-by-T, where ss is the number of nodes per slice and T is the
number of slices.
Also, 'marginal_nodes' takes an extra argument, the time slice.
For example, to compute P(X(i,t) | y(:,1:T)) using one of the engines
described below, we proceed as follows
(where onodes are the indices of the observed nodes in each slice,
which correspond to y):
<pre>
ev = sample_dbn(bnet, T);
evidence = cell(ss,T);
evidence(onodes,:) = ev(onodes, :); % all cells besides onodes are empty
[engine, ll] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, i, t);
</pre>


<h2><a name="discrete">Discrete hidden nodes</a></h2>

If all the hidden nodes are discrete,
we can use the junction tree algorithm to perform inference.
The simplest approach,
<tt>jtree_unrolled_dbn_inf_engine</tt>,
unrolls the DBN into a static network and applies jtree; however, for
long sequences, this
can be very slow and can result in numerical underflow.
A better approach is to apply the jtree algorithm to pairs of
neighboring slices at a time; this is implemented in
<tt>jtree_dbn_inf_engine</tt>.

<p>
A DBN can be converted to an HMM if all the hidden nodes are discrete.
In this case, you can use
<tt>hmm_inf_engine</tt>. This is faster than jtree for small models
because the constant factors of the algorithm are lower, but can be
exponentially slower for models with many variables
(e.g., > 6 binary hidden nodes).
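<p>
For concreteness, a minimal sketch of creating these engines; note that the
unrolled engine needs to know the sequence length T in advance:
<pre>
engine1 = jtree_unrolled_dbn_inf_engine(bnet, T);
engine2 = jtree_dbn_inf_engine(bnet);
engine3 = hmm_inf_engine(bnet);
</pre>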

<p>
The use of both
<tt>jtree_dbn_inf_engine</tt>
and
<tt>hmm_inf_engine</tt>
is deprecated.
A better approach is to construct a smoother engine out of lower-level
engines, which implement forward/backward operators.
You can create these engines as follows.
<pre>
engine = smoother_engine(hmm_2TBN_inf_engine(bnet));
% or
engine = smoother_engine(jtree_2TBN_inf_engine(bnet));
</pre>
You then call them in the usual way:
<pre>
engine = enter_evidence(engine, evidence);
m = marginal_nodes(engine, nodes, t);
</pre>
Note: you must declare the observed nodes in the bnet before using
hmm_2TBN_inf_engine.
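<p>
A minimal sketch of doing this, using the optional 'observed' argument of
mk_dbn (onodes as in the examples above):
<pre>
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'observed', onodes);
engine = smoother_engine(hmm_2TBN_inf_engine(bnet));
</pre>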


<p>
Unfortunately, when all the hidden nodes are discrete,
exact inference takes O(2^n) time, where n is the number of hidden
nodes per slice,
even if the model is sparse.
The basic reason for this is that two nodes become correlated, even if
there is no direct connection between them in the 2TBN,
by virtue of sharing common ancestors in the past.
Hence we need to use approximations.
<p>
A popular approximate inference algorithm for discrete DBNs, known as BK, is described in
<ul>
<li>
<A HREF="http://robotics.Stanford.EDU/~xb/uai98/index.html">
Tractable inference for complex stochastic processes</A>,
Boyen and Koller, UAI 1998
<li>
<A HREF="http://robotics.Stanford.EDU/~xb/nips98/index.html">
Approximate learning of dynamic models</a>, Boyen and Koller, NIPS
1998.
</ul>
This approximates the belief state with a product of
marginals on a specified set of clusters. For example,
in the water network, we might use the following clusters:
<pre>
engine = bk_inf_engine(bnet, { [1 2], [3 4 5 6], [7 8] });
</pre>
This engine can now be used just like the jtree engine.
Two special cases of the BK algorithm are supported: 'ff' (fully
factored) means each node has its own cluster, and 'exact' means there
is 1 cluster that contains the whole slice. These can be created as
follows:
<pre>
engine = bk_inf_engine(bnet, 'ff');
engine = bk_inf_engine(bnet, 'exact');
</pre>
For pedagogical purposes, an implementation of BK-FF that uses an HMM
instead of junction tree is available at
<tt>bk_ff_hmm_inf_engine</tt>.



<h2><a name="cts">Continuous hidden nodes</a></h2>

If all the hidden nodes are linear-Gaussian, <em>and</em> the observed nodes are
linear-Gaussian,
the model is a <a href="http://www.cs.berkeley.edu/~murphyk/Bayes/kalman.html">
linear dynamical system</a> (LDS).
A DBN can be converted to an LDS if all the hidden nodes are linear-Gaussian
and if they are all persistent. In this case, you can use
<tt>kalman_inf_engine</tt>.
For more general linear-Gaussian models, you can use
<tt>jtree_dbn_inf_engine</tt> or <tt>jtree_unrolled_dbn_inf_engine</tt>.

<p>
For nonlinear systems with Gaussian noise, the unscented Kalman filter (UKF),
due to Julier and Uhlmann, is far superior to the well-known extended Kalman
filter (EKF), both in theory and practice.
The key idea of the UKF is that it is easier to estimate a Gaussian distribution
from a set of points than to approximate an arbitrary non-linear
function.
We start with points that are plus/minus sigma away from the mean along
each dimension, and then pipe them through the nonlinearity, and
then fit a Gaussian to the transformed points.
(No need to compute Jacobians, unlike the EKF!)
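<p>
To make this concrete, here is a minimal sketch of the unscented transform
(with the scaling parameter kappa set to 0). This is not part of BNT, and
f, mu and Sigma are placeholder names for the nonlinearity (assumed to act
on each column of its input) and the input Gaussian:
<pre>
n = length(mu);
S = chol(n*Sigma)';                        % columns s_i satisfy sum_i s_i*s_i' = n*Sigma
X = [repmat(mu,1,n)+S, repmat(mu,1,n)-S];  % 2n sigma points around the mean
w = ones(1,2*n)/(2*n);                     % equal weights
Y = f(X);                                  % pipe each sigma point through f
new_mu = Y*w';                             % refit a Gaussian: empirical mean...
D = Y - repmat(new_mu, 1, 2*n);
new_Sigma = D*diag(w)*D';                  % ...and empirical covariance
</pre>
For a linear f, this recovers the exact mean and covariance, and no
Jacobians are needed.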

<p>
For systems with non-Gaussian noise, I recommend
<a href="http://www.cs.berkeley.edu/~jfgf/smc/">Particle
filtering</a> (PF), which is a popular sequential Monte Carlo technique.

<p>
The EKF can be used as a proposal distribution for a PF.
This method is better than either one alone.
See <a href="http://www.cs.berkeley.edu/~jfgf/upf.ps.gz">The Unscented Particle Filter</a>,
by R van der Merwe, A Doucet, JFG de Freitas and E Wan, May 2000.
<a href="http://www.cs.berkeley.edu/~jfgf/software.html">Matlab
software</a> for the UPF is also available.
<p>
Note: none of this software is part of BNT.



<h1><a name="learn">Learning</a></h1>

Learning in DBNs can be done online or offline.
Currently only offline learning is implemented in BNT.


<h2><a name="param_learn">Parameter learning</a></h2>

Offline parameter learning is very similar to learning in static networks,
except now the training data is a cell array of 2D cell arrays.
For example,
cases{l}{i,t} is the value of node i in slice t in sequence l, or []
if unobserved.
Each sequence can be a different length, and may have missing values
in arbitrary locations.
Here is a typical code fragment for using EM.
<pre>
ncases = 2;
cases = cell(1, ncases);
for i=1:ncases
  ev = sample_dbn(bnet, T);
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = ev(onodes, :);
end
[bnet2, LLtrace] = learn_params_dbn_em(engine, cases, 'max_iter', 10);
</pre>
If the observed node is vector-valued and stored in an OxT array, you
need to assign each vector to a single cell, as in the following
example.
<pre>
data = [xpos(:)'; ypos(:)'];
ncases = 1;
cases = cell(1, ncases);
onodes = bnet.observed;
for i=1:ncases
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = num2cell(data(:,1:T), 1);
end
</pre>
<p>
For a complete code listing of how to do EM in a simple DBN, click
<a href="dbn_hmm_demo.m">here</a>.

<h2><a name="struct_learn">Structure learning</a></h2>

There is currently only one structure learning algorithm for DBNs.
This assumes all nodes are tabular and observed, and that there are
no intra-slice connections. Hence we can find the optimal set of
parents for each node separately, without worrying about directed
cycles or node orderings.
The function is called as follows:
<pre>
inter = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty)
</pre>
A full example is given in BNT/examples/dynamic/reveal1.m.
Setting the penalty term to 0 gives the maximum likelihood model; this
is equivalent to maximizing the mutual information between parents and
child (in the bioinformatics community, this is known as the REVEAL
algorithm). A non-zero penalty invokes the BIC criterion, which
lessens the chance of overfitting.
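<p>
A minimal sketch of a call, under the assumptions above (fully observed
tabular nodes; the values of max_fan_in and penalty are our choices here):
<pre>
max_fan_in = 2;   % upper bound on the number of parents per node
penalty    = 0.5; % non-zero invokes the BIC criterion; 0 gives ML (REVEAL)
inter = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty);
</pre>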
<p>
<a href="http://www.bioss.sari.ac.uk/~dirk/software/DBmcmc/">
Dirk Husmeier has extended MCMC model selection to DBNs</a>.

</BODY>