<HEAD>
<TITLE>How to use BNT for DBNs</TITLE>
</HEAD>

<BODY BGCOLOR="#FFFFFF">
<!-- white background is better for the pictures and equations -->

Documentation last updated on 7 June 2004

<h1>How to use BNT for DBNs</h1>

<p>
<ul>
<li> <a href="#spec">Model specification</a>
<ul>
<li> <a href="#hmm">HMMs</a>
<li> <a href="#lds">Kalman filters</a>
<li> <a href="#chmm">Coupled HMMs</a>
<li> <a href="#water">Water network</a>
<li> <a href="#bat">BAT network</a>
</ul>

<li> <a href="#inf">Inference</a>
<ul>
<li> <a href="#discrete">Discrete hidden nodes</a>
<li> <a href="#cts">Continuous hidden nodes</a>
</ul>

<li> <a href="#learn">Learning</a>
<ul>
<li> <a href="#param_learn">Parameter learning</a>
<li> <a href="#struct_learn">Structure learning</a>
</ul>

</ul>

Note:
we recommend that you read an introduction
to DBNs first, such as
<a href="http://www.ai.mit.edu/~murphyk/Papers/dbnchapter.pdf">
this book chapter</a>.
<br>
You may also want to consider using
<a href="http://ssli.ee.washington.edu/~bilmes/gmtk/">GMTk</a>, which is
an excellent C++ package for DBNs.


<h1><a name="spec">Model specification</h1>



Dynamic Bayesian Networks (DBNs) are directed graphical models of stochastic
processes.
They generalise <a href="#hmm">hidden Markov models (HMMs)</a>
and <a href="#lds">linear dynamical systems (LDSs)</a>
by representing the hidden (and observed) state in terms of state
variables, which can have complex interdependencies.
The graphical structure provides an easy way to specify these
conditional independencies, and hence to provide a compact
parameterization of the model.
<p>
Note that "temporal Bayesian network" would be a better name than
"dynamic Bayesian network", since
it is assumed that the model structure does not change, but
the term DBN has become entrenched.
We also normally assume that the parameters do not
change, i.e., the model is time-invariant.
However, we can always add extra
hidden nodes to represent the current "regime", thereby creating
mixtures of models to capture periodic non-stationarities.
<p>
There are some cases where the size of the state space can change over
time, e.g., tracking a variable, but unknown, number of objects.
In this case, we need to change the model structure over time.
BNT does not support this.


<h2><a name="hmm">Hidden Markov Models (HMMs)</h2>

The simplest kind of DBN is a Hidden Markov Model (HMM), which has
one discrete hidden node and one discrete or continuous
observed node per slice. We illustrate this below.
As before, circles denote continuous nodes, squares denote
discrete nodes, clear means hidden, shaded means observed.
<p>
<img src="Figures/hmm3.gif">
<p>
We have "unrolled" the model for three "time slices" -- the structure and parameters are
assumed to repeat as the model is unrolled further.
Hence to specify a DBN, we need to
define the intra-slice topology (within a slice),
the inter-slice topology (between two slices),
as well as the parameters for the first two slices.
(Such a two-slice temporal Bayes net is often called a 2TBN.)
<p>
We can specify the topology as follows.
<pre>
intra = zeros(2);
intra(1,2) = 1; % node 1 in slice t connects to node 2 in slice t

inter = zeros(2);
inter(1,1) = 1; % node 1 in slice t-1 connects to node 1 in slice t
</pre>
We can specify the parameters as follows,
where for simplicity we assume the observed node is discrete.
<pre>
Q = 2; % num hidden states
O = 2; % num observable symbols

ns = [Q O];
dnodes = 1:2;
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
for i=1:4
  bnet.CPD{i} = tabular_CPD(bnet, i);
end
</pre>
<p>
We assume the distributions P(X(t) | X(t-1)) and
P(Y(t) | X(t)) are independent of t for t > 1.
Hence the CPD for nodes 5, 7, ... is the same as for node 3, so we say they
are in the same equivalence class, with node 3 being the "representative"
for this class. In other words, we have tied the parameters for nodes
3, 5, 7, ...
Similarly, nodes 4, 6, 8, ... are tied.
Note, however, that (the parameters for) nodes 1 and 2 are not tied to
subsequent slices.
<p>
Above we assumed the observation model P(Y(t) | X(t)) is independent of t for t>1, but
it is conventional to assume this is true for all t.
So we would like to put nodes 2, 4, 6, ... all in the same class.
We can do this by explicitly defining the equivalence classes, as
follows (see <a href="usage.html#tying">here</a> for more details on
parameter tying).
<p>
We define eclass1(i) to be the equivalence class that node i in slice
1 belongs to.
Similarly, we define eclass2(i) to be the equivalence class that node i in slice
2, 3, ..., belongs to.
For an HMM, we have
<pre>
eclass1 = [1 2];
eclass2 = [3 2];
eclass = [eclass1 eclass2];
</pre>
This ties the observation model across slices,
since e.g., eclass(4) = eclass(2) = 2.
<p>
By default,
eclass1 = 1:ss, and eclass2 = (1:ss)+ss, where ss = slice size = the
number of nodes per slice.
But by using the above tying pattern,
we now only have 3 CPDs to specify, instead of 4:
<pre>
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'eclass1', eclass1, 'eclass2', eclass2);
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
obsmat0 = mk_stochastic(rand(Q,O));
bnet.CPD{1} = tabular_CPD(bnet, 1, prior0);
bnet.CPD{2} = tabular_CPD(bnet, 2, obsmat0);
bnet.CPD{3} = tabular_CPD(bnet, 3, transmat0);
</pre>
We discuss how to do <a href="#inf">inference</a> and <a href="#learn">learning</a> on this model
below.
(See also
my <a href="../HMM/hmm.html">HMM toolbox</a>, which is included with BNT.)
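<p>
Having specified the model, we can draw random sequences from it with
<tt>sample_dbn</tt>; we will also use this to generate training data in the
learning section below. A minimal example:
<pre>
T = 10;                   % number of slices to unroll
ev = sample_dbn(bnet, T); % 2 x T cell array: row 1 = hidden, row 2 = observed
</pre>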

<p>
Some common variants on HMMs are shown below.
BNT can handle all of these.
<p>
<center>
<table>
<tr>
<td><img src="Figures/hmm_gauss.gif">
<td><img src="Figures/hmm_mixgauss.gif">
<td><img src="Figures/hmm_ar.gif">
<tr>
<td><img src="Figures/hmm_factorial.gif">
<td><img src="Figures/hmm_coupled.gif">
<td><img src="Figures/hmm_io.gif">
</table>
</center>



<h2><a name="lds">Linear Dynamical Systems (LDSs) and Kalman filters</h2>

A Linear Dynamical System (LDS) has the same topology as an HMM, but
all the nodes are assumed to have linear-Gaussian distributions, i.e.,
<pre>
x(t+1) = A*x(t) + w(t),  w ~ N(0, Q),  x(0) ~ N(init_x, init_V)
y(t)   = C*x(t) + v(t),  v ~ N(0, R)
</pre>
Some simple variants are shown below.
<p>
<center>
<table>
<tr>
<td><img src="Figures/ar1.gif">
<td><img src="Figures/sar.gif">
<td><img src="Figures/kf.gif">
<td><img src="Figures/skf.gif">
</table>
</center>
<p>

We can create a regular LDS in BNT as follows.
<pre>
intra = zeros(2);
intra(1,2) = 1;
inter = zeros(2);
inter(1,1) = 1;

X = 2; % size of hidden state
Y = 2; % size of observable state

ns = [X Y];
dnodes = [];
onodes = [2];
eclass1 = [1 2];
eclass2 = [3 2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'observed', onodes, ...
              'eclass1', eclass1, 'eclass2', eclass2);

x0 = rand(X,1);
V0 = eye(X); % must be positive semi-definite!
C0 = rand(Y,X);
R0 = eye(Y);
A0 = rand(X,X);
Q0 = eye(X);

bnet.CPD{1} = gaussian_CPD(bnet, 1, 'mean', x0, 'cov', V0, 'cov_prior_weight', 0);
bnet.CPD{2} = gaussian_CPD(bnet, 2, 'mean', zeros(Y,1), 'cov', R0, 'weights', C0, ...
                           'clamp_mean', 1, 'cov_prior_weight', 0);
bnet.CPD{3} = gaussian_CPD(bnet, 3, 'mean', zeros(X,1), 'cov', Q0, 'weights', A0, ...
                           'clamp_mean', 1, 'cov_prior_weight', 0);
</pre>
We discuss how to do <a href="#inf">inference</a> and <a href="#learn">learning</a> on this model
below.
(See also
my <a href="../Kalman/kalman.html">Kalman filter toolbox</a>, which is included with BNT.)
<p>
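As a quick sanity check, we can sample and plot a trajectory from this model.
A minimal sketch, assuming BNT's <tt>cell2num</tt> utility to convert the
sampled cell array into a numeric matrix:
<pre>
T = 50;
ev = sample_dbn(bnet, T);   % 2 x T cell array of column vectors
y = cell2num(ev(2,:));      % Y x T matrix of sampled observations
plot(y(1,:), y(2,:), 'o-'); % observation sequence in the plane
</pre>
<p>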


<h2><a name="chmm">Coupled HMMs</h2>

Here is an example of a coupled HMM with N=5 chains, unrolled for T=3
slices. Each hidden discrete node has a private observed Gaussian
child.
<p>
<img src="Figures/chmm5.gif">
<p>
We can make this model using the function <tt>mk_chmm</tt>:
<pre>
N = 5; % num chains
Q = 2; % binary hidden nodes
discrete_obs = 0; % continuous observed nodes
Y = 1; % scalar observed nodes
bnet = mk_chmm(N, Q, Y, discrete_obs);
</pre>
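Exact inference in this model quickly becomes expensive as N grows (see the
<a href="#discrete">discussion below</a>); a natural approximation is the BK
engine, also described below, e.g., the fully factored version:
<pre>
engine = bk_inf_engine(bnet, 'ff'); % fully factored Boyen-Koller approximation
</pre>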


<h2><a name="water">Water network</h2>

Consider the following model
of a water purification plant, developed
by Finn V. Jensen, Uffe Kjærulff, Kristian G. Olesen, and Jan
Pedersen.
<p>
<center>
<IMG SRC="Figures/water3_75.gif">
</center>
We now show how to specify this model in BNT.
<pre>
ss = 12; % slice size
intra = zeros(ss);
intra(1,9) = 1;
intra(3,10) = 1;
intra(4,11) = 1;
intra(8,12) = 1;

inter = zeros(ss);
inter(1, [1 3]) = 1; % node 1 in slice 1 connects to nodes 1 and 3 in slice 2
inter(2, [2 3 7]) = 1;
inter(3, [3 4 5]) = 1;
inter(4, [3 4 6]) = 1;
inter(5, [3 5 6]) = 1;
inter(6, [4 5 6]) = 1;
inter(7, [7 8]) = 1;
inter(8, [6 7 8]) = 1;

onodes = 9:12; % observed
dnodes = 1:ss; % discrete
ns = 2*ones(1,ss); % binary nodes
eclass1 = 1:12;
eclass2 = [13:20 9:12];
eclass = [eclass1 eclass2];
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'observed', onodes, ...
              'eclass1', eclass1, 'eclass2', eclass2);
for e=1:max(eclass)
  bnet.CPD{e} = tabular_CPD(bnet, e);
end
</pre>
We have tied the observation parameters across all slices.
Click <a href="param_tieing.html">here</a> for a more complex example
of parameter tying.
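<p>
To see the effect of this tying pattern, note that eclass2(9:12) = 9:12, so the
observed nodes in slice 2 reuse the CPDs of the observed nodes in slice 1.
A quick check:
<pre>
assert(eclass(9+ss) == eclass(9)) % Y(1,2) is tied to Y(1,1)
</pre>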



<h2><a name="bat">BATnet</h2>

As an example of a more complicated DBN, consider the following
model of a car's high-level state, as might be used by
an automated car.
(The model is from Forbes, Huang, Kanazawa and Russell, "The BATmobile: Towards a
Bayesian Automated Taxi", IJCAI 95. The figure is from
Boyen and Koller, "Tractable Inference for Complex Stochastic
Processes", UAI98.
For simplicity, we only show the observed nodes for slice 2.)
<p>
<center>
<IMG SRC="Figures/batnet.gif">
</center>
<p>
Since this topology is so complicated,
it is useful to be able to refer to the nodes by name, instead of
number.
<pre>
names = {'LeftClr', 'RightClr', 'LatAct', ... 'Bclr', 'BYdotDiff'};
ss = length(names);
</pre>
We can specify the intra-slice topology using a cell array as follows,
where each row specifies a connection between two named nodes:
<pre>
intrac = {...
  'LeftClr',   'LeftClrSens';
  'RightClr',  'RightClrSens';
  ...
  'BYdotDiff', 'BcloseFast'};
</pre>
We can then convert this cell array to an adjacency matrix using
the following function:
<pre>
[intra, names] = mk_adj_mat(intrac, names, 1);
</pre>
This function also permutes the names so that they are in topological
order.
Given this ordering of the names, we can make the inter-slice
connectivity matrix as follows:
<pre>
interc = {...
  'LeftClr',  'LeftClr';
  'LeftClr',  'LatAct';
  ...
  'FBStatus', 'LatAct'};

inter = mk_adj_mat(interc, names, 0);
</pre>

To refer to a node, we must know its number, which can be computed as
in the following example:
<pre>
obs = {'LeftClrSens', 'RightClrSens', 'TurnSignalSens', 'XdotSens', 'YdotSens', 'FYdotDiffSens', ...
       'FclrSens', 'BXdotSens', 'BclrSens', 'BYdotDiffSens'};
for i=1:length(obs)
  onodes(i) = strmatch(obs{i}, names);
end
onodes = sort(onodes);
</pre>
(We sort the onodes since most BNT routines assume that set-valued
arguments are in sorted order.)
We can now make the DBN:
<pre>
dnodes = 1:ss;
ns = 2*ones(1,ss); % binary nodes
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes);
</pre>
To specify the parameters, we must know the order of the parents.
See the function BNT/general/mk_named_CPT for a way to do this in the
case of tabular nodes. For simplicity, we just generate random
parameters:
<pre>
for i=1:2*ss
  bnet.CPD{i} = tabular_CPD(bnet, i);
end
</pre>
A complete version of this example is available in BNT/examples/dynamic/bat1.m.




<h1><a name="inf">Inference</h1>


The general inference problem for DBNs is to compute
P(X(i,t0) | Y(:, t1:t2)), where X(i,t) represents the i'th hidden
variable at time t and Y(:,t1:t2) represents all the evidence
between times t1 and t2.
There are several special cases of interest, illustrated below.
The arrow indicates t0: it is X(t0) that we are trying to estimate.
The shaded region denotes t1:t2, the available data.
<p>

<img src="Figures/filter.gif">

<p>
BNT can currently only handle offline smoothing.
(The HMM engine handles filtering and, to a limited extent, prediction.)
The usage is similar to static
inference engines, except now the evidence is a 2D cell array of
size ss x T, where ss is the number of nodes per slice and T is the
number of slices.
Also, 'marginal_nodes' takes two arguments, the nodes and the time-slice.
For example, to compute P(X(i,t) | y(:,1:T)), we proceed as follows
(where onodes are the indices of the observed nodes in each slice,
which correspond to y, and engine is any DBN inference engine, such as
those described below):
<pre>
ev = sample_dbn(bnet, T);
evidence = cell(ss,T);
evidence(onodes,:) = ev(onodes, :); % all cells besides onodes are empty
[engine, ll] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, i, t);
</pre>


<h2><a name="discrete">Discrete hidden nodes</h2>

If all the hidden nodes are discrete,
we can use the junction tree algorithm to perform inference.
The simplest approach,
<tt>jtree_unrolled_dbn_inf_engine</tt>,
unrolls the DBN into a static network and applies jtree; however, for
long sequences, this
can be very slow and can result in numerical underflow.
A better approach is to apply the jtree algorithm to pairs of
neighboring slices at a time; this is implemented in
<tt>jtree_dbn_inf_engine</tt>.

<p>
A DBN can be converted to an HMM if all the hidden nodes are discrete.
In this case, you can use
<tt>hmm_inf_engine</tt>. This is faster than jtree for small models
because the constant factors of the algorithm are lower, but can be
exponentially slower for models with many variables
(e.g., > 6 binary hidden nodes).

<p>
The use of both
<tt>jtree_dbn_inf_engine</tt>
and
<tt>hmm_inf_engine</tt>
is deprecated.
A better approach is to construct a smoother engine out of lower-level
engines, which implement forward/backward operators.
You can create these engines as follows.
<pre>
engine = smoother_engine(hmm_2TBN_inf_engine(bnet));
% or
engine = smoother_engine(jtree_2TBN_inf_engine(bnet));
</pre>
You then call them in the usual way:
<pre>
engine = enter_evidence(engine, evidence);
m = marginal_nodes(engine, nodes, t);
</pre>
Note: you must declare the observed nodes in the bnet before using
hmm_2TBN_inf_engine.
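A minimal sketch of how to do this, using mk_dbn's 'observed' argument (as in
the LDS example above):
<pre>
bnet = mk_dbn(intra, inter, ns, 'discrete', dnodes, 'observed', onodes);
</pre>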


<p>
Unfortunately, when all the hidden nodes are discrete,
exact inference takes O(2^n) time, where n is the number of hidden
nodes per slice,
even if the model is sparse.
The basic reason for this is that two nodes become correlated, even if
there is no direct connection between them in the 2TBN,
by virtue of sharing common ancestors in the past.
Hence we need to use approximations.
<p>
A popular approximate inference algorithm for discrete DBNs, known as BK, is described in
<ul>
<li>
<A HREF="http://robotics.Stanford.EDU/~xb/uai98/index.html">
Tractable inference for complex stochastic processes</A>,
Boyen and Koller, UAI 1998
<li>
<A HREF="http://robotics.Stanford.EDU/~xb/nips98/index.html">
Approximate learning of dynamic models</a>, Boyen and Koller, NIPS
1998.
</ul>
This approximates the belief state with a product of
marginals on a specified set of clusters. For example,
in the water network, we might use the following clusters:
<pre>
engine = bk_inf_engine(bnet, { [1 2], [3 4 5 6], [7 8] });
</pre>
This engine can now be used just like the jtree engine.
Two special cases of the BK algorithm are supported: 'ff' (fully
factored) means each node has its own cluster, and 'exact' means there
is 1 cluster that contains the whole slice. These can be created as
follows:
<pre>
engine = bk_inf_engine(bnet, 'ff');
engine = bk_inf_engine(bnet, 'exact');
</pre>
For pedagogical purposes, an implementation of BK-FF that uses an HMM
instead of junction tree is available in
<tt>bk_ff_hmm_inf_engine</tt>.



<h2><a name="cts">Continuous hidden nodes</h2>

If all the hidden nodes are linear-Gaussian, <em>and</em> the observed nodes are
linear-Gaussian,
the model is a <a href="http://www.cs.berkeley.edu/~murphyk/Bayes/kalman.html">
linear dynamical system</a> (LDS).
A DBN can be converted to an LDS if all the hidden nodes are linear-Gaussian
and if they are all persistent. In this case, you can use
<tt>kalman_inf_engine</tt>.
For more general linear-Gaussian models, you can use
<tt>jtree_dbn_inf_engine</tt> or <tt>jtree_unrolled_dbn_inf_engine</tt>.
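<p>
For example, for the LDS defined earlier, a minimal sketch (the engine API is
as above; y is assumed to be a Y x T matrix of observations, and for Gaussian
marginals we assume the result has fields mu and Sigma):
<pre>
engine = kalman_inf_engine(bnet);
evidence = cell(2, T);
evidence(2,:) = num2cell(y, 1);   % node 2 is the observed node
[engine, ll] = enter_evidence(engine, evidence);
m = marginal_nodes(engine, 1, T); % P(X(T) | y(:,1:T)): m.mu, m.Sigma
</pre>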

<p>
For nonlinear systems with Gaussian noise, the unscented Kalman filter (UKF),
due to Julier and Uhlmann, is far superior to the well-known extended Kalman
filter (EKF), both in theory and practice.
The key idea of the UKF is that it is easier to estimate a Gaussian distribution
from a set of points than to approximate an arbitrary non-linear
function.
We start with points that are plus/minus sigma away from the mean along
each dimension, and then pipe them through the nonlinearity, and
then fit a Gaussian to the transformed points.
(No need to compute Jacobians, unlike the EKF!)

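<p>
To make this concrete, here is a minimal sketch of the unscented transform on
a toy nonlinearity. (This is not part of BNT, and the simple symmetric
sigma-point scheme below omits the extra scaling weights used in the full
UKF.)
<pre>
n = 2;                           % state dimension
mu = [1; 2]; Sigma = 0.1*eye(n); % current Gaussian belief N(mu, Sigma)
f = @(x) [x(1)*cos(x(2)); x(1)*sin(x(2))]; % toy nonlinearity

S = chol(Sigma)';                % matrix square root: Sigma = S*S'
pts = [mu, repmat(mu,1,n) + sqrt(n)*S, repmat(mu,1,n) - sqrt(n)*S]; % 2n+1 sigma points
ypts = zeros(n, 2*n+1);
for i=1:2*n+1
  ypts(:,i) = f(pts(:,i));       % pipe each point through the nonlinearity
end
mu_y = mean(ypts, 2);            % fit a Gaussian to the transformed points
Sigma_y = cov(ypts');
</pre>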

<p>
For systems with non-Gaussian noise, I recommend
<a href="http://www.cs.berkeley.edu/~jfgf/smc/">Particle
filtering</a> (PF), which is a popular sequential Monte Carlo technique.

<p>
The EKF can be used as a proposal distribution for a PF.
This method is better than either one alone.
See <a href="http://www.cs.berkeley.edu/~jfgf/upf.ps.gz">The Unscented Particle Filter</a>,
by R van der Merwe, A Doucet, JFG de Freitas and E Wan, May 2000.
<a href="http://www.cs.berkeley.edu/~jfgf/software.html">Matlab
software</a> for the UPF is also available.
<p>
Note: none of this software is part of BNT.



<h1><a name="learn">Learning</h1>

Learning in DBNs can be done online or offline.
Currently only offline learning is implemented in BNT.


<h2><a name="param_learn">Parameter learning</h2>

Offline parameter learning is very similar to learning in static networks,
except now the training data is a cell-array of 2D cell-arrays.
For example,
cases{l}{i,t} is the value of node i in slice t in sequence l, or []
if unobserved.
Each sequence can be a different length, and may have missing values
in arbitrary locations.
Here is a typical code fragment for using EM.
<pre>
ncases = 2;
cases = cell(1, ncases);
for i=1:ncases
  ev = sample_dbn(bnet, T);
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = ev(onodes, :);
end
[bnet2, LLtrace] = learn_params_dbn_em(engine, cases, 'max_iter', 10);
</pre>
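After EM converges, you may want to inspect the learned parameters. A hedged
sketch, assuming BNT's get_field accessor for tabular CPDs:
<pre>
CPT3 = get_field(bnet2.CPD{3}, 'cpt'); % e.g., the learned transition matrix in the HMM example
</pre>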
If the observed node is vector-valued and stored in an OxT array, you
need to assign each vector to a single cell, as in the following
example.
<pre>
data = [xpos(:)'; ypos(:)'];
ncases = 1;
cases = cell(1, ncases);
onodes = bnet.observed;
for i=1:ncases
  cases{i} = cell(ss,T);
  cases{i}(onodes,:) = num2cell(data(:,1:T), 1);
end
</pre>
<p>
For a complete code listing of how to do EM in a simple DBN, click
<a href="dbn_hmm_demo.m">here</a>.

<h2><a name="struct_learn">Structure learning</h2>

There is currently only one structure learning algorithm for DBNs.
This assumes all nodes are tabular and observed, and that there are
no intra-slice connections. Hence we can find the optimal set of
parents for each node separately, without worrying about directed
cycles or node orderings.
The function is called as follows:
<pre>
inter = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty)
</pre>
A full example is given in BNT/examples/dynamic/reveal1.m.
Setting the penalty term to 0 gives the maximum likelihood model; this
is equivalent to maximizing the mutual information between parents and
child (in the bioinformatics community, this is known as the REVEAL
algorithm). A non-zero penalty invokes the BIC criterion, which
lessens the chance of overfitting.
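<p>
A minimal sketch of how one might call it, assuming a DBN with no intra-slice
connections whose nodes are all observed, and the same cases format as in the
parameter learning section (see reveal1.m for the authoritative version):
<pre>
ncases = 10; T = 100;
cases = cell(1, ncases);
for i=1:ncases
  cases{i} = sample_dbn(bnet, T); % fully observed: no empty cells
end
max_fan_in = 2;  % each node may have at most 2 parents
penalty = 0.5;   % non-zero penalty invokes the BIC score
inter2 = learn_struct_dbn_reveal(cases, ns, max_fan_in, penalty);
</pre>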
<p>
<a href="http://www.bioss.sari.ac.uk/~dirk/software/DBmcmc/">
Dirk Husmeier has extended MCMC model selection to DBNs</a>.

</BODY>