comparison: toolboxes/FullBNT-1.0.7/docs/changelog.html @ 0:e9a9cd732c1e (tip)
first hg version after svn
| author | wolffd |
|---|---|
| date | Tue, 10 Feb 2015 15:05:51 +0000 |
| 1 <title>History of changes to BNT</title> | |
| 2 <h1>History of changes to BNT</h1> | |
| 3 | |
| 4 | |
| 5 <h2>Changes since 4 Oct 2007</h2> | |
| 6 | |
| 7 <pre> | |
| 8 - 19 Oct 07 murphyk | |
| 9 | |
| 10 * BNT\CPDs\@noisyor_CPD\CPD_to_CPT.m: 2nd half of the file is a repeat | |
| 11 of the first half and was deleted (thanks to Karl Kuschner) | |
| 12 | |
| 13 * KPMtools\myismember.m should return logical for use in "assert" so add line at end | |
| 14 p=logical(p); this prevents "assert" from failing on an integer input. | |
| 15 (thanks to Karl Kuschner) | |
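| A minimal sketch of the pattern (not the exact KPMtools implementation), showing why the cast matters: | |
|   function p = myismember(a, S) | |
|   % test whether the positive integer a occurs in the set S | |
|   bits = zeros(1, max([S(:); a])); | |
|   bits(S) = 1; | |
|   p = bits(a);       % this is a double (0 or 1) | |
|   p = logical(p);    % cast so that assert(p) does not fail on an integer input | |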
| 16 | |
| 17 | |
| 18 | |
| 19 - 17 Oct 07 murphyk | |
| 20 | |
| 21 * Updated subv2ind and ind2subv in KPMtools to Tom Minka's implementation. | |
| 22 His ind2subv is faster (vectorized), but I had to modify it so it | |
| 23 matched the behavior of my version when called with siz=[]. | |
| 24 His subv2ind is slightly simpler than mine because he does not treat | |
| 25 the siz=[2 2 ... 2] case separately (see the usage sketch below). | |
| 26 Note: there is now no need to ever use the C versions of these | |
| 27 functions (or any others, for that matter). | |
| 28 | |
| 29 * removed BNT/add_BNT_to_path since no longer needed. | |
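| Illustrative usage of the two index conversions (column-major order, as in Matlab): | |
|   ndx  = subv2ind([2 3], [2 1])   % subscript (2,1) in a 2x3 array -> linear index 2 | |
|   subv = ind2subv([2 3], 2)       % the inverse mapping -> [2 1] | |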
| 30 | |
| 31 | |
| 32 | |
| 33 - 4 Oct 07 murphyk | |
| 34 | |
| 35 * moved code from sourceforge to UBC website, made version 1.0.4 | |
| 36 | |
| 37 * @pearl_inf_engine/pearl_inf_engine line 24, default | |
| 38 argument for protocol changed from [] to 'parallel'. | |
| 39 Also, changed private/parallel_protocol so it doesn't write to an | |
| 40 empty file id (Matlab 7 issue) | |
| 41 | |
| 42 * added foptions (Matlab 7 issue) | |
| 43 | |
| 44 * changed genpathKPM to exclude svn. Put it in toplevel directory to | |
| 45 massively simplify the installation process. | |
| 46 | |
| 47 </pre> | |
| 48 | |
| 49 | |
| 50 <h2>Sourceforge changelog</h2> | |
| 51 | |
| 52 BNT was first ported to sourceforge on 28 July 2001 by yozhik. | |
| 53 BNT was removed from sourceforge on 4 October 2007 by Kevin Murphy; | |
| 54 that version is cached as <a | |
| 55 href="FullBNT-1.0.3.zip">FullBNT-1.0.3.zip</a>. | |
| 56 See <a href="ChangeLog.Sourceforge.txt">Changelog from | |
| 57 sourceforge</a> for a history of that version of the code, | |
| 58 which formed the basis of the branch currently on Murphy's web page. | |
| 59 | |
| 60 | |
| 61 <h2> Changes from August 1998 -- July 2004</h2> | |
| 62 | |
| 63 Kevin Murphy made the following changes to his own private copy. | |
| 64 (Other small changes were made between July 2004 and October 2007, but were | |
| 65 not documented.) | |
| 66 These may or may not be reflected in the sourceforge version of the | |
| 67 code (which was independently maintained). | |
| 68 | |
| 69 | |
| 70 <ul> | |
| 71 <li> 9 June 2004 | |
| 72 <ul> | |
| 73 <li> Changed tabular_CPD/learn_params back to old syntax, to make it | |
| 74 compatible with gaussian_CPD/learn_params (and re-enabled | |
| 75 generic_CPD/learn_params). | |
| 76 Modified learning/learn_params.m and learning/score_family | |
| 77 appropriately. | |
| 78 (In particular, I undid the change Sonia Leach had to make to | |
| 79 score_family to handle this asymmetry.) | |
| 80 Added examples/static/gaussian2 to test this new functionality. | |
| 81 | |
| 82 <li> Added bp_mrf2 (for generic pairwise MRFs) to | |
| 83 inference/static/@bp_belprop_mrf2_inf_engine. [MRFs are not | |
| 84 "officially" supported in BNT, so this code is just for expert | |
| 85 hackers.] | |
| 86 | |
| 87 <li> Added examples/static/nodeorderExample.m to illustrate the importance | |
| 88 of using a topological ordering (see the sketch after this list). | |
| 89 | |
| 90 <li> Ran dos2unix on all *.c files within BNT to eliminate compiler | |
| 91 warnings. | |
| 92 | |
| 93 </ul> | |
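| A minimal sketch of a topological node numbering, using the standard sprinkler network as a hypothetical example (parents are numbered before children, as mk_bnet expects): | |
| <pre> | |
| C = 1; S = 2; R = 3; W = 4;            % cloudy, sprinkler, rain, wet grass | |
| dag = zeros(4); | |
| dag(C,[S R]) = 1;                      % cloudy -> sprinkler, cloudy -> rain | |
| dag([S R],W) = 1;                      % sprinkler -> wet, rain -> wet | |
| bnet = mk_bnet(dag, 2*ones(1,4));      % all nodes binary and discrete | |
| </pre> | |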
| 94 | |
| 95 <li> 7 June 2004 | |
| 96 <ul> | |
| 97 <li> Replaced normaliseC with normalise in HMM/fwdback, for maximum | |
| 98 portability (and negligible loss in speed). | |
| 99 <li> Ensured FullBNT versions of HMM, KPMstats etc were as up-to-date | |
| 100 as stand-alone versions. | |
| 101 <li> Changed add_BNT_to_path so it no longer uses addpath(genpath()), | |
| 102 which caused old versions of files to mask new ones. | |
| 103 </ul> | |
| 104 | |
| 105 <li> 18 February 2004 | |
| 106 <ul> | |
| 107 <li> A few small bug fixes to BNT, as posted to the Yahoo group. | |
| 108 <li> Several new functions added to KPMtools, KPMstats and Graphviz | |
| 109 (none needed by BNT). | |
| 110 <li> Added CVS to some of my toolboxes. | |
| 111 </ul> | |
| 112 | |
| 113 <li> 30 July 2003 | |
| 114 <ul> | |
| 115 <li> qian.diao fixed @mpot/set_domain_pot and @cgpot/set_domain_pot | |
| 116 <li> Marco Grzegorczyk found, and Sonia Leach fixed, a bug in | |
| 117 do_removal inside learn_struct_mcmc | |
| 118 </ul> | |
| 119 | |
| 120 | |
| 121 <li> 28 July 2003 | |
| 122 <ul> | |
| 123 <li> Sebastian Luehr provided 2 minor bug fixes, to HMM/fwdback (if any(scale==0)) | |
| 124 and FullBNT\HMM\CPDs\@hhmmQ_CPD\update_ess.m (wrong transpose). | |
| 125 </ul> | |
| 126 | |
| 127 <li> 8 July 2003 | |
| 128 <ul> | |
| 129 <li> Removed buggy BNT/examples/static/MRF2/Old/mk_2D_lattice.m which was | |
| 130 masking correct graph/mk_2D_lattice. | |
| 131 <li> Fixed bug in graph/mk_2D_lattice_slow in the non-wrap-around case | |
| 132 (line 78) | |
| 133 </ul> | |
| 134 | |
| 135 | |
| 136 <li> 2 July 2003 | |
| 137 <ul> | |
| 138 <li> Sped up normalize(., 1) in KPMtools by avoiding a general repmat (see the sketch below). | |
| 139 <li> Added assign_cols and marginalize_table to KPMtools | |
| 140 </ul> | |
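| One repmat-free way to handle the 2-D, dim=1 case (each column rescaled to sum to one); this is illustrative, not the actual KPMtools code: | |
| <pre> | |
| s = sum(T, 1); | |
| s = s + (s == 0);          % leave all-zero columns alone rather than dividing by zero | |
| T = T * diag(1 ./ s);      % scales column j by 1/s(j) without building a repmat of s | |
| </pre> | |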
| 141 | |
| 142 | |
| 143 <li> 29 May 2003 | |
| 144 <ul> | |
| 145 <li> Modified KPMstats/mixgauss_Mstep so it repmats Sigma in the tied | |
| 146 covariance case (bug found by galt@media.mit.edu). | |
| 147 | |
| 148 <li> Bob Welch found a bug in gaussian_CPDs/maximize_params in the way | |
| 149 cpsz was computed. | |
| 150 | |
| 151 <li> Added KPMstats/mixgauss_em, because my code is easier to | |
| 152 understand/modify than netlab's (at least for me!). | |
| 153 | |
| 154 <li> Modified BNT/examples/dynamic/viterbi1 to call multinomial_prob | |
| 155 instead of mk_dhmm_obs_lik. | |
| 156 | |
| 157 <li> Moved parzen window and partitioned models code to KPMstats. | |
| 158 | |
| 159 <li> Rainer Deventer fixed some bugs in his scgpot code, as follows: | |
| 160 1. complement_pot.m | |
| 161 Problems occurred for probabilities equal to zero; the result was a | |
| 162 division-by-zero error. | |
| 163 <br> | |
| 164 2. normalize_pot.m | |
| 165 This function is used during the calculation of the log-likelihood. | |
| 166 For a probability of zero, a "log of zero" warning occurs. I have not | |
| 167 really fixed the bug; as a workaround, I suggest calculating the | |
| 168 likelihood based on realmin (the smallest positive floating-point number) instead of | |
| 169 zero. | |
| 170 <br> | |
| 171 3. recursive_combine_pots | |
| 172 At the beginning of the function there was no test for the trivial case, | |
| 173 which defines the combination of two potentials as equal to the direct | |
| 174 combination. The result can be infinite recursion, which leads to | |
| 175 a stack overflow in Matlab. | |
| 176 </ul> | |
| 177 | |
| 178 | |
| 179 | |
| 180 <li> 11 May 2003 | |
| 181 <ul> | |
| 182 <li> Fixed bug in gaussian_CPD/maximize_params so it is compatible | |
| 183 with the new clg_Mstep routine | |
| 184 <li> Modified KPMstats/cwr_em to handle single cluster case | |
| 185 separately. | |
| 186 <li> Fixed bug in netlab/gmminit. | |
| 187 <li> Added hash tables to KPMtools. | |
| 188 </ul> | |
| 189 | |
| 190 | |
| 191 <li> 4 May 2003 | |
| 192 <ul> | |
| 193 <li> | |
| 194 Renamed many functions in KPMstats so the name of the | |
| 195 distribution/model type comes first, | |
| 196 Mstep_clg -> clg_Mstep, | |
| 197 Mstep_cond_gauss -> mixgauss_Mstep. | |
| 198 Also, renamed eval_pdf_xxx functions to xxx_prob, e.g. | |
| 199 eval_pdf_cond_mixgauss -> mixgauss_prob. | |
| 200 This is simpler and shorter. | |
| 201 | |
| 202 <li> | |
| 203 Renamed many functions in HMM toolbox so the name of the | |
| 204 distribution/model type comes first, | |
| 205 log_lik_mhmm -> mhmm_logprob, etc. | |
| 206 mk_arhmm_obs_lik has finally been re-implemented in terms of clg_prob | |
| 207 and mixgauss_prob (for slice 1). | |
| 208 Removed the Demos directory and moved its contents into the main directory. | |
| 209 This code is not backwards compatible. | |
| 210 | |
| 211 <li> Removed some of the my_xxx functions from KPMstats (these were | |
| 212 mostly copies of functions from the Mathworks stats toolbox). | |
| 213 | |
| 214 | |
| 215 <li> Modified BNT to take into account changes to KPMstats and | |
| 216 HMM toolboxes. | |
| 217 | |
| 218 <li> Fixed KPMstats/Mstep_clg (now called clg_Mstep) for spherical Gaussian case. | |
| 219 (Trace was wrongly parenthesised, and I used YY instead of YTY. | |
| 220 The spherical case now gives the same result as the full case | |
| 221 for cwr_demo.) | |
| 222 Also, mixgauss_Mstep now adds 0.01 to the ML estimate of Sigma, | |
| 223 to act as a regularizer (it used to add 0.01 to E[YY'], but this was | |
| 224 ignored in the spherical case); see the sketch after this list. | |
| 225 | |
| 226 <li> Added cluster weighted regression to KPMstats. | |
| 227 | |
| 228 <li> Added KPMtools/strmatch_substr. | |
| 229 </ul> | |
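| A minimal sketch of the regularized covariance update described above (the variable names are hypothetical): | |
| <pre> | |
| d = size(Y, 1);                           % dimensionality of the observations | |
| Sigma(:,:,k) = Sigma_ML + 0.01 * eye(d);  % add 0.01*I to the ML estimate as a regularizer | |
| </pre> | |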
| 230 | |
| 231 | |
| 232 | |
| 233 <li> 28 Mar 03 | |
| 234 <ul> | |
| 235 <li> Added mc_stat_distrib and eval_pdf_cond_prod_parzen to KPMstats | |
| 236 <li> Fixed GraphViz/arrow.m incompatibility with matlab 6.5 | |
| 237 (replace all NaN's with 0). | |
| 238 Modified GraphViz/graph_to_dot so it also works on windows. | |
| 239 <li> I removed dag_to_jtree and added graph_to_jtree to the graph | |
| 240 toolbox; the latter expects an undirected graph as input. | |
| 241 <li> I added triangulate_2Dlattice_demo.m to graph. | |
| 242 <li> Rainer Deventer fixed the stable conditional Gaussian potential | |
| 243 classes (scgpot and scgcpot) and inference engine | |
| 244 (stab_cond_gauss_inf_engine). | |
| 245 <li> Rainer Deventer added (stable) higher-order Markov models (see | |
| 246 inference/dynamic/@stable_ho_inf_engine). | |
| 247 </ul> | |
| 248 | |
| 249 | |
| 250 <li> 14 Feb 03 | |
| 251 <ul> | |
| 252 <li> Simplified learning/learn_params so it no longer returns BIC | |
| 253 score. Also, simplified @tabular_CPD/learn_params so it only takes | |
| 254 local evidence. | |
| 255 Added learn_params_dbn, which does ML estimation of fully observed | |
| 256 DBNs. | |
| 257 <li> Vectorized KPMstats/eval_pdf_cond_mixgauss for tied Sigma | |
| 258 case (much faster!). | |
| 259 Also, now works in log-domain to prevent underflow. | |
| 260 eval_pdf_mixgauss now calls eval_pdf_cond_mixgauss and inherits these benefits. | |
| 261 <li> add_BNT_to_path now calls genpath with 2 arguments if using | |
| 262 matlab version 5. | |
| 263 </ul> | |
| 264 | |
| 265 | |
| 266 <li> 30 Jan 03 | |
| 267 <ul> | |
| 268 <li> Vectorized KPMstats/eval_pdf_cond_mixgauss for scalar Sigma | |
| 269 case (much faster!) | |
| 270 <li> Renamed mk_dotfile_from_hmm to draw_hmm and moved it to the | |
| 271 GraphViz library. | |
| 272 <li> Rewrote @gaussian_CPD/maximize_params.m so it calls | |
| 273 KPMstats/Mstep_clg. | |
| 274 This fixes a bug when using clamped means (found by Rainer Deventer | |
| 275 and Victor Eruhimov) | |
| 276 and a bug when using a Wishart prior (no gamma term in the denominator). | |
| 277 It is also easier to read. | |
| 278 I rewrote the technical report re-deriving all the equations in a | |
| 279 clearer notation, making the solution to the bugs more obvious. | |
| 280 (See www.ai.mit.edu/~murphyk/Papers/learncg.pdf) | |
| 281 Modified Mstep_cond_gauss to handle priors. | |
| 282 <li> Fixed bug reported by Ramgopal Mettu in which add_BNT_to_path | |
| 283 calls genpath with only 1 argument, whereas version 5 requires 2. | |
| 284 <li> Fixed installC and uninstallC to search in FullBNT/BNT. | |
| 285 </ul> | |
| 286 | |
| 287 | |
| 288 <li> 24 Jan 03 | |
| 289 <ul> | |
| 290 <li> Major simplification of HMM code. | |
| 291 The API is not backwards compatible. | |
| 292 No new functionality has been added, however. | |
| 293 There is now only one fwdback function, instead of 7; | |
| 294 different behaviors are controlled through optional arguments. | |
| 295 I renamed 'evaluate observation likelihood' (local evidence) | |
| 296 to 'evaluate conditional pdf', since this is more general. | |
| 297 i.e., renamed | |
| 298 mk_dhmm_obs_lik to eval_pdf_cond_multinomial, | |
| 299 mk_ghmm_obs_lik to eval_pdf_cond_gauss, | |
| 300 mk_mhmm_obs_lik to eval_pdf_cond_mog. | |
| 301 These functions have been moved to KPMstats, | |
| 302 so they can be used by other toolboxes (see the usage sketch below). | |
| 303 ghmm's have been eliminated, since they are just a special case of | |
| 304 mhmm's with M=1 mixture component. | |
| 305 mixgauss HMMs can now handle a different number of | |
| 306 mixture components per state. | |
| 307 init_mhmm has been eliminated, and replaced with init_cond_mixgauss | |
| 308 (in KPMstats) and mk_leftright/rightleft_transmat. | |
| 309 learn_dhmm can no longer handle inputs (although this is easy to add back). | |
| 310 </ul> | |
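| An illustrative sketch of the renamed discrete-observation pipeline; prior, transmat and obsmat are assumed to exist, and the argument order follows the HMM toolbox conventions as I understand them: | |
| <pre> | |
| obslik = eval_pdf_cond_multinomial(data, obsmat);                % was mk_dhmm_obs_lik | |
| [alpha, beta, gamma, loglik] = fwdback(prior, transmat, obslik); | |
| </pre> | |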
| 311 | |
| 312 | |
| 313 | |
| 314 | |
| 315 | |
| 316 <li> 20 Jan 03 | |
| 317 <ul> | |
| 318 <li> Added arrow.m to GraphViz directory, and commented out line 922, | |
| 319 in response to a bug report. | |
| 320 </ul> | |
| 321 | |
| 322 <li> 18 Jan 03 | |
| 323 <ul> | |
| 324 <li> Major restructuring of BNT file structure: | |
| 325 all code that is not specific to Bayes nets has been removed; | |
| 326 these packages must be downloaded separately. (Or just download FullBNT.) | |
| 327 This makes it easier to ensure different toolboxes are consistent. | |
| 328 misc has been slimmed down and renamed KPMtools, so it can be shared by other toolboxes, | |
| 329 such as HMM and Kalman; some of the code has been moved to BNT/general. | |
| 330 The Graphics directory has been slimmed down and renamed GraphViz. | |
| 331 The graph directory now has no dependence on BNT (dag_to_jtree has | |
| 332 been renamed graph_to_jtree and has a new API). | |
| 333 netlab2 no longer contains any netlab files, only netlab extensions. | |
| 334 None of the functionality has changed. | |
| 335 </ul> | |
| 336 | |
| 337 | |
| 338 | |
| 339 <li> 11 Jan 03 | |
| 340 <ul> | |
| 341 <li> jtree_dbn_inf_engine can now support soft evidence. | |
| 342 | |
| 343 <li> Rewrote graph/dfs to make it clearer. | |
| 344 Return arguments have changed, as has mk_rooted_tree. | |
| 345 The acyclicity check for large undirected graphs can cause a stack overflow. | |
| 346 It turns out that this was not a bug, but is because Matlab's stack depth | |
| 347 bound is very low by default. | |
| 348 | |
| 349 <li> Renamed examples/dynamic/filter2 to filter_test1, so it does not | |
| 350 conflict with the filter2 function in the image processing toolbox. | |
| 351 | |
| 352 <li> Ran test_BNT on various versions of matlab to check compatibility. | |
| 353 On matlab 6.5 (r13), elapsed time = 211s, cpu time = 204s. | |
| 354 On matlab 6.1 (r12), elapsed time = 173s, cpu time = 164s. | |
| 355 On matlab 5.3 (r11), elapsed time = 116s, cpu time = 114s. | |
| 356 So matlab is apparently getting slower with time!! | |
| 357 (All results were with a linux PIII machine.) | |
| 358 </ul> | |
| 359 | |
| 360 | |
| 361 <li> 14 Nov 02 | |
| 362 <ul> | |
| 363 <li> Removed all ndx inference routines, since they are only | |
| 364 marginally faster on toy problems, | |
| 365 and are slower on large problems due to having to store and lookup | |
| 366 the indices (causes cache misses). | |
| 367 In particular, I removed jtree_ndx_inf_eng and jtree_ndx_dbn_inf_eng, all the *ndx* | |
| 368 routines from potentials/Tables, and all the UID stuff from | |
| 369 add_BNT_to_path, | |
| 370 thus simplifying the code. | |
| 371 This required fixing hmm_(2TBN)_inf_engine/marginal_nodes\family, | |
| 372 and updating installC. | |
| 373 | |
| 374 | |
| 375 <li> Removed jtree_C_inf_engine and jtree_C_dbn_inf_engine. | |
| 376 The former is basically the same as using jtree_inf_engine with | |
| 377 multiply_by_table.c and marginalize_table.c. | |
| 378 The latter benefited slightly by assuming potentials were tables | |
| 379 (arrays not objects), but these negligible savings don't justify the | |
| 380 complexity and code duplication. | |
| 381 | |
| 382 <li> Removed stab_cond_gauss_inf_engine and | |
| 383 scg_unrolled_dbn_inf_engine, | |
| 384 written by shan.huang@intel.com, since the code was buggy. | |
| 385 | |
| 386 <li> Removed potential_engine, which was only experimental anyway. | |
| 387 | |
| 388 </ul> | |
| 389 | |
| 390 | |
| 391 | |
| 392 <li> 13 Nov 02 | |
| 393 <ul> | |
| 394 <li> <b>Released version 5</b>. | |
| 395 The previous version, released on 7/28/02, is available | |
| 396 <a href="BNT4.zip">here</a>. | |
| 397 | |
| 398 <li> Moved code and documentation to MIT. | |
| 399 | |
| 400 <li> Added repmat.c from Thomas Minka's lightspeed library. | |
| 401 Modified it so it can return an empty matrix. | |
| 402 | |
| 403 <li> Tomas Kocka fixed bug in the BDeu option for tabular_CPD, | |
| 404 and contributed graph/dag_to_eg, to convert to essential graphs. | |
| 405 | |
| 406 <!--<li> Wrote a <a href="../Papers/fastmult.pdf">paper</a> which explains | |
| 407 the ndx methods and the ndx cache BNT uses for fast | |
| 408 multiplication/ marginalization of multi-dimensional arrays. | |
| 409 --> | |
| 410 | |
| 411 <li> Modified definition of hhmmQ_CPD, so that Qps can now accept | |
| 412 parents in either the current or previous slice. | |
| 413 | |
| 414 <li> Added hhmm2Q_CPD class, which is simpler than hhmmQ (no embedded | |
| 415 sub CPDs, etc), and which allows the conditioning parents, Qps, to | |
| 416 be before (in the topological ordering) the F or Q(t-1) nodes. | |
| 417 See BNT/examples/dynamic/HHMM/Map/mk_map_hhmm for an example. | |
| 418 </ul> | |
| 419 | |
| 420 | |
| 421 <li> 7/28/02 | |
| 422 <ul> | |
| 423 <li> Changed graph/best_first_elim_order from min-fill to min-weight. | |
| 424 <li> Ernest Chan fixed bug in Kalman/sample_lds (G{i} becomes G{m} in | |
| 425 line 61). | |
| 426 <li> Tal Blum <bloom@cs.huji.ac.il> fixed bug in HMM/init_ghmm (Q | |
| 427 becomes K, the number of states). | |
| 428 <li> Fixed jtree_2tbn_inf_engine/set_fields so it correctly sets the | |
| 429 maximize flag to 1 even in subengines. | |
| 430 <li> Gary Bradski did a simple mod to the PC struct learn alg so you can pass it an | |
| 431 adjacency matrix as a constraint. Also, CovMat.m reads a file and | |
| 432 produces a covariance matrix. | |
| 433 <li> KNOWN BUG in CPDs/@hhmmQ_CPD/update_ess.m at line 72 caused by | |
| 434 examples/dynamic/HHMM/Square/learn_square_hhmm_cts.m at line 57. | |
| 435 <li> | |
| 436 The old version is available from www.cs.berkeley.edu/~murphyk/BNT.24june02.zip | |
| 437 </ul> | |
| 438 | |
| 439 | |
| 440 <li> 6/24/02 | |
| 441 <ul> | |
| 442 <li> Renamed dag_to_dot as graph_to_dot and added support for | |
| 443 undirected graphs. | |
| 444 <li> Changed syntax for HHMM CPD constructors: no need to specify d/D | |
| 445 anymore, so they can be used for more complex models. | |
| 446 <li> Removed redundant first argument to mk_isolated_tabular_CPD. | |
| 447 </ul> | |
| 448 | |
| 449 | |
| 450 <li> 6/19/02 | |
| 451 <ul> | |
| 452 <li> | |
| 453 Fixed most probable explanation code. | |
| 454 Replaced calc_mpe with find_mpe, which is now a method of certain | |
| 455 inference engines, e.g., jtree, belprop. | |
| 456 calc_mpe_global has become the find_mpe method of global_joint. | |
| 457 calc_mpe_bucket has become the find_mpe method of var_elim. | |
| 458 calc_mpe_dbn has become the find_mpe method of smoother. | |
| 459 These routines now correctly find the jointly most probable | |
| 460 explanation, instead of the marginally most probable assignments. | |
| 461 See examples/static/mpe1\mpe2 and examples/dynamic/viterbi1 | |
| 462 for examples. | |
| 463 Removed maximize flag from constructor and enter_evidence | |
| 464 methods, since this no longer needs to be specified by the user. | |
| 465 | |
| 466 <li> Rainer Deventer fixed a bug in | |
| 467 CPDs/@gaussian_CPD/update_ess.m: | |
| 468 now, hidden_cps = any(hidden_bitv(cps)), whereas it used to be | |
| 469 hidden_cps = all(hidden_bitv(cps)). | |
| 470 | |
| 471 </ul> | |
| 472 | |
| 473 | |
| 474 <li> 5/29/02 | |
| 475 <ul> | |
| 476 <li> CPDs/@gaussian_CPD/update_ess.m fixed WX,WXX,WXY (thanks to Rainer Deventer and | |
| 477 Yohsuke Minowa for spotting the bug). Does the C version work?? | |
| 478 <li> potentials/@cpot/mpot_to_cpot fixed K==0 case (thanks to Rainer Deventer). | |
| 479 <li> CPDs/@gaussian_CPD/log_prob_node now accepts non-cell array data | |
| 480 on self (thanks to rishi <rishi@capsl.udel.edu> for catching this). | |
| 481 </ul> | |
| 482 | |
| 483 | |
| 484 <li> 5/19/02 | |
| 485 <ul> | |
| 486 | |
| 487 <!-- | |
| 488 <li> Finally added <a href="../Papers/wei_ndx.ps.gz">paper</a> by Wei Hu (written | |
| 489 November 2001) | |
| 490 describing ndxB, ndxD, and ndxSD. | |
| 491 --> | |
| 492 | |
| 493 <li> Wei Hu made the following changes. | |
| 494 <ul> | |
| 495 <li> Memory leak repair: | |
| 496 a. distribute_evidence.c in static/@jtree_C directory | |
| 497 b. distribute_evidence.c in static/@jtree_ndx directory | |
| 498 c. marg_table.c in Tables dir | |
| 499 | |
| 500 <li> Add "@jtree_ndx_2TBN_inf_engine" in inference/online dir | |
| 501 | |
| 502 <li> Add "@jtree_sparse_inf_engine" in inference/static dir | |
| 503 | |
| 504 <li> Add "@jtree_sparse_2TBN_inf_engine" in inference/online dir | |
| 505 | |
| 506 <li> Modify "tabular_CPD.m" in CPDs/@tabular_CPD dir , used for sparse | |
| 507 | |
| 508 <li> In "@discrete_CPD" dir: | |
| 509 a. modify "convert_to_pot.m", used for sparse | |
| 510 b. add "convert_to_sparse_table.c" | |
| 511 | |
| 512 <li> In "potentials/@dpot" dir: | |
| 513 a. remove "divide_by_pot.c" and "multiply_by_pot.c" | |
| 514 b. add "divide_by_pot.m" and "multiply_by_pot.m" | |
| 515 c. modify "dpot.m", "marginalize_pot.m" and "normalize_pot.m" | |
| 516 | |
| 517 <li> In "potentials/Tables" dir: | |
| 518 a. modify mk_ndxB.c (for speedup) | |
| 519 b. add "mult_by_table.m", | |
| 520 "divide_by_table.m", | |
| 521 "divide_by_table.c", | |
| 522 "marg_sparse_table.c", | |
| 523 "mult_by_sparse_table.c", | |
| 524 "divide_by_sparse_table.c". | |
| 525 | |
| 526 <li> Modify "normalise.c" in misc dir, used for sparse. | |
| 527 | |
| 528 <li> Add discrete2, discrete3, filter2 and filter3 as test applications in test_BNT.m. | |
| 529 Modify installC.m. | |
| 530 </ul> | |
| 531 | |
| 532 <li> Kevin made the following changes related to strong junction | |
| 533 trees: | |
| 534 <ul> | |
| 535 <li> jtree_inf_engine line 75: | |
| 536 engine.root_clq = length(engine.cliques); | |
| 537 the last clq is guaranteed to be a strong root | |
| 538 | |
| 539 <li> dag_to_jtree line 38: [jtree, root, B, w] = | |
| 540 cliques_to_jtree(cliques, ns); | |
| 541 never call cliques_to_strong_jtree | |
| 542 | |
| 543 <li> strong_elim_order: use Ilya's code instead of topological sorting. | |
| 544 </ul> | |
| 545 | |
| 546 <li> Kevin fixed CPDs/@generic_CPD/learn_params, so it always passes | |
| 547 in the correct hidden_bitv field to update_params. | |
| 548 | |
| 549 </ul> | |
| 550 | |
| 551 | |
| 552 <li> 5/8/02 | |
| 553 <ul> | |
| 554 | |
| 555 <li> Jerod Weinman helped fix some bugs in HHMMQ_CPD/maximize_params. | |
| 556 | |
| 557 <li> Removed broken online inference from hmm_inf_engine. | |
| 558 It has been replaced by filter_inf_engine, which can take hmm_inf_engine | |
| 559 as an argument. | |
| 560 | |
| 561 <li> Changed graph visualization function names. | |
| 562 'draw_layout' is now 'draw_graph', | |
| 563 'draw_layout_dbn' is now 'draw_dbn', | |
| 564 'plotgraph' is now 'dag_to_dot', | |
| 565 'plothmm' is now 'hmm_to_dot', | |
| 566 added 'dbn_to_dot', | |
| 567 'mkdot' no longer exists: its functionality has been subsumed by dag_to_dot. | |
| 568 The dot functions now all take optional args in string/value format. | |
| 569 </ul> | |
| 570 | |
| 571 | |
| 572 <li> 4/1/02 | |
| 573 <ul> | |
| 574 <li> Added online inference classes. | |
| 575 See BNT/inference/online and BNT/examples/dynamic/filter1. | |
| 576 This is work in progress. | |
| 577 <li> Renamed cmp_inference to cmp_inference_dbn, and made its | |
| 578 interface and behavior more similar to cmp_inference_static. | |
| 579 <li> Added field rep_of_eclass to bnet and dbn, to simplify | |
| 580 parameter tying (see ~murphyk/Bayes/param_tieing.html). | |
| 581 <li> Added gmux_CPD (Gaussian multiplexers). | |
| 582 See BNT/examples/dynamic/SLAM/skf_data_assoc_gmux for an example. | |
| 583 <li> Modified the forwards sampling routines. | |
| 584 general/sample_dbn and sample_bnet now take optional arguments as | |
| 585 strings, and can sample with pre-specified evidence. | |
| 586 sample_bnet can only generate a single sample, and it is always a cell | |
| 587 array. | |
| 588 sample_node can only generate a single sample, and it is always a | |
| 589 scalar or vector. | |
| 590 This eliminates the false impression that the function was | |
| 591 ever vectorized (which was only true for tabular_CPDs). | |
| 592 (Calling sample_bnet inside a for-loop, as sketched below, is unlikely to be a bottleneck.) | |
| 593 <li> Updated usage.html's description of CPDs (gmux) and inference | |
| 594 (added gibbs_sampling and modified the description of pearl). | |
| 595 <li> Modified BNT/Kalman/kalman_filter\smoother so they now optionally | |
| 596 take an observed input (control) sequence. | |
| 597 Also, optional arguments are now passed as strings. | |
| 598 <li> Removed BNT/examples/static/uci_data to save space. | |
| 599 </ul> | |
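| A minimal sketch of drawing several samples with the new interface (the sample count is arbitrary): | |
| <pre> | |
| nsamples = 100; | |
| samples = cell(1, nsamples); | |
| for i = 1:nsamples | |
|   samples{i} = sample_bnet(bnet);   % each call returns a single sample as a cell array | |
| end | |
| </pre> | |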
| 600 | |
| 601 <li> 3/14/02 | |
| 602 <ul> | |
| 603 <li> pearl_inf_engine now works for (vector) Gaussian nodes, as well | |
| 604 as discrete. compute_pi has been renamed CPD_to_pi. compute_lambda_msg | |
| 605 has been renamed CPD_to_lambda_msg. These are now implemented for | |
| 606 the discrete_CPD class instead of tabular_CPD. noisyor and | |
| 607 Gaussian have their own private implementations. | |
| 608 Created examples/static/Belprop subdirectory. | |
| 609 <li> Added examples/dynamic/HHMM/Motif. | |
| 610 <li> Added Matt Brand's entropic prior code. | |
| 611 <li> cmp_inference_static has changed. It no longer returns err. It | |
| 612 can check for convergence. It can accept 'observed'. | |
| 613 </ul> | |
| 614 | |
| 615 | |
| 616 <li> 3/4/02 | |
| 617 <ul> | |
| 618 <li> Fixed HHMM code. Now BNT/examples/dynamic/HHMM/mk_abcd_hhmm | |
| 619 implements the example in the NIPS paper. See also | |
| 620 Square/sample_square_hhmm_discrete and other files. | |
| 621 | |
| 622 <li> Included Bhaskara Marthi's gibbs_sampling_inf_engine. Currently | |
| 623 this only works if all CPDs are tabular and if you call installC. | |
| 624 | |
| 625 <li> Modified Kalman/tracking_demo so it calls plotgauss2d instead of | |
| 626 gaussplot. | |
| 627 | |
| 628 <li> Included Sonia Leach's speedup of mk_rnd_dag. | |
| 629 My version created all NchooseK subsets, and then picked among them. Sonia | |
| 630 reorders the possible parents randomly and chooses | |
| 631 the first k. This saves on having to enumerate the large number of | |
| 632 possible subsets before picking from one. | |
| 633 | |
| 634 <li> Eliminated BNT/inference/static/Old, which contained some old | |
| 635 .mexglx files which wasted space. | |
| 636 </ul> | |
| 637 | |
| 638 | |
| 639 | |
| 640 <li> 2/15/02 | |
| 641 <ul> | |
| 642 <li> Removed the netlab directory, since most of it was not being | |
| 643 used, and it took up too much space (the goal is to have BNT.zip be | |
| 644 less than 1.4MB, so it fits on a floppy). | |
| 645 The required files have been copied into netlab2. | |
| 646 </ul> | |
| 647 | |
| 648 <li> 2/14/02 | |
| 649 <ul> | |
| 650 <li> Shan Huang fixed most (all?) of the bugs in his stable CG code. | |
| 651 scg1-3 now work, but scg_3node and scg_unstable give different | |
| 652 behavior than that reported in the Cowell book. | |
| 653 | |
| 654 <li> I changed gaussplot so it plots an ellipse representing the | |
| 655 eigenvectors of the covariance matrix, rather than numerically | |
| 656 evaluating the density and using a contour plot; this | |
| 657 is much faster and gives better pictures. The new function is | |
| 658 called plotgauss2d in BNT/Graphics. | |
| 659 | |
| 660 <li> Joni Alon <jalon@cs.bu.edu> fixed some small bugs: | |
| 661 mk_dhmm_obs_lik called forwards with the wrong args, and | |
| 662 add_BNT_to_path should quote filenames with spaces. | |
| 663 | |
| 664 <li> I added BNT/stats2/myunidrnd which is called by learn_struct_mcmc. | |
| 665 | |
| 666 <li> I changed BNT/potentials/@dpot/multiply_by_dpot so it now says | |
| 667 Tbig.T(:) = Tbig.T(:) .* Ts(:); | |
| 668 </ul> | |
| 669 | |
| 670 | |
| 671 <li> 2/6/02 | |
| 672 <ul> | |
| 673 <li> Added hierarchical HMMs. See BNT/examples/dynamic/HHMM and | |
| 674 CPDs/@hhmmQ_CPD and @hhmmF_CPD. | |
| 675 <li> sample_dbn can now sample until a certain condition is true. | |
| 676 <li> Sonia Leach fixed learn_struct_mcmc and changed mk_nbrs_of_digraph | |
| 677 so it only returns DAGs. | |
| 678 Click <a href="sonia_mcmc.txt">here</a> for details of her changes. | |
| 679 </ul> | |
| 680 | |
| 681 | |
| 682 <li> 2/4/02 | |
| 683 <ul> | |
| 684 <li> Wei Hu fixed a bug in | |
| 685 jtree_ndx_inf_engine/collect\distribute_evidence.c which failed when | |
| 686 maximize=1. | |
| 687 <li> | |
| 688 I fixed various bugs to do with conditional Gaussians, | |
| 689 so mixexp3 now works (thanks to Gerry Fung <gerry.fung@utoronto.ca> | |
| 690 for spotting the error). Specifically: | |
| 691 Changed softmax_CPD/convert_to_pot so it now puts cts nodes in cdom, and no longer inherits | |
| 692 this function from discrete_CPD. | |
| 693 Changed root_CPD/convert_to_pot so it puts self in cdom. | |
| 694 </ul> | |
| 695 | |
| 696 | |
| 697 <li> 1/31/02 | |
| 698 <ul> | |
| 699 <li> Fixed log_lik_mhmm (thanks to ling chen <real_lingchen@yahoo.com> | |
| 700 for spotting the typo) | |
| 701 <li> Now many scripts in examples/static call cmp_inference_static. | |
| 702 Also, SCG scripts have been simplified (but still don't work!). | |
| 703 <li> belprop and belprop_fg enter_evidence now returns [engine, ll, | |
| 704 niter], with ll=0, so the order of the arguments is compatible with other engines. | |
| 705 <li> Ensured that all enter_evidence methods support optional | |
| 706 arguments such as 'maximize', even if they ignore them. | |
| 707 <li> Added Wei Hu's potentials/Tables/rep_mult.c, which is used to | |
| 708 totally eliminate all repmats from gaussian_CPD/update_ess. | |
| 709 </ul> | |
| 710 | |
| 711 | |
| 712 <li> 1/30/02 | |
| 713 <ul> | |
| 714 <li> update_ess now takes hidden_bitv instead of hidden_self and | |
| 715 hidden_ps. This allows gaussian_CPD to distinguish hidden discrete and | |
| 716 cts parents. Now learn_params_em, as well as learn_params_dbn_em, | |
| 717 passes in this info, for speed. | |
| 718 | |
| 719 <li> gaussian_CPD update_ess is now vectorized for any case where all | |
| 720 the continuous nodes are observed (eg., Gaussian HMMs, AR-HMMs). | |
| 721 | |
| 722 <li> mk_dbn now automatically detects autoregressive nodes. | |
| 723 | |
| 724 <li> hmm_inf_engine now uses indexes in marginal_nodes/family for | |
| 725 speed. marginal_nodes can now only handle single nodes. | |
| 726 (SDndx is hard-coded, to avoid the overhead of using marg_ndx, | |
| 727 which is slow because of the case and global statements.) | |
| 728 | |
| 729 <li> add_ev_to_dmarginal now retains the domain field. | |
| 730 | |
| 731 <li> Wei Hu wrote potentials/Tables/repmat_and_mult.c, which is used to | |
| 732 avoid some of the repmat's in gaussian_CPD/update_ess. | |
| 733 | |
| 734 <li> installC no longer sets the global USEC, since USEC is set to 0 | |
| 735 by add_BNT_to_path, even if the C files have already been compiled | |
| 736 in a previous session. Instead, gaussian_CPD checks to | |
| 737 see if repmat_and_mult exists, and (bat1, chmm1, water1, water2) | |
| 738 check to see if jtree_C_inf_engine/collect_evidence exists. | |
| 739 Note that checking if a file exists is slow, so we do the check | |
| 740 inside the gaussian_CPD constructor, not inside update_ess. | |
| 741 | |
| 742 <li> uninstallC now deletes both .mex and .dll files, just in case I | |
| 743 accidentally ship a .zip file with binaries. It also deletes mex | |
| 744 files from jtree_C_inf_engine. | |
| 745 | |
| 746 <li> Now marginal_family for both jtree_limid_inf_engine and | |
| 747 global_joint_inf_engine returns a marginal structure and | |
| 748 potential, as required by solve_limid. | |
| 749 Other engines (eg. jtree_ndx, hmm) are not required to return a potential. | |
| 750 </ul> | |
| 751 | |
| 752 | |
| 753 | |
| 754 <li> 1/22/02 | |
| 755 <ul> | |
| 756 <li> Added an optional argument to mk_bnet and mk_dbn which lets you | |
| 757 add names to nodes. This uses the new assoc_array class (see the sketch after this list). | |
| 758 | |
| 759 <li> Added Yimin Zhang's (unfinished) classification/regression tree | |
| 760 code to CPDs/tree_CPD. | |
| 761 | |
| 762 </ul> | |
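| An illustrative sketch of naming nodes at creation time; the option name 'names' and the lookup syntax are my assumptions about how the assoc_array class is exposed: | |
| <pre> | |
| bnet = mk_bnet(dag, node_sizes, 'names', {'cloudy', 'sprinkler', 'rain', 'wetgrass'}); | |
| C = bnet.names('cloudy');   % map a name back to its node number | |
| </pre> | |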
| 763 | |
| 764 | |
| 765 | |
| 766 <li> 1/14/02 | |
| 767 <ul> | |
| 768 <li> Incorporated some of Shan Huang's (still broken) stable CG code. | |
| 769 </ul> | |
| 770 | |
| 771 | |
| 772 <li> 1/9/02 | |
| 773 <ul> | |
| 774 <li> Yimin Zhang vectorized @discrete_CPD/prob_node, which speeds up | |
| 775 structure learning considerably. I fixed this to handle softmax CPDs. | |
| 776 | |
| 777 <li> Shan Huang changed the stable conditional Gaussian code to handle | |
| 778 vector-valued nodes, but it is buggy. | |
| 779 | |
| 780 <li> I vectorized @gaussian_CPD/update_ess for a special case. | |
| 781 | |
| 782 <li> Removed denom=min(1, ... Z) from gaussian_CPD/maximize_params | |
| 783 (added to cope with negative temperature for entropic prior), which | |
| 784 gives wrong results on mhmm1. | |
| 785 </ul> | |
| 786 | |
| 787 | |
| 788 <li> 1/7/02 | |
| 789 | |
| 790 <ul> | |
| 791 <li> Removed the 'xo' typo from mk_qmr_bnet. | |
| 792 | |
| 793 <li> convert_dbn_CPDs_to_tables has been vectorized; it is now | |
| 794 substantially faster to compute the conditional likelihood for long sequences. | |
| 795 | |
| 796 <li> Simplified constructors for tabular_CPD and gaussian_CPD, so they | |
| 797 now both only take the form CPD(bnet, i, ...) for named arguments - | |
| 798 the CPD('self', i, ...) format is gone. Modified mk_fgraph_given_ev | |
| 799 to use mk_isolated_tabular_CPD instead. | |
| 800 | |
| 801 <li> Added entropic prior to tabular and Gaussian nodes. | |
| 802 For tabular_CPD, changed name of arguments to the constructor to | |
| 803 distinguish Dirichlet and entropic priors. In particular, | |
| 804 tabular_CPD(bnet, i, 'prior', 2) is now | |
| 805 tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_weight', 2); see the sketch after this list. | |
| 806 | |
| 807 <li> Added deterministic annealing to learn_params_dbn_em for use with | |
| 808 entropic priors. The old format learn(engine, cases, max_iter) has | |
| 809 been replaced by learn(engine, cases, 'max_iter', max_iter). | |
| 810 | |
| 811 <li> Changed examples/dynamic/bat1 and kjaerulff1, since default | |
| 812 equivalence classes have changed from untied to tied. | |
| 813 </ul> | |
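| A minimal sketch of the new argument formats described above (the node index i, the training data cases, and the iteration count are placeholders): | |
| <pre> | |
| CPD = tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_weight', 2); | |
| [bnet2, LL] = learn_params_dbn_em(engine, cases, 'max_iter', 10); | |
| </pre> | |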
| 814 | |
| 815 <li> 12/30/01 | |
| 816 <ul> | |
| 817 <li> DBN default equivalence classes for slice 2 has changed, so that | |
| 818 now parameters are tied for nodes with 'equivalent' parents in slices | |
| 819 1 and 2 (e.g., observed leaf nodes). This essentially makes passing in | |
| 820 the eclass arguments redundant (hooray!). | |
| 821 </ul> | |
| 822 | |
| 823 | |
| 824 <li> 12/20/01 | |
| 825 <ul> | |
| 826 <li> <b>Released version 4</b>. | |
| 827 Version 4 is considered a major new release | |
| 828 since it is not completely backwards compatible with V3. | |
| 829 Observed nodes are now specified when the bnet/dbn is created, | |
| 830 not when the engine is created. This changes the interface to many of | |
| 831 the engines, making the code no longer backwards compatible. | |
| 832 Hence support for non-named optional arguments (BNT2 style) has also | |
| 833 been removed; hence mk_dbn etc. requires arguments to be passed by name. | |
| 834 | |
| 835 <li> Ilya Shpitser's C code for triangulation now compiles under | |
| 836 Windows as well as Unix, thanks to Wei Hu. | |
| 837 | |
| 838 <li> All the ndx engines have been combined, and now take an optional | |
| 839 argument specifying what kind of index to use. | |
| 840 | |
| 841 <li> learn_params_dbn_em is now more efficient: | |
| 842 @tabular_CPD/update_ess for nodes whose families | |
| 843 are hidden does not need to call add_evidence_to_dmarginal, which | |
| 844 is slow. | |
| 845 | |
| 846 <li> Wei Hu fixed bug in jtree_ndxD, so now the matlab and C versions | |
| 847 both work. | |
| 848 | |
| 849 <li> dhmm_inf_engine replaces hmm_inf_engine, since the former can | |
| 850 handle any kind of topology and is slightly more efficient. dhmm is | |
| 851 extended to handle Gaussian, as well as discrete, | |
| 852 observed nodes. The new hmm_inf_engine no longer supports online | |
| 853 inference (which was broken anyway). | |
| 854 | |
| 855 <li> Added autoregressive HMM special case to hmm_inf_engine for | |
| 856 speed. | |
| 857 | |
| 858 <li> jtree_ndxSD_dbn_inf_engine now computes likelihood of the | |
| 859 evidence in a vectorized manner, where possible, just like | |
| 860 hmm_inf_engine. | |
| 861 | |
| 862 <li> Added mk_limid, and hence simplified mk_bnet and mk_dbn. | |
| 863 | |
| 864 | |
| 865 <li> Gaussian_CPD now uses 0.01*I prior on covariance matrix by | |
| 866 default. To do ML estimation, set 'cov_prior_weight' to 0 (see the sketch after this list). | |
| 867 | |
| 868 <li> Gaussian_CPD and tabular_CPD | |
| 869 optional binary arguments are now set using 0/1 rather than 'no'/'yes'. | |
| 870 | |
| 871 <li> Removed Shan Huang's PDAG and decomposable graph code, which will | |
| 872 be put in a separate structure learning library. | |
| 873 </ul> | |
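| An illustrative sketch of two of the V4 conventions above: observed nodes given at creation time, and the covariance prior weight set to 0 for ML estimation (onodes and the node index i are placeholders): | |
| <pre> | |
| bnet = mk_bnet(dag, node_sizes, 'observed', onodes); | |
| bnet.CPD{i} = gaussian_CPD(bnet, i, 'cov_prior_weight', 0);   % pure ML; no 0.01*I prior | |
| </pre> | |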
| 874 | |
| 875 | |
| 876 <li> 12/11/01 | |
| 877 <ul> | |
| 878 <li> Wei Hu fixed jtree_ndx*_dbn_inf_engine and marg_table.c. | |
| 879 | |
| 880 <li> Shan Huang contributed his implementation of stable conditional | |
| 881 Gaussian code (Lauritzen 1999), and methods to search through the | |
| 882 space of PDAGs (Markov equivalent DAGs) and undirected decomposable | |
| 883 graphs. The latter is still under development. | |
| 884 </ul> | |
| 885 | |
| 886 | |
| 887 <li> 12/10/01 | |
| 888 <ul> | |
| 889 <li> Included Wei Hu's new versions of the ndx* routines, which use | |
| 890 integers instead of doubles. The new versions are about 5 times faster | |
| 891 in C. In general, ndxSD is the best choice. | |
| 892 | |
| 893 <li> Fixed misc/add_ev_to_dmarginal so it works with the ndx routines | |
| 894 in bat1. | |
| 895 | |
| 896 <li> Added calc_mpe_dbn to do Viterbi parsing. | |
| 897 | |
| 898 <li> Updated dhmm_inf_engine so it computes marginals. | |
| 899 </ul> | |
| 900 | |
| 901 | |
| 902 | |
| 903 <li> 11/23/01 | |
| 904 <ul> | |
| 905 <li> learn_params now does MAP estimation (i.e., uses the Dirichlet prior, | |
| 906 if defined); see the sketch after this list. Thanks to Simon Keizer skeizer@cs.utwente.nl for spotting | |
| 907 this. | |
| 908 <li> Changed plotgraph so it calls ghostview with the output of dotty, | |
| 909 instead of converting from .ps to .tif. The resulting image is much | |
| 910 easier to read. | |
| 911 <li> Fixed cgpot/multiply_by_pots.m. | |
| 912 <li> Wei Hu fixed ind2subv.c. | |
| 913 <li> Changed arguments to compute_joint_pot. | |
| 914 </ul> | |
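| A minimal sketch of the idea, written with the named-argument syntax of later BNT versions (the node index i, the prior weight, and the layout of data are placeholders; exact signatures changed across versions): | |
| <pre> | |
| bnet.CPD{i} = tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_weight', 1); | |
| bnet2 = learn_params(bnet, data);   % MAP rather than ML estimates when a prior is defined | |
| </pre> | |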
| 915 | |
| 916 | |
| 917 <li> 11/1/01 | |
| 918 <ul> | |
| 919 <li> Changed sparse to dense in @dpot/multiply_pots, because sparse | |
| 920 arrays apparently cause a bug in the NT version of Matlab. | |
| 921 | |
| 922 <li> Fixed the bug in gaussian_CPD/log_prob_node.m which | |
| 923 incorrectly called the vectorized gaussian_prob with different means | |
| 924 when there were continuous parents and more than one case. | |
| 925 (Thanks to Dave Andre for finding this.) | |
| 926 | |
| 927 <li> Fixed the bug in root_CPD/convert_to_pot which did not check for | |
| 928 pot_type='g'. | |
| 929 (Thanks to Dave Andre for finding this.) | |
| 930 | |
| 931 <li> Changed calc_mpe and calc_mpe_global so they now return a cell array. | |
| 932 | |
| 933 <li> Combined pearl and loopy_pearl into a single inference engine | |
| 934 called 'pearl_inf_engine', which now takes optional arguments passed | |
| 935 in using the name/value pair syntax. | |
| 936 marginal_nodes/family now takes the optional add_ev argument (same as | |
| 937 jtree), which is the opposite of the previous shrink argument. | |
| 938 | |
| 939 <li> Created pearl_unrolled_dbn_inf_engine and "resurrected" | |
| 940 pearl_dbn_inf_engine in a simplified (but still broken!) form. | |
| 941 | |
| 942 <li> Wei Hu fixed the bug in ind2subv.c, so now ndxSD works. | |
| 943 He also made C versions of ndxSD and ndxB, and added (the unfinished) ndxD. | |
| 944 | |
| 945 </ul> | |
| 946 | |
| 947 | |
| 948 <li> 10/20/01 | |
| 949 | |
| 950 <ul> | |
| 951 <li> Removed the use_ndx option from jtree_inf, | |
| 952 and created 2 new inference engines: jtree_ndxSD_inf_engine and | |
| 953 jtree_ndxB_inf_engine. | |
| 954 The former stores 2 sets of indices for the small and difference | |
| 955 domains; the latter stores 1 set of indices for the big domain. | |
| 956 In Matlab, the ndxB version is often significantly faster than ndxSD | |
| 957 and regular jtree, except when the clique size is large. | |
| 958 When compiled to C, the difference between ndxB and ndxSD (in terms of | |
| 959 speed) vanishes; again, both are faster than compiled jtree, except | |
| 960 when the clique size is large. | |
| 961 Note: ndxSD currently has a bug in it, so it gives the wrong results! | |
| 962 (The DBN analogs are jtree_dbn_ndxSD_inf_engine and | |
| 963 jtree_dbn_ndxB_inf_engine.) | |
| 964 | |
| 965 <li> Removed duplicate files from the HMM and Kalman subdirectories. | |
| 966 e.g., normalise is now only in BNT/misc, so when compiled to C, it | |
| 967 masks the unique copy of the Matlab version. | |
| 968 </ul> | |
| 969 | |
| 970 | |
| 971 | |
| 972 <li> 10/17/01 | |
| 973 <ul> | |
| 974 <li> Fixed bugs introduced on 10/15: | |
| 975 Renamed extract_gaussian_CPD_params_given_ev_on_dps.m to | |
| 976 gaussian_CPD_params_given_dps.m since Matlab can't cope with such long | |
| 977 names (this caused cg1 to fail). Fixed bug in | |
| 978 gaussian_CPD/convert_to_pot, which now calls convert_to_table in the | |
| 979 discrete case. | |
| 980 | |
| 981 <li> Fixed bug in bk_inf_engine/marginal_nodes. | |
| 982 The test 'if nodes < ss' is now | |
| 983 'if nodes <= ss' (bug fix due to Stephen seg_ma@hotmail.com) | |
| 984 | |
| 985 <li> Simplified uninstallC. | |
| 986 </ul> | |
| 987 | |
| 988 | |
| 989 <li> 10/15/01 | |
| 990 <ul> | |
| 991 | |
| 992 <li> Added use_ndx option to jtree_inf and jtree_dbn_inf. | |
| 993 This pre-computes indices for multiplying, dividing and marginalizing | |
| 994 discrete potentials. | |
| 995 This is like the old jtree_fast_inf_engine, but we use an extra level | |
| 996 of indirection to reduce the number of indices needed (see | |
| 997 uid_generator object). | |
| 998 Sometimes this is faster than the original way... | |
| 999 This is work in progress. | |
| 1000 | |
| 1001 <li> The constructor for dpot no longer calls myreshape, which is very | |
| 1002 slow. | |
| 1003 But new dpots still must call myones. | |
| 1004 Hence discrete potentials are only sometimes 1D vectors (but should | |
| 1005 always be thought of as multi-D arrays). This is work in progress. | |
| 1006 </ul> | |
| 1007 | |
| 1008 | |
| 1009 <li> 10/6/01 | |
| 1010 <ul> | |
| 1011 <li> Fixed jtree_dbn_inf_engine, and added kjaerulff1 to test this. | |
| 1012 <li> Added option to jtree_inf_engine/marginal_nodes to return "full | |
| 1013 sized" marginals, even on observed nodes. | |
| 1014 <li> Clustered BK in examples/dynamic/bat1 seems to be broken, | |
| 1015 so it has been commented out. | |
| 1016 BK will be re-implemented on top of jtree_dbn, which should be much more | |
| 1017 efficient. | |
| 1018 </ul> | |
| 1019 | |
| 1020 <li> 9/25/01 | |
| 1021 <ul> | |
| 1022 <li> jtree_dbn_inf_engine is now more efficient than calling BK with | |
| 1023 clusters = exact, since it only uses the interface nodes, instead of | |
| 1024 all of them, to maintain the belief state. | |
| 1025 <li> Uninstalled the broken C version of strong_elim_order. | |
| 1026 <li> Changed order of arguments to unroll_dbn_topology, so that intra1 | |
| 1027 is no longer required. | |
| 1028 <li> Eliminated jtree_onepass, which can be simulated by calling | |
| 1029 collect_evidence on jtree. | |
| 1030 <li> online1 is no longer in the test_BNT suite, since there is some | |
| 1031 problem with online prediction with mixtures of Gaussians using BK. | |
| 1032 This functionality is no longer supported, since doing it properly is | |
| 1033 too much work. | |
| 1034 </ul> | |
| 1035 </li> | |
| 1036 | |
| 1037 <li> 9/7/01 | |
| 1038 <ul> | |
| 1039 <li> Added Ilya Shpitser's C triangulation code (43x faster!). | |
| 1040 Currently this only compiles under linux; windows support is being added. | |
| 1041 </ul> | |
| 1042 | |
| 1043 | |
| 1044 <li> 9/5/01 | |
| 1045 <ul> | |
| 1046 <li> Fixed typo in CPDs/@tabular_kernel/convert_to_table (thanks, | |
| 1047 Philippe!) | |
| 1048 <li> Fixed problems with clamping nodes in tabular_CPD, learn_params, | |
| 1049 learn_params_tabular, and bayes_update_params. See | |
| 1050 BNT/examples/static/learn1 for a demo. | |
| 1051 </ul> | |
| 1052 | |
| 1053 | |
| 1054 <li> 9/3/01 | |
| 1055 <ul> | |
| 1056 <li> Fixed typo on line 87 of gaussian_CPD which caused an error in cg1.m | |
| 1057 <li> Installed Wei Hu's latest version of jtree_C_inf_engine, which | |
| 1058 can now compute marginals on any clique/cluster. | |
| 1059 <li> Added Yair Weiss's code to compute the Bethe free energy | |
| 1060 approximation to the log likelihood in loopy_pearl (still need to add | |
| 1061 this to belprop). The return arguments are now: engine, loglik and | |
| 1062 niter, which is different than before. | |
| 1063 </ul> | |
| 1064 | |
| 1065 | |
| 1066 | |
| 1067 <li> 8/30/01 | |
| 1068 <ul> | |
| 1069 <li> Fixed bug in BNT/examples/static/id1 which passed hard-coded | |
| 1070 directory name to belprop_inf_engine. | |
| 1071 | |
| 1072 <li> Changed tabular_CPD and gaussian_CPD so they can now be created | |
| 1073 without having to pass in a bnet. | |
| 1074 | |
| 1075 <li> Finished mk_fgraph_given_ev. See the fg* files in examples/static | |
| 1076 for demos of factor graphs (work in progress). | |
| 1077 </ul> | |
| 1078 | |
| 1079 | |
| 1080 | |
| 1081 <li> 8/22/01 | |
| 1082 <ul> | |
| 1083 | |
| 1084 <li> Removed jtree_compiled_inf_engine, | |
| 1085 since the C code it generated was so big that it would barf on large | |
| 1086 models. | |
| 1087 | |
| 1088 <li> Tidied up the potentials/Tables directory. | |
| 1089 Removed mk_marg/mult_ndx.c, | |
| 1090 which have been superseded by the much faster mk_marg/mult_index.c | |
| 1091 (written by Wei Hu). | |
| 1092 Renamed the Matlab versions mk_marginalise/multiply_table_ndx.m | |
| 1093 to be mk_marg/mult_index.m to be compatible with the C versions. | |
| 1094 Note: nobody calls these routines anymore! | |
| 1095 (jtree_C_inf_engine/enter_softev.c has them built-in.) | |
| 1096 Removed mk_ndx.c, which was only used by jtree_compiled. | |
| 1097 Removed mk_cluster_clq_ndx.m, mk_CPD_clq_ndx, and marginalise_table.m | |
| 1098 which were not used. | |
| 1099 Moved shrink_obs_dims_in_table.m to misc. | |
| 1100 | |
| 1101 <li> In potentials/@dpot directory: removed multiply_by_pot_C_old.c. | |
| 1102 Now marginalize_pot.c can handle maximization, | |
| 1103 and divide_by_pot.c has been implemented. | |
| 1104 marginalize/multiply/divide_by_pot.m no longer have useC or genops options. | |
| 1105 (To get the C versions, use installC.m) | |
| 1106 | |
| 1107 <li> Removed useC and genops options from jtree_inf_engine.m | |
| 1108 To use the C versions, install the C code. | |
| 1109 | |
| 1110 <li> Updated BNT/installC.m. | |
| 1111 | |
| 1112 <li> Added fclose to @loopy_pearl_inf/enter_evidence. | |
| 1113 | |
| 1114 <li> Changes to MPE routines in BNT/general. | |
| 1115 The maximize parameter is now specified inside enter_evidence | |
| 1116 instead of when the engine is created. | |
| 1117 Renamed calc_mpe_given_inf_engine to just calc_mpe. | |
| 1118 Added Ron Zohar's optional fix to handle the case of ties. | |
| 1119 Now returns log-likelihood instead of likelihood. | |
| 1120 Added calc_mpe_global. | |
| 1121 Removed references to genops in calc_mpe_bucket.m | |
| 1122 Test file is now called mpe1.m | |
| 1123 | |
| 1124 <li> For DBN inference, filter argument is now passed by name, | |
| 1125 as is maximize. This is NOT BACKWARDS COMPATIBLE. | |
| 1126 | |
| 1127 <li> Removed @loopy_dbn_inf_engine, which was too complicated. | |
| 1128 In the future, a new version, which applies static loopy to the | |
| 1129 unrolled DBN, will be provided. | |
| 1130 | |
| 1131 <li> discrete_CPD class now contains the family sizes and supports the | |
| 1132 method dom_sizes. This is because it could not access the child field | |
| 1133 CPD.sizes, and mysize(CPT) may give the wrong answer. | |
| 1134 | |
| 1135 <li> Removed all functions of the form CPD_to_xxx, where xxx = dpot, cpot, | |
| 1136 cgpot, table, tables. These have been replaced by convert_to_pot, | |
| 1137 which takes a pot_type argument. | |
| 1138 @discrete_CPD calls convert_to_table to implement a default | |
| 1139 convert_to_pot. | |
| 1140 @discrete_CPD calls CPD_to_CPT to implement a default | |
| 1141 convert_to_table. | |
| 1142 The convert_to_xxx routines take fewer arguments (no need to pass in | |
| 1143 the globals node_sizes and cnodes!). | |
| 1144 Eventually, convert_to_xxx will be vectorized, so it will operate on | |
| 1145 all nodes in the same equivalence class "simultaneously", which should | |
| 1146 be significantly quicker, at least for Gaussians. | |
| 1147 | |
| 1148 <li> Changed discrete_CPD/sample_node and prob_node to use | |
| 1149 convert_to_table, instead of CPD_to_CPT, so mlp/softmax nodes can | |
| 1150 benefit. | |
| 1151 | |
| 1152 <li> Removed @tabular_CPD/compute_lambda_msg_fast and | |
| 1153 private/prod_CPD_and_pi_msgs_fast, since no one called them. | |
| 1154 | |
| 1155 <li> Renamed compute_MLE to learn_params, | |
| 1156 by analogy with bayes_update_params (also because it may compute an | |
| 1157 MAP estimate). | |
| 1158 | |
| 1159 <li> Renamed set_params to set_fields | |
| 1160 and get_params to get_field for CPD and dpot objects, to | |
| 1161 avoid confusion with the parameters of the CPD. | |
| 1162 | |
| 1163 <li> Removed inference/doc, which has been superseded | |
| 1164 by the web page. | |
| 1165 | |
| 1166 <li> Removed inference/static/@stab_cond_gauss_inf_engine, which is | |
| 1167 broken, and all references to stable CG. | |
| 1168 | |
| 1169 </ul> | |
| 1170 | |
| 1171 | |
| 1172 | |
| 1173 | |
| 1174 | |
| 1175 <li> 8/12/01 | |
| 1176 <ul> | |
| 1177 <li> I removed potentials/@dpot/marginalize_pot_max. | |
| 1178 Now marginalize_pot for all potential classes takes an optional third | |
| 1179 argument, specifying whether to sum out or max out (see the sketch after this list). | |
| 1180 The dpot class also takes in optional arguments specifying whether to | |
| 1181 use C or genops (the global variable USE_GENOPS has been eliminated). | |
| 1182 | |
| 1183 <li> potentials/@dpot/marginalize_pot has been simplified by assuming | |
| 1184 that 'onto' is always in ascending order (i.e., we remove | |
| 1185 Maynard-Reid's patch). This is to keep the code identical to the C | |
| 1186 version and the other class implementations. | |
| 1187 | |
| 1188 <li> Added Ron Zohar's general/calc_mpe_bucket function, | |
| 1189 and my general/calc_mpe_given_inf_engine, for calculating the most | |
| 1190 probable explanation. | |
| 1191 | |
| 1192 | |
| 1193 <li> Added Wei Hu's jtree_C_inf_engine. | |
| 1194 enter_softev.c is about 2 times faster than enter_soft_evidence.m. | |
| 1195 | |
| 1196 <li> Added the latest version of jtree_compiled_inf_engine by Wei Hu. | |
| 1197 The 'C' ndx_method now calls potentials/Tables/mk_marg/mult_index, | |
| 1198 and the 'oldC' ndx_method calls potentials/Tables/mk_marg/mult_ndx. | |
| 1199 | |
| 1200 <li> Added potentials/@dpot/marginalize_pot_C.c and | |
| 1201 multiply_by_pot_C.c by Wei Hu. | |
| 1202 These can be called by setting the 'useC' argument in | |
| 1203 jtree_inf_engine. | |
| 1204 | |
| 1205 <li> Added BNT/installC.m to compile all the mex files. | |
| 1206 | |
| 1207 <li> Renamed prob_fully_instantiated_bnet to log_lik_complete. | |
| 1208 | |
| 1209 <li> Added Shan Huang's unfinished stable conditional Gaussian | |
| 1210 inference routines. | |
| 1211 </ul> | |
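| An illustrative sketch of the optional third argument (variable names are placeholders): | |
| <pre> | |
| maximize = 1;                                        % 1 = max out, 0 = sum out | |
| smallpot = marginalize_pot(bigpot, onto, maximize);  % reduce bigpot to the nodes in onto | |
| </pre> | |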
| 1212 | |
| 1213 | |
| 1214 | |
| 1215 <li> 7/13/01 | |
| 1216 <ul> | |
| 1217 <li> Added the latest version of jtree_compiled_inf_engine by Wei Hu. | |
| 1218 <li> Added the genops class by Doug Schwarz (see | |
| 1219 BNT/genopsfun/README). This provides a 1-2x speed-up of | |
| 1220 potentials/@dpot/multiply_by_pot and divide_by_pot. | |
| 1221 <li> The function BNT/examples/static/qmr_compiled compares the | |
| 1222 performance gains of these new functions. | |
| 1223 </ul> | |
| 1224 | |
| 1225 <li> 7/6/01 | |
| 1226 <ul> | |
| 1227 <li> Made bk_inf_engine use the name/value argument syntax. This can | |
| 1228 now do max-product (Viterbi) as well as sum-product | |
| 1229 (forward-backward). | |
| 1230 <li> Changed examples/static/mfa1 to use the new name/value argument | |
| 1231 syntax. | |
| 1232 </ul> | |
| 1233 | |
| 1234 | |
| 1235 <li> 6/28/01 | |
| 1236 | |
| 1237 <ul> | |
| 1238 | |
| 1239 <li> <b>Released version 3</b>. | |
| 1240 Version 3 is considered a major new release | |
| 1241 since it is not completely backwards compatible with V2. | |
| 1242 V3 supports decision and utility nodes, loopy belief propagation on | |
| 1243 general graphs (including undirected), structure learning for non-tabular nodes, | |
| 1244 a simplified way of handling optional | |
| 1245 arguments to functions, | |
| 1246 and many other features which are described below. | |
| 1247 In addition, the documentation has been substantially rewritten. | |
| 1248 | |
| 1249 <li> The following functions can now take optional arguments specified | |
| 1250 as name/value pairs, instead of passing arguments in a fixed order: | |
| 1251 mk_bnet, jtree_inf_engine, tabular_CPD, gaussian_CPD, softmax_CPD, mlp_CPD, | |
| 1252 enter_evidence. | |
| 1253 This is very helpful if you want to use default values for most parameters (see the sketch after this list). | |
| 1254 The functions remain backwards compatible with BNT2. | |
| 1255 | |
| 1256 <li> dsoftmax_CPD has been renamed softmax_CPD, replacing the older | |
| 1257 version of softmax. The directory netlab2 has been updated, and | |
| 1258 contains weighted versions of some of the learning routines in netlab. | |
| 1259 (This code is still being developed by P. Brutti.) | |
| 1260 | |
| 1261 <li> The "fast" versions of the inference engines, which generated | |
| 1262 matlab code, have been removed. | |
| 1263 @jtree_compiled_inf_engine now generates C code. | |
| 1264 (This feature is currently being developed by Wei Hu of Intel (China), | |
| 1265 and is not yet ready for public use.) | |
| 1266 | |
| 1267 <li> CPD_to_dpot, CPD_to_cpot, CPD_to_cgpot and CPD_to_upot | |
| 1268 are in the process of being replaced by convert_to_pot. | |
| 1269 | |
| 1270 <li> determine_pot_type now takes as arguments (bnet, onodes) | |
| 1271 instead of (onodes, cnodes, dag), | |
| 1272 so it can detect the presence of utility nodes as well as continuous | |
| 1273 nodes. | |
| 1274 Hence this function is not backwards compatible with BNT2. | |
| 1275 | |
| 1276 <li> The structure learning code (K2, mcmc) now works with any node | |
| 1277 type, not just tabular. | |
| 1278 mk_bnets_tabular has been eliminated. | |
| 1279 bic_score_family and dirichlet_score_family will be replaced by score_family. | |
| 1280 Note: learn_struct_mcmc has a new interface that is not backwards | |
| 1281 compatible with BNT2. | |
| 1282 | |
| 1283 <li> update_params_complete has been renamed bayes_update_params. | |
| 1284 Also, learn_params_tabular has been replaced by learn_params, which | |
| 1285 works for any CPD type. | |
| 1286 | |
| 1287 <li> Added decision/utility nodes. | |
| 1288 </ul> | |
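| <p> | |
| As an illustration of the name/value style, here is a minimal sketch (not taken from the release itself; the option names 'discrete' and 'CPT' follow the standard BNT documentation and should be checked against your version): | |
| <pre> | |
| % Tiny 2-node net, 1 -> 2, both nodes binary and discrete. | |
| N = 2; dag = zeros(N,N); dag(1,2) = 1; | |
| node_sizes = [2 2]; | |
| bnet = mk_bnet(dag, node_sizes, 'discrete', 1:N);      % name/value args instead of a fixed order | |
| bnet.CPD{1} = tabular_CPD(bnet, 1, 'CPT', [0.5 0.5]); | |
| bnet.CPD{2} = tabular_CPD(bnet, 2, 'CPT', [0.8 0.3 0.2 0.7]); | |
| engine = jtree_inf_engine(bnet); | |
| [engine, ll] = enter_evidence(engine, {[], 2});        % node 1 hidden, node 2 observed in state 2 | |
| </pre> | |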
| 1289 | |
| 1290 | |
| 1291 <li> 6/6/01 | |
| 1292 <ul> | |
| 1293 <li> Added soft evidence to jtree_inf_engine. | |
| 1294 <li> Changed the documentation slightly (added soft evidence and | |
| 1295 parameter tying, and separated parameter and structure learning). | |
| 1296 <li> Changed the parameters of determine_pot_type, so it no longer | |
| 1297 needs to be passed a DAG argument. | |
| 1298 <li> Fixed parameter tying in mk_bnet (the number of CPDs now equals the | |
| 1299 number of equivalence classes). | |
| 1300 <li> Made learn_struct_mcmc work in matlab version 5.2 (thanks to | |
| 1301 Nimrod Megiddo for finding this bug). | |
| 1302 <li> Made 'acyclic.m' work for undirected graphs. | |
| 1303 </ul> | |
| 1304 | |
| 1305 | |
| 1306 <li> 5/23/01 | |
| 1307 <ul> | |
| 1308 <li> Added Tamar Kushnir's code for the IC* algorithm | |
| 1309 (learn_struct_pdag_ic_star). This learns the | |
| 1310 structure of a PDAG, and can identify the presence of latent | |
| 1311 variables. | |
| 1312 | |
| 1313 <li> Added Yair Weiss's code for computing the MAP assignment using | |
| 1314 junction tree (i.e., a new method called @dpot/marginalize_pot_max | |
| 1315 instead of marginalize_pot). | |
| 1316 | |
| 1317 <li> Added @discrete_CPD/prob_node in addition to log_prob_node to handle | |
| 1318 deterministic CPDs. | |
| 1319 </ul> | |
| 1320 | |
| 1321 | |
| 1322 <li> 5/12/01 | |
| 1323 <ul> | |
| 1324 <li> Pierpaolo Brutti updated his mlp and dsoftmax CPD classes, | |
| 1325 and improved the HME code. | |
| 1326 | |
| 1327 <li> HME example now added to web page. (The previous example was non-hierarchical.) | |
| 1328 | |
| 1329 <li> Philippe Leray (author of the French documentation for BNT) | |
| 1330 pointed out that I was including netlab.tar unnecessarily. | |
| 1331 </ul> | |
| 1332 | |
| 1333 | |
| 1334 <li> 5/4/01 | |
| 1335 <ul> | |
| 1336 <li> Added mlp_CPD which defines a CPD as a (conditional) multi-layer perceptron. | |
| 1337 This class was written by Pierpaolo Brutti. | |
| 1338 | |
| 1339 <li> Added hierarchical mixtures of experts demo (due to Pierpaolo Brutti). | |
| 1340 | |
| 1341 <li> Fixed some bugs in dsoftmax_CPD. | |
| 1342 | |
| 1343 <li> Now the BNT distribution includes the whole | |
| 1344 <a href="http://www.ncrg.aston.ac.uk/netlab/">Netlab</a> library in a | |
| 1345 subdirectory. | |
| 1346 It also includes my HMM and Kalman filter toolboxes, instead of just | |
| 1347 fragments of them. | |
| 1348 </ul> | |
| 1349 | |
| 1350 | |
| 1351 <li> 5/2/01 | |
| 1352 <ul> | |
| 1353 <li> gaussian_inf_engine/enter_evidence now correctly returns the | |
| 1354 loglik, even if all nodes are instantiated (bug fix due to | |
| 1355 Michael Robert James). | |
| 1356 | |
| 1357 <li> Added dsoftmax_CPD which allows softmax nodes to have discrete | |
| 1358 and continuous parents; the discrete parents act as indices into the | |
| 1359 parameters for the continuous node, by analogy with conditional | |
| 1360 Gaussian nodes. This class was written by Pierpaolo Brutti. | |
| 1361 </ul> | |
| 1362 | |
| 1363 | |
| 1364 <li> 3/27/01 | |
| 1365 <ul> | |
| 1366 <li> learn_struct_mcmc no longer returns sampled_bitv. | |
| 1367 <li> Added mcmc_sample_to_hist to post-process the set of samples. | |
| 1368 </ul> | |
| 1369 | |
| 1370 <li> 3/21/01 | |
| 1371 <ul> | |
| 1372 <li> Changed license from UC to GNU Library GPL. | |
| 1373 | |
| 1374 <li> Made all CPD constructors accept 0 arguments, so now bnets can be | |
| 1375 saved to and loaded from files. | |
| 1376 | |
| 1377 <li> Improved the implementation of sequential and batch Bayesian | |
| 1378 parameter learning for tabular CPDs with completely observed data (see | |
| 1379 log_marg_lik_complete and update_params_complete). This code also | |
| 1380 handles interventional data. | |
| 1381 | |
| 1382 <li> Added MCMC structure learning for completely observed, discrete, | |
| 1383 static BNs. | |
| 1384 | |
| 1385 <li> Started implementing Bayesian estimation of linear Gaussian | |
| 1386 nodes. See root_gaussian_CPD and | |
| 1387 linear_gaussian_CPD. The old gaussian_CPD class has not been changed. | |
| 1388 | |
| 1389 <li> Renamed evaluate_CPD to log_prob_node, and simplified its | |
| 1390 arguments. | |
| 1391 | |
| 1392 <li> Renamed sample_CPD to sample_node, simplified its | |
| 1393 arguments, and vectorized it. | |
| 1394 | |
| 1395 <li> Renamed "learn_params_tabular" to "update_params_complete". | |
| 1396 This does Bayesian updating, but no longer computes the BIC score. | |
| 1397 | |
| 1398 <li> Made routines for completely observed networks (sampling, | |
| 1399 complete-data likelihood, etc.) handle either cell arrays or regular | |
| 1400 arrays; the latter are faster. | |
| 1401 If some nodes are not scalars, or are hidden, you must use cell arrays, | |
| 1402 and you must convert to a cell array before inference (see the sketch after this list). | |
| 1403 | |
| 1404 <li> Fixed a bug in the gaussian_CPD constructor: when creating a CPD with | |
| 1405 more than one discrete parent and random parameters, the matrices had | |
| 1406 the wrong shape (bug fix due to Xuejing Sun). | |
| 1407 </ul> | |
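| <p> | |
| A minimal sketch of the cell-array convention (illustrative only: it assumes a fully observed, all-discrete network with scalar nodes, and uses sample_bnet and the cell2num helper from the accompanying KPMtools directory): | |
| <pre> | |
| nsamples = 100; | |
| data = zeros(N, nsamples);                 % regular array: OK (and faster) when every node is scalar and observed | |
| for m = 1:nsamples | |
|   data(:,m) = cell2num(sample_bnet(bnet)); % sample_bnet returns a cell array, one cell per node | |
| end | |
| evidence = num2cell(data(:,1));            % inference routines still expect a cell array | |
| [engine, ll] = enter_evidence(jtree_inf_engine(bnet), evidence); | |
| </pre> | |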
| 1408 | |
| 1409 | |
| 1410 | |
| 1411 <li> 11/24/00 | |
| 1412 <ul> | |
| 1413 <li> Renamed learn_params and learn_params_dbn to learn_params_em and | |
| 1414 learn_params_dbn_em. The return arguments are now [bnet, LLtrace, | |
| 1415 engine] instead of [engine, LLtrace] (see the sketch after this list). | |
| 1416 <li> Added structure learning code for static nets (K2, PC). | |
| 1417 <li> Renamed learn_struct_inter_full_obs as learn_struct_dbn_reveal, | |
| 1418 and reimplemented it to make it simpler and faster. | |
| 1419 <li> Added sequential Bayesian parameter learning (learn_params_tabular). | |
| 1420 <li> Major rewrite of the documentation. | |
| 1421 </ul> | |
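| <p> | |
| For example, the new calling convention looks roughly like this (a sketch only; the cases and max_iter arguments are assumptions about the usual EM interface, not part of the original note): | |
| <pre> | |
| engine = jtree_inf_engine(bnet); | |
| % cases{m}{i} holds the value of node i in training case m (use [] for hidden values) | |
| max_iter = 10; | |
| [bnet2, LLtrace, engine2] = learn_params_em(engine, cases, max_iter);  % was [engine, LLtrace] = learn_params(...) | |
| </pre> | |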
| 1422 | |
| 1423 <!-- | |
| 1424 <li> 6/1/00 | |
| 1425 <ul> | |
| 1426 <li> Subtracted 1911 off the counter, so now it counts hits from | |
| 1427 5/22/00. (The initial value of 1911 was a conservative lower bound on the number of | |
| 1428 hits from the time the page was created.) | |
| 1429 </ul> | |
| 1430 --> | |
| 1431 | |
| 1432 <li> 5/22/00 | |
| 1433 <ul> | |
| 1434 <li> Added online filtering and prediction. | |
| 1435 <li> Added the factored frontier and loopy_dbn algorithms. | |
| 1436 <li> Separated the online user manual into two, for static and dynamic | |
| 1437 networks. | |
| 1438 <!-- | |
| 1439 <li> Added a counter to the BNT web page, and initialized it to 1911, | |
| 1440 which is the number of people who have downloaded my software (BNT and | |
| 1441 other toolboxes) since 8/24/98. | |
| 1442 --> | |
| 1443 <li> Added a counter to the BNT web page. | |
| 1444 <!-- | |
| 1445 Up to this point, 1911 people had downloaded my software (BNT and | |
| 1446 other toolboxes) since 8/24/98. | |
| 1447 --> | |
| 1448 </ul> | |
| 1449 | |
| 1450 | |
| 1451 <li> 4/27/00 | |
| 1452 <ul> | |
| 1453 <li> Fixed the typo in bat1.m | |
| 1454 <li> Added preliminary code for online inference in DBNs | |
| 1455 <li> Added coupled HMM example | |
| 1456 </ul> | |
| 1457 | |
| 1458 <li> 4/23/00 | |
| 1459 <ul> | |
| 1460 <li> Fixed the bug in the fast inference routines where the indices | |
| 1461 are empty (arises in bat1.m). | |
| 1462 <li> Sped up marginal_family for the fast engines by precomputing indices. | |
| 1463 </ul> | |
| 1464 | |
| 1465 <li> 4/17/00 | |
| 1466 <ul> | |
| 1467 <li> Simplified implementation of BK_inf_engine by using soft evidence. | |
| 1468 <li> Added jtree_onepass_inf_engine (which computes a single marginal) | |
| 1469 and modified jtree_dbn_fast to use it. | |
| 1470 </ul> | |
| 1471 | |
| 1472 <li> 4/14/00 | |
| 1473 <ul> | |
| 1474 <li> Added fast versions of jtree and BK, which are | |
| 1475 designed for models where the division into hidden/observed is fixed, | |
| 1476 and all hidden variables are discrete. These routines are 2-3 times | |
| 1477 faster than their non-fast counterparts. | |
| 1478 | |
| 1479 <li> Added graph drawing code | |
| 1480 contributed by Ali Taylan Cemgil from the University of Nijmegen. | |
| 1481 </ul> | |
| 1482 | |
| 1483 <li> 4/10/00 | |
| 1484 <ul> | |
| 1485 <li> Distinguished cnodes and cnodes_slice in DBNs so that kalman1 | |
| 1486 works with BK. | |
| 1487 <li> Removed dependence on cellfun (which requires Matlab 5.3) | |
| 1488 by adding isemptycell. Now the code works in 5.2. | |
| 1489 <li> Changed the UC copyright notice. | |
| 1490 </ul> | |
| 1491 | |
| 1492 | |
| 1493 | |
| 1494 <li> 3/29/00 | |
| 1495 <ul> | |
| 1496 <li><b>Released BNT 2.0</b>, now with objects! | |
| 1497 Here are the major changes. | |
| 1498 | |
| 1499 <li> There are now 3 classes of objects in BNT: | |
| 1500 Conditional Probability Distributions, potentials (for junction tree), | |
| 1501 and inference engines. | |
| 1502 Making an inference algorithm (junction tree, sampling, loopy belief | |
| 1503 propagation, etc.) an object might seem counter-intuitive, but in | |
| 1504 fact turns out to be a good idea, since the code and documentation | |
| 1505 can be made modular. | |
| 1506 (In Java, each algorithm would be a class that implements the | |
| 1507 inferenceEngine interface. Since Matlab doesn't support interfaces, | |
| 1508 inferenceEngine is an abstract (virtual) base class.) A minimal sketch of the common engine interface appears after this list. | |
| 1509 | |
| 1510 <p> | |
| 1511 <li> | |
| 1512 In version 1, instead of Matlab's built-in objects, | |
| 1513 I used structs and a | |
| 1514 simulated dispatch mechanism based on the type-tag system in the | |
| 1515 classic textbook by Abelson | |
| 1516 and Sussman ("Structure and Interpretation of Computer Programs", | |
| 1517 MIT Press, 1985). This required editing the dispatcher every time a | |
| 1518 new object type was added. It also required unique (and hence long) | |
| 1519 names for each method, and allowed the user unrestricted access to | |
| 1520 the internal state of objects. | |
| 1521 | |
| 1522 <p> | |
| 1523 <li> The Bayes net itself is now a lightweight struct, and can be | |
| 1524 used to specify a model independently of the inference algorithm used | |
| 1525 to process it. | |
| 1526 In version 1, the inference engine was stored inside the Bayes net. | |
| 1527 | |
| 1528 <!-- | |
| 1529 See the list of <a href="differences2.html">changes from version | |
| 1530 1</a>. | |
| 1531 --> | |
| 1532 </ul> | |
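| <p> | |
| A rough sketch of the common engine interface introduced in BNT 2.0 (function names follow the standard BNT documentation; treat this as illustrative rather than as the exact 2.0 API): | |
| <pre> | |
| engine = jtree_inf_engine(bnet);                    % choose an algorithm by constructing an engine object | |
| % engine = likelihood_weighting_inf_engine(bnet);   % ...or swap in another engine with the same interface | |
| [engine, loglik] = enter_evidence(engine, evidence); | |
| marg = marginal_nodes(engine, 1);                   % marg.T holds the marginal table for node 1 | |
| </pre> | |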
| 1533 | |
| 1534 | |
| 1535 | |
| 1536 <li> 11/24/99 | |
| 1537 <ul> | |
| 1538 <li> Added fixed-lag smoothing, online EM, and the ability to learn | |
| 1539 switching HMMs (POMDPs) to the HMM toolbox. | |
| 1540 <li> Renamed the HMM toolbox function 'mk_dhmm_obs_mat' to | |
| 1541 'mk_dhmm_obs_lik', and similarly for ghmm and mhmm. Updated references | |
| 1542 to these functions in BNT. | |
| 1543 <li> Changed the order of the return arguments of kalman_filter to make it | |
| 1544 more natural. Updated references to this function in BNT. | |
| 1545 </ul> | |
| 1546 | |
| 1547 | |
| 1548 | |
| 1549 <li>10/27/99 | |
| 1550 <ul> | |
| 1551 <li>Fixed line 42 of potential/cg/marginalize_cgpot and lines 32-39 of bnet/add_evidence_to_marginal | |
| 1552 (thanks to Rainer Deventer for spotting these bugs!) | |
| 1553 </ul> | |
| 1554 | |
| 1555 | |
| 1556 <li>10/21/99 | |
| 1557 <ul> | |
| 1558 <li>Completely changed the blockmatrix class to make its semantics | |
| 1559 more sensible. The constructor is not backwards compatible! | |
| 1560 </ul> | |
| 1561 | |
| 1562 <li>10/6/99 | |
| 1563 <ul> | |
| 1564 <li>Fixed all_vals = cat(1, vals{:}) in user/enter_evidence | |
| 1565 <li>Vectorized ind2subv and subv2ind and removed the C versions. | |
| 1566 <li>Made mk_CPT_from_mux_node much faster by having it call vectorized | |
| 1567 ind2subv | |
| 1568 <li>Added Sondhauss's bug fix to line 68 of bnet/add_evidence_to_marginal | |
| 1569 <li>In dbn/update_belief_state, if the likelihood is 0 we now leave it at 0 | |
| 1570 and set the scale factor to 0, instead of adding eps and dividing. | |
| 1571 </ul> | |
| 1572 | |
| 1573 <li>8/19/99 | |
| 1574 <ul> | |
| 1575 <li>Added Ghahramani's mfa code to the examples directory, to compare with | |
| 1576 fa1, which uses BNT | |
| 1577 <li>Changed all uses of assoc to stringmatch (e.g., in | |
| 1578 examples/mk_bat_topology) | |
| 1579 </ul> | |
| 1580 | |
| 1581 <li>June 1999 | |
| 1582 <ul> | |
| 1583 <li><b>Released BNT 1.0</b> on the web. | |
| 1584 </ul> | |
| 1585 | |
| 1586 | |
| 1587 <li>August 1998 | |
| 1588 <ul> | |
| 1589 <li><b>Released BNT 0.0</b> via email. | |
| 1590 </ul> | |
| 1591 | |
| 1592 | |
| 1593 <li>October 1997 | |
| 1594 <ul> | |
| 1595 <li>First started working on Matlab version of BNT. | |
| 1596 </ul> | |
| 1597 | |
| 1598 <li>Summer 1997 | |
| 1599 <ul> | |
| 1600 <li> First started working on C++ version of BNT while working at DEC (now Compaq) CRL. | |
| 1601 </ul> | |
| 1602 | |
| 1603 <!-- | |
| 1604 <li>Fall 1996 | |
| 1605 <ul> | |
| 1606 <li>Made a C++ program that generates DBN-specific C++ code | |
| 1607 for inference using the frontier algorithm. | |
| 1608 </ul> | |
| 1609 | |
| 1610 <li>Fall 1995 | |
| 1611 <ul> | |
| 1612 <li>Arrive in Berkeley, and first learn about Bayes Nets. Start using | |
| 1613 Geoff Zweig's C++ code. | |
| 1614 </ul> | |
| 1615 --> | |
| 1616 | |
| 1617 </ul> |
