History of changes to BNT


Changes since 4 Oct 2007

- 19 Oct 07 murphyk

* BNT\CPDs\@noisyor_CPD\CPD_to_CPT.m: the second half of the file was a repeat
of the first half and has been deleted (thanks to Karl Kuschner).

* KPMtools\myismember.m should return a logical for use in "assert", so the line
p = logical(p); was added at the end; this prevents "assert" from failing on an
integer input.
(thanks to Karl Kuschner)

- 17 Oct 07 murphyk

* Updated subv2ind and ind2subv in KPMtools to Tom Minka's implementation.
His ind2subv is faster (vectorized), but I had to modify it so it
matched the behavior of my version when called with siz=[].
His subv2ind is slightly simpler than mine because he does not treat
the siz=[2 2 ... 2] case separately.
Note: there is now no need to ever use the C versions of these
functions (or any others, for that matter).

* Removed BNT/add_BNT_to_path since it is no longer needed.

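These subscript/index conversions follow the Matlab convention (1-based, first dimension varying fastest). As an illustration of the semantics only — BNT's actual code is Matlab, and the helper names below merely mirror the KPMtools ones — here is a Python/NumPy sketch, including the siz=[] special case mentioned above:

```python
import numpy as np

def ind2subv(siz, ndx):
    """Convert 1-based linear indices into rows of 1-based subscripts.

    Follows the Matlab convention: the first dimension varies fastest.
    For siz = [] the result is defined to be empty, mirroring the
    special case noted in the changelog.
    """
    siz = np.asarray(siz, dtype=int)
    ndx = np.atleast_1d(np.asarray(ndx, dtype=int))
    if siz.size == 0:
        return np.zeros((len(ndx), 0), dtype=int)
    # np.unravel_index is 0-based and C-order; use order='F' and shift by 1
    subs = np.unravel_index(ndx - 1, tuple(siz), order='F')
    return np.stack(subs, axis=1) + 1

def subv2ind(siz, subv):
    """Inverse of ind2subv: rows of 1-based subscripts -> 1-based linear indices."""
    siz = np.asarray(siz, dtype=int)
    subv = np.atleast_2d(np.asarray(subv, dtype=int))
    # multiplier for dimension d is the product of the preceding sizes
    multipliers = np.concatenate(([1], np.cumprod(siz[:-1])))
    return (subv - 1) @ multipliers + 1
```

For example, ind2subv([2, 3], 1:6) enumerates (1,1), (2,1), (1,2), (2,2), (1,3), (2,3), and subv2ind inverts that mapping.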
- 4 Oct 07 murphyk

* Moved the code from sourceforge to the UBC website; made version 1.0.4.

* @pearl_inf_engine/pearl_inf_engine line 24: the default
argument for protocol changed from [] to 'parallel'.
Also, changed private/parallel_protocol so it doesn't write to an
empty file id (Matlab 7 issue).

* Added foptions (Matlab 7 issue).

* Changed genpathKPM to exclude svn directories. Put it in the toplevel directory to
massively simplify the installation process.


Sourceforge changelog

BNT was first ported to sourceforge on 28 July 2001 by yozhik.
BNT was removed from sourceforge on 4 October 2007 by Kevin Murphy;
that version is cached as FullBNT-1.0.3.zip.
See the Changelog from sourceforge for a history of that version of the code,
which formed the basis of the branch currently on Murphy's web page.


Changes from August 1998 -- July 2004

Kevin Murphy made the following changes to his own private copy.
(Other small changes were made between July 2004 and October 2007, but were
not documented.)
These may or may not be reflected in the sourceforge version of the
code (which was independently maintained).


- 9 June 2004

- Changed tabular_CPD/learn_params back to the old syntax, to make it
compatible with gaussian_CPD/learn_params (and re-enabled
generic_CPD/learn_params).
Modified learning/learn_params.m and learning/score_family
appropriately.
(In particular, I undid the change Sonia Leach had to make to
score_family to handle this asymmetry.)
Added examples/static/gaussian2 to test this new functionality.

- Added bp_mrf2 (for generic pairwise MRFs) to
inference/static/@bp_belprop_mrf2_inf_engine. [MRFs are not
"officially" supported in BNT, so this code is just for expert
hackers.]

- Added examples/static/nodeorderExample.m to illustrate the importance
of using a topological ordering.

- Ran dos2unix on all *.c files within BNT to eliminate compiler
warnings.


- 7 June 2004

- Replaced normaliseC with normalise in HMM/fwdback, for maximum
portability (and negligible loss in speed).

- Ensured the FullBNT versions of HMM, KPMstats, etc. were as up-to-date
as the stand-alone versions.

- Changed add_BNT_to_path so it no longer uses addpath(genpath()),
which caused old versions of files to mask new ones.

- 18 February 2004

- A few small bug fixes to BNT, as posted to the Yahoo group.

- Several new functions added to KPMtools, KPMstats and GraphViz
(none needed by BNT).

- Added CVS to some of my toolboxes.

- 30 July 2003

- qian.diao fixed @mpot/set_domain_pot and @cgpot/set_domain_pot.

- Marco Grzegorczyk found, and Sonia Leach fixed, a bug in
do_removal inside learn_struct_mcmc.


- 28 July 2003

- Sebastian Luehr provided 2 minor bug fixes, to HMM/fwdback (if any(scale==0))
and FullBNT\HMM\CPDs\@hhmmQ_CPD\update_ess.m (wrong transpose).

- 8 July 2003

- Removed the buggy BNT/examples/static/MRF2/Old/mk_2D_lattice.m, which was
masking the correct graph/mk_2D_lattice.

- Fixed a bug in graph/mk_2D_lattice_slow in the non-wrap-around case
(line 78).


- 2 July 2003

- Sped up normalize(., 1) in KPMtools by avoiding a general repmat.

- Added assign_cols and marginalize_table to KPMtools.

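The normalize(., 1) speedup amounts to dividing by the column sums without first materializing a tiled copy of them (the repmat approach). A rough sketch of the idea in Python/NumPy terms, where broadcasting plays the role of the avoided repmat (this is an illustration, not BNT's Matlab code):

```python
import numpy as np

def normalize_cols(M):
    """Normalize each column of M to sum to 1, guarding zero columns.

    Rather than tiling the 1-by-n row of column sums into a full
    matrix before dividing (the repmat approach), divide by the row
    directly and let broadcasting expand it implicitly.
    """
    s = M.sum(axis=0, keepdims=True)
    s = np.where(s == 0, 1.0, s)  # leave all-zero columns untouched
    return M / s
```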
- 29 May 2003

- Modified KPMstats/mixgauss_Mstep so it repmats Sigma in the tied
covariance case (bug found by galt@media.mit.edu).

- Bob Welch found a bug in gaussian_CPD/maximize_params in the way
cpsz was computed.

- Added KPMstats/mixgauss_em, because my code is easier to
understand/modify than netlab's (at least for me!).

- Modified BNT/examples/dynamic/viterbi1 to call multinomial_prob
instead of mk_dhmm_obs_lik.

- Moved the Parzen window and partitioned models code to KPMstats.

- Rainer Deventer fixed some bugs in his scgpot code, as follows:

1. complement_pot.m:
Problems occurred for probabilities equal to zero, resulting in a
division-by-zero error.

2. normalize_pot.m:
This function is used during the calculation of the log-likelihood.
For a probability of zero, a "log of zero" warning occurs. I have not
really fixed the bug; as a workaround, I suggest calculating the
likelihood based on realmin (the smallest real number) instead of
zero.

3. recursive_combine_pots:
At the beginning of the function there was no test for the trivial case,
which defines the combination of two potentials as equal to the direct
combination. The result could be an infinite recursion, leading to
a stack overflow in Matlab.


- 11 May 2003

- Fixed a bug in gaussian_CPD/maximize_params so it is compatible
with the new clg_Mstep routine.

- Modified KPMstats/cwr_em to handle the single-cluster case
separately.

- Fixed a bug in netlab/gmminit.

- Added hash tables to KPMtools.


- 4 May 2003

- Renamed many functions in KPMstats so the name of the
distribution/model type comes first, e.g.,
Mstep_clg -> clg_Mstep,
Mstep_cond_gauss -> mixgauss_Mstep.
Also, renamed the eval_pdf_xxx functions to xxx_prob, e.g.,
eval_pdf_cond_mixgauss -> mixgauss_prob.
This is simpler and shorter.

- Renamed many functions in the HMM toolbox so the name of the
distribution/model type comes first, e.g.,
log_lik_mhmm -> mhmm_logprob.
mk_arhmm_obs_lik has finally been re-implemented in terms of clg_prob
and mixgauss_prob (for slice 1).
Removed the Demos directory, and put its contents in the main directory.
This code is not backwards compatible.

- Removed some of the my_xxx functions from KPMstats (these were
mostly copies of functions from the Mathworks stats toolbox).

- Modified BNT to take into account the changes to the KPMstats and
HMM toolboxes.

- Fixed KPMstats/Mstep_clg (now called clg_Mstep) for the spherical Gaussian case.
(The trace was wrongly parenthesised, and I used YY instead of YTY.
The spherical case now gives the same result as the full case
for cwr_demo.)
Also, mixgauss_Mstep now adds 0.01 to the ML estimate of Sigma
to act as a regularizer (it used to add 0.01 to E[YY'], but this was
ignored in the spherical case).

- Added cluster-weighted regression to KPMstats.

- Added KPMtools/strmatch_substr.


- 28 Mar 03

- Added mc_stat_distrib and eval_pdf_cond_prod_parzen to KPMstats.

- Fixed a GraphViz/arrow.m incompatibility with Matlab 6.5
(replaced all NaN's with 0).
Modified GraphViz/graph_to_dot so it also works on Windows.

- Removed dag_to_jtree and added graph_to_jtree to the graph
toolbox; the latter expects an undirected graph as input.

- Added triangulate_2Dlattice_demo.m to graph.

- Rainer Deventer fixed the stable conditional Gaussian potential
classes (scgpot and scgcpot) and inference engine
(stab_cond_gauss_inf_engine).

- Rainer Deventer added (stable) higher-order Markov models (see
inference/dynamic/@stable_ho_inf_engine).


- 14 Feb 03

- Simplified learning/learn_params so it no longer returns the BIC
score. Also, simplified @tabular_CPD/learn_params so it only takes
local evidence.
Added learn_params_dbn, which does ML estimation of fully observed
DBNs.

- Vectorized KPMstats/eval_pdf_cond_mixgauss for the tied-Sigma
case (much faster!).
Also, it now works in the log domain to prevent underflow.
eval_pdf_mixgauss now calls eval_pdf_cond_mixgauss and inherits these benefits.

- add_BNT_to_path now calls genpath with 2 arguments if using
Matlab version 5.

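Working in the log domain, as eval_pdf_cond_mixgauss now does, avoids underflow when mixture densities are tiny: sums of probabilities are replaced by the standard log-sum-exp trick. A generic sketch in Python (an illustration of the technique, not BNT's Matlab code):

```python
import math

def logsumexp(log_vals):
    """Compute log(sum(exp(v) for v in log_vals)) stably.

    Factoring out the maximum keeps at least one exp() at 1.0, so the
    sum never underflows to zero even when every log_val is very negative.
    """
    m = max(log_vals)
    if m == -math.inf:  # all terms are exactly zero
        return -math.inf
    return m + math.log(sum(math.exp(v - m) for v in log_vals))
```

Naively, exp(-1000) + exp(-1001) underflows to 0.0 in double precision, whereas logsumexp([-1000, -1001]) returns a finite log-probability near -999.69.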
- 30 Jan 03

- Vectorized KPMstats/eval_pdf_cond_mixgauss for the scalar-Sigma
case (much faster!).

- Renamed mk_dotfile_from_hmm to draw_hmm and moved it to the
GraphViz library.

- Rewrote @gaussian_CPD/maximize_params.m so it calls
KPMstats/Mstep_clg.
This fixes a bug when using clamped means (found by Rainer Deventer
and Victor Eruhimov)
and a bug when using a Wishart prior (no gamma term in the denominator).
It is also easier to read.
I rewrote the technical report re-deriving all the equations in a
clearer notation, making the solution to the bugs more obvious.
(See www.ai.mit.edu/~murphyk/Papers/learncg.pdf)
Modified Mstep_cond_gauss to handle priors.

- Fixed a bug reported by Ramgopal Mettu in which add_BNT_to_path
called genpath with only 1 argument, whereas version 5 requires 2.

- Fixed installC and uninstallC to search in FullBNT/BNT.


- 24 Jan 03

- Major simplification of the HMM code.
The API is not backwards compatible.
No new functionality has been added, however.
There is now only one fwdback function, instead of 7;
different behaviors are controlled through optional arguments.
I renamed 'evaluate observation likelihood' (local evidence)
to 'evaluate conditional pdf', since this is more general.
I.e., renamed
mk_dhmm_obs_lik to eval_pdf_cond_multinomial,
mk_ghmm_obs_lik to eval_pdf_cond_gauss,
mk_mhmm_obs_lik to eval_pdf_cond_mog.
These functions have been moved to KPMstats,
so they can be used by other toolboxes.
ghmm's have been eliminated, since they are just a special case of
mhmm's with M=1 mixture component.
mixgauss HMMs can now handle a different number of
mixture components per state.
init_mhmm has been eliminated, and replaced with init_cond_mixgauss
(in KPMstats) and mk_leftright/rightleft_transmat.
learn_dhmm can no longer handle inputs (although this is easy to add back).


- 20 Jan 03

- Added arrow.m to the GraphViz directory, and commented out line 922,
in response to a bug report.

- 18 Jan 03

- Major restructuring of the BNT file structure:
all code that is not specific to Bayes nets has been removed;
these packages must be downloaded separately (or just download FullBNT).
This makes it easier to ensure different toolboxes are consistent.
misc has been slimmed down and renamed KPMtools, so it can be shared by other toolboxes,
such as HMM and Kalman; some of the code has been moved to BNT/general.
The Graphics directory has been slimmed down and renamed GraphViz.
The graph directory now has no dependence on BNT (dag_to_jtree has
been renamed graph_to_jtree and has a new API).
netlab2 no longer contains any netlab files, only netlab extensions.
None of the functionality has changed.


- 11 Jan 03

- jtree_dbn_inf_engine can now support soft evidence.

- Rewrote graph/dfs to make it clearer.
The return arguments have changed, as has mk_rooted_tree.
The acyclicity check for large undirected graphs can cause a stack overflow.
It turns out that this is not a bug, but happens because Matlab's default
stack depth bound is very low.

- Renamed examples/dynamic/filter2 to filter_test1, so it does not
conflict with the filter2 function in the image processing toolbox.

- Ran test_BNT on various versions of Matlab to check compatibility.
On Matlab 6.5 (R13), elapsed time = 211s, cpu time = 204s.
On Matlab 6.1 (R12), elapsed time = 173s, cpu time = 164s.
On Matlab 5.3 (R11), elapsed time = 116s, cpu time = 114s.
So Matlab is apparently getting slower with time!!
(All results were on a Linux PIII machine.)


- 14 Nov 02

- Removed all ndx inference routines, since they are only
marginally faster on toy problems,
and are slower on large problems due to having to store and look up
the indices (which causes cache misses).
In particular, I removed jtree_ndx_inf_eng and jtree_ndx_dbn_inf_eng, all the *ndx*
routines from potentials/Tables, and all the UID stuff from
add_BNT_to_path,
thus simplifying the code.
This required fixing hmm_(2TBN)_inf_engine/marginal_nodes\family,
and updating installC.

- Removed jtree_C_inf_engine and jtree_C_dbn_inf_engine.
The former is basically the same as using jtree_inf_engine with
multiply_by_table.c and marginalize_table.c.
The latter benefited slightly by assuming potentials were tables
(arrays, not objects), but these negligible savings don't justify the
complexity and code duplication.

- Removed stab_cond_gauss_inf_engine and
scg_unrolled_dbn_inf_engine,
written by shan.huang@intel.com, since the code was buggy.

- Removed potential_engine, which was only experimental anyway.


- 13 Nov 02

- Released version 5.
The previous version was released on 7/28/02.

- Moved the code and documentation to MIT.

- Added repmat.c from Thomas Minka's lightspeed library.
Modified it so it can return an empty matrix.

- Tomas Kocka fixed a bug in the BDeu option for tabular_CPD,
and contributed graph/dag_to_eg, to convert DAGs to essential graphs.

- Modified the definition of hhmmQ_CPD, so that Qps can now accept
parents in either the current or previous slice.

- Added the hhmm2Q_CPD class, which is simpler than hhmmQ (no embedded
sub-CPDs, etc.), and which allows the conditioning parents, Qps, to
come before (in the topological ordering) the F or Q(t-1) nodes.
See BNT/examples/dynamic/HHMM/Map/mk_map_hhmm for an example.


- 7/28/02

- Changed graph/best_first_elim_order from min-fill to min-weight.

- Ernest Chan fixed a bug in Kalman/sample_lds (G{i} becomes G{m} in
line 61).

- Tal Blum fixed a bug in HMM/init_ghmm (Q
becomes K, the number of states).

- Fixed jtree_2tbn_inf_engine/set_fields so it correctly sets the
maximize flag to 1 even in subengines.

- Gary Bradski made a simple mod to the PC structure-learning algorithm so you can pass it an
adjacency matrix as a constraint. Also, CovMat.m reads a file and
produces a covariance matrix.

- KNOWN BUG in CPDs/@hhmmQ_CPD/update_ess.m at line 72, caused by
examples/dynamic/HHMM/Square/learn_square_hhmm_cts.m at line 57.

- The old version is available from www.cs.berkeley.edu/~murphyk/BNT.24june02.zip


- 6/24/02

- Renamed dag_to_dot as graph_to_dot and added support for
undirected graphs.

- Changed the syntax for the HHMM CPD constructors: there is no need to specify d/D
anymore, so they can be used for more complex models.

- Removed the redundant first argument to mk_isolated_tabular_CPD.


- 6/19/02

- Fixed the most probable explanation code.
Replaced calc_mpe with find_mpe, which is now a method of certain
inference engines, e.g., jtree, belprop.
calc_mpe_global has become the find_mpe method of global_joint.
calc_mpe_bucket has become the find_mpe method of var_elim.
calc_mpe_dbn has become the find_mpe method of smoother.
These routines now correctly find the jointly most probable
explanation, instead of the marginally most probable assignments.
See examples/static/mpe1\mpe2 and examples/dynamic/viterbi1
for examples.
Removed the maximize flag from the constructor and enter_evidence
methods, since this no longer needs to be specified by the user.

- Rainer Deventer fixed a bug in
CPDs/@gaussian_CPD/update_ess.m:
now, hidden_cps = any(hidden_bitv(cps)), whereas it used to be
hidden_cps = all(hidden_bitv(cps)).


- 5/29/02

- CPDs/@gaussian_CPD/update_ess.m: fixed WX, WXX, WXY (thanks to Rainer Deventer and
Yohsuke Minowa for spotting the bug). Does the C version work??

- potentials/@cpot/mpot_to_cpot: fixed the K==0 case (thanks to Rainer Deventer).

- CPDs/@gaussian_CPD/log_prob_node now accepts non-cell-array data
on self (thanks to rishi for catching this).


- 5/19/02

- Wei Hu made the following changes.

- Memory leak repair:
a. distribute_evidence.c in the static/@jtree_C directory
b. distribute_evidence.c in the static/@jtree_ndx directory
c. marg_table.c in the Tables directory

- Added @jtree_ndx_2TBN_inf_engine in the inference/online directory.

- Added @jtree_sparse_inf_engine in the inference/static directory.

- Added @jtree_sparse_2TBN_inf_engine in the inference/online directory.

- Modified tabular_CPD.m in the CPDs/@tabular_CPD directory, for sparse support.

- In the @discrete_CPD directory:
a. modified convert_to_pot.m, for sparse support
b. added convert_to_sparse_table.c

- In the potentials/@dpot directory:
a. removed divide_by_pot.c and multiply_by_pot.c
b. added divide_by_pot.m and multiply_by_pot.m
c. modified dpot.m, marginalize_pot.m and normalize_pot.m

- In the potentials/Tables directory:
a. modified mk_ndxB.c (for speedup)
b. added mult_by_table.m,
divide_by_table.m,
divide_by_table.c,
marg_sparse_table.c,
mult_by_sparse_table.c,
divide_by_sparse_table.c.

- Modified normalise.c in the misc directory, for sparse support.

- Also, added discrete2, discrete3, filter2 and filter3 as test applications in test_BNT.m,
and modified installC.m.

- Kevin made the following changes related to strong junction
trees:

- jtree_inf_engine line 75:
engine.root_clq = length(engine.cliques);
the last clique is guaranteed to be a strong root.

- dag_to_jtree line 38: [jtree, root, B, w] =
cliques_to_jtree(cliques, ns);
never call cliques_to_strong_jtree.

- strong_elim_order: use Ilya's code instead of topological sorting.

- Kevin fixed CPDs/@generic_CPD/learn_params, so it always passes
the correct hidden_bitv field to update_params.


- 5/8/02

- Jerod Weinman helped fix some bugs in HHMMQ_CPD/maximize_params.

- Removed broken online inference from hmm_inf_engine.
It has been replaced by filter_inf_engine, which can take hmm_inf_engine
as an argument.

- Changed the graph visualization function names:
'draw_layout' is now 'draw_graph',
'draw_layout_dbn' is now 'draw_dbn',
'plotgraph' is now 'dag_to_dot',
'plothmm' is now 'hmm_to_dot';
added 'dbn_to_dot';
'mkdot' no longer exists: its functionality has been subsumed by dag_to_dot.
The dot functions now all take optional args in string/value format.


- 4/1/02

- Added online inference classes.
See BNT/inference/online and BNT/examples/dynamic/filter1.
This is work in progress.

- Renamed cmp_inference to cmp_inference_dbn, and made its
interface and behavior more similar to cmp_inference_static.

- Added the field rep_of_eclass to bnet and dbn, to simplify
parameter tying (see ~murphyk/Bayes/param_tieing.html).

- Added gmux_CPD (Gaussian multiplexers).
See BNT/examples/dynamic/SLAM/skf_data_assoc_gmux for an example.

- Modified the forwards sampling routines.
general/sample_dbn and sample_bnet now take optional arguments as
strings, and can sample with pre-specified evidence.
sample_bnet can only generate a single sample, and it is always a cell
array.
sample_node can only generate a single sample, and it is always a
scalar or vector.
This eliminates the false impression that the function was
ever vectorized (which was only true for tabular_CPDs).
(Calling sample_bnet inside a for-loop is unlikely to be a bottleneck.)

- Updated usage.html's description of CPDs (gmux) and inference
(added gibbs_sampling and modified the description of pearl).

- Modified BNT/Kalman/kalman_filter\smoother so they now optionally
take an observed input (control) sequence.
Also, optional arguments are now passed as strings.

- Removed BNT/examples/static/uci_data to save space.


- 3/14/02

- pearl_inf_engine now works for (vector) Gaussian nodes, as well
as discrete. compute_pi has been renamed CPD_to_pi. compute_lambda_msg
has been renamed CPD_to_lambda_msg. These are now implemented for
the discrete_CPD class instead of tabular_CPD. noisyor and
Gaussian have their own private implementations.
Created the examples/static/Belprop subdirectory.

- Added examples/dynamic/HHMM/Motif.

- Added Matt Brand's entropic prior code.

- cmp_inference_static has changed. It no longer returns err. It
can check for convergence. It can accept 'observed'.


- 3/4/02

- Fixed the HHMM code. Now BNT/examples/dynamic/HHMM/mk_abcd_hhmm
implements the example in the NIPS paper. See also
Square/sample_square_hhmm_discrete and other files.

- Included Bhaskara Marthi's gibbs_sampling_inf_engine. Currently
this only works if all CPDs are tabular and if you call installC.

- Modified Kalman/tracking_demo so it calls plotgauss2d instead of
gaussplot.

- Included Sonia Leach's speedup of mk_rnd_dag.
My version created all N-choose-K subsets, and then picked among them. Sonia's
reorders the possible parents randomly and chooses
the first k. This avoids having to enumerate the large number of
possible subsets before picking one.
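Sonia's trick is simply uniform sampling of a k-subset: permuting the candidate parents and keeping the first k picks one of the C(n, k) subsets uniformly at random, without ever listing them. A small Python sketch of the idea (the function name is hypothetical; mk_rnd_dag itself is Matlab):

```python
import random

def random_parent_set(candidates, k, rng=None):
    """Pick k parents uniformly at random without enumerating all subsets.

    Shuffling and truncating is equivalent to drawing one of the
    C(n, k) subsets uniformly, but takes O(n) time and memory instead
    of generating every subset first.
    """
    rng = rng or random.Random()
    perm = list(candidates)
    rng.shuffle(perm)
    return sorted(perm[:k])
```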

- Eliminated BNT/inference/static/Old, which contained some old
.mexglx files that wasted space.


- 2/15/02

- Removed the netlab directory, since most of it was not being
used, and it took up too much space (the goal is to have BNT.zip be
less than 1.4MB, so it fits on a floppy).
The required files have been copied into netlab2.

- 2/14/02

- Shan Huang fixed most (all?) of the bugs in his stable CG code.
scg1-3 now work, but scg_3node and scg_unstable give different
behavior than that reported in the Cowell book.

- I changed gaussplot so it plots an ellipse representing the
eigenvectors of the covariance matrix, rather than numerically
evaluating the density and using a contour plot; this
is much faster and gives better pictures. The new function is
called plotgauss2d in BNT/Graphics.
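The ellipse approach needs only an eigendecomposition of the 2x2 covariance: the eigenvectors give the ellipse axes and the square roots of the eigenvalues give their lengths, so no density grid or contouring is required. A sketch of that computation in Python/NumPy (an illustration of the technique, not the plotgauss2d Matlab code):

```python
import numpy as np

def gauss_ellipse(mu, Sigma, n_points=100, n_std=2.0):
    """Return points on the n_std contour ellipse of a 2D Gaussian.

    The ellipse axes are the eigenvectors of Sigma, scaled by
    n_std * sqrt(eigenvalue); a unit circle is mapped through them,
    avoiding any density evaluation on a grid.
    """
    vals, vecs = np.linalg.eigh(np.asarray(Sigma, dtype=float))
    t = np.linspace(0, 2 * np.pi, n_points)
    circle = np.stack([np.cos(t), np.sin(t)])          # unit circle
    pts = vecs @ (n_std * np.sqrt(vals)[:, None] * circle)
    return np.asarray(mu, dtype=float)[:, None] + pts  # 2 x n_points
```

For an identity covariance this reduces to a circle of radius n_std around mu, which is a quick sanity check.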

- Joni Alon fixed some small bugs:
mk_dhmm_obs_lik called forwards with the wrong args, and
add_BNT_to_path should quote filenames with spaces.

- I added BNT/stats2/myunidrnd, which is called by learn_struct_mcmc.

- I changed BNT/potentials/@dpot/multiply_by_dpot so it now says
Tbig.T(:) = Tbig.T(:) .* Ts(:);


- 2/6/02

- Added hierarchical HMMs. See BNT/examples/dynamic/HHMM and
CPDs/@hhmmQ_CPD and @hhmmF_CPD.

- sample_dbn can now sample until a certain condition is true.

- Sonia Leach fixed learn_struct_mcmc and changed mk_nbrs_of_digraph
so it only returns DAGs.


- 2/4/02

- Wei Hu fixed a bug in
jtree_ndx_inf_engine/collect\distribute_evidence.c, which failed when
maximize=1.

- I fixed various bugs to do with conditional Gaussians,
so mixexp3 now works (thanks to Gerry Fung
for spotting the error). Specifically:
Changed softmax_CPD/convert_to_pot so it now puts cts nodes in cdom, and no longer inherits
this function from discrete_CPD.
Changed root_CPD/convert_to_pot so it puts self in cdom.


- 1/31/02

- Fixed log_lik_mhmm (thanks to ling chen
for spotting the typo).

- Now many scripts in examples/static call cmp_inference_static.
Also, the SCG scripts have been simplified (but still don't work!).

- belprop and belprop_fg enter_evidence now return [engine, ll,
niter], with ll=0, so the order of the arguments is compatible with other engines.

- Ensured that all enter_evidence methods support optional
arguments such as 'maximize', even if they ignore them.

- Added Wei Hu's potentials/Tables/rep_mult.c, which is used to
totally eliminate all repmats from gaussian_CPD/update_ess.


- 1/30/02

- update_ess now takes hidden_bitv instead of hidden_self and
hidden_ps. This allows gaussian_CPD to distinguish hidden discrete and
cts parents. Now learn_params_em, as well as learn_params_dbn_em,
passes in this info, for speed.

- gaussian_CPD update_ess is now vectorized for any case where all
the continuous nodes are observed (e.g., Gaussian HMMs, AR-HMMs).

- mk_dbn now automatically detects autoregressive nodes.

- hmm_inf_engine now uses indexes in marginal_nodes/family for
speed. marginal_nodes can now only handle single nodes.
(SDndx is hard-coded, to avoid the overhead of using marg_ndx,
which is slow because of the case and global statements.)

- add_ev_to_dmarginal now retains the domain field.

- Wei Hu wrote potentials/Tables/repmat_and_mult.c, which is used to
avoid some of the repmats in gaussian_CPD/update_ess.

- installC no longer sets the global USEC, since USEC is set to 0
by add_BNT_to_path, even if the C files have already been compiled
in a previous session. Instead, gaussian_CPD checks to
see if repmat_and_mult exists, and (bat1, chmm1, water1, water2)
check to see if jtree_C_inf_engine/collect_evidence exists.
Note that checking if a file exists is slow, so we do the check
inside the gaussian_CPD constructor, not inside update_ess.

- uninstallC now deletes both .mex and .dll files, just in case I
accidentally ship a .zip file with binaries. It also deletes mex
files from jtree_C_inf_engine.

- Now marginal_family for both jtree_limid_inf_engine and
global_joint_inf_engine returns a marginal structure and
potential, as required by solve_limid.
Other engines (e.g., jtree_ndx, hmm) are not required to return a potential.


- 1/22/02

- Added an optional argument to mk_bnet and mk_dbn which lets you
add names to nodes. This uses the new assoc_array class.

- Added Yimin Zhang's (unfinished) classification/regression tree
code to CPDs/tree_CPD.


- 1/14/02

- Incorporated some of Shan Huang's (still broken) stable CG code.


wolffd@0: - 1/9/02
wolffd@0:
wolffd@0: - Yimin Zhang vectorized @discrete_CPD/prob_node, which speeds up
wolffd@0: structure learning considerably. I fixed this to handle softmax CPDs.
wolffd@0:
wolffd@0:
- Shan Huang changed the stable conditional Gaussian code to handle
wolffd@0: vector-valued nodes, but it is buggy.
wolffd@0:
wolffd@0:
- I vectorized @gaussian_CPD/update_ess for a special case.
wolffd@0:
wolffd@0:
- Removed denom=min(1, ... Z) from gaussian_CPD/maximize_params
wolffd@0: (added to cope with negative temperature for entropic prior), which
wolffd@0: gives wrong results on mhmm1.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 1/7/02
wolffd@0:
wolffd@0:
wolffd@0: - Removed the 'xo' typo from mk_qmr_bnet.
wolffd@0:
wolffd@0:
- convert_dbn_CPDs_to_tables has been vectorized; it is now
wolffd@0: substantially faster to compute the conditional likelihood for long sequences.
wolffd@0:
wolffd@0:
- Simplified constructors for tabular_CPD and gaussian_CPD, so they
wolffd@0: now both only take the form CPD(bnet, i, ...) for named arguments -
wolffd@0: the CPD('self', i, ...) format is gone. Modified mk_fgraph_given_ev
wolffd@0: to use mk_isolated_tabular_CPD instead.
wolffd@0:
wolffd@0:
- Added entropic prior to tabular and Gaussian nodes.
wolffd@0: For tabular_CPD, changed name of arguments to the constructor to
wolffd@0: distinguish Dirichlet and entropic priors. In particular,
wolffd@0: tabular_CPD(bnet, i, 'prior', 2) is now
wolffd@0: tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_weight', 2).
wolffd@0:
wolffd@0:
- Added deterministic annealing to learn_params_dbn_em for use with
wolffd@0: entropic priors. The old format learn(engine, cases, max_iter) has
wolffd@0: been replaced by learn(engine, cases, 'max_iter', max_iter).
wolffd@0:
wolffd@0:
- Changed examples/dynamic/bat1 and kjaerulff1, since default
wolffd@0: equivalence classes have changed from untied to tied.
wolffd@0:
wolffd@0:
wolffd@0: - 12/30/01
wolffd@0:
wolffd@0: - DBN default equivalence classes for slice 2 has changed, so that
wolffd@0: now parameters are tied for nodes with 'equivalent' parents in slices
wolffd@0: 1 and 2 (e.g., observed leaf nodes). This essentially makes passing in
wolffd@0: the eclass arguments redundant (hooray!).
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 12/20/01
wolffd@0:
wolffd@0: - Released version 4.
wolffd@0: Version 4 is considered a major new release
wolffd@0: since it is not completely backwards compatible with V3.
wolffd@0: Observed nodes are now specified when the bnet/dbn is created,
wolffd@0: not when the engine is created. This changes the interface to many of
wolffd@0: the engines, making the code no longer backwards compatible.
wolffd@0: Hence support for non-named optional arguments (BNT2 style) has also
wolffd@0: been removed; hence mk_dbn etc. requires arguments to be passed by name.
wolffd@0:
wolffd@0:
- Ilya Shpitser's C code for triangulation now compiles under
wolffd@0: Windows as well as Unix, thanks to Wei Hu.
wolffd@0:
wolffd@0:
- All the ndx engines have been combined, and now take an optional
wolffd@0: argument specifying what kind of index to use.
wolffd@0:
wolffd@0:
- learn_params_dbn_em is now more efficient:
wolffd@0: @tabular_CPD/update_ess for nodes whose families
wolffd@0: are hidden does not need to call add_evidence_to_dmarginal, which
wolffd@0: is slow.
wolffd@0:
wolffd@0:
- Wei Hu fixed bug in jtree_ndxD, so now the matlab and C versions
wolffd@0: both work.
wolffd@0:
wolffd@0:
- dhmm_inf_engine replaces hmm_inf_engine, since the former can
wolffd@0: handle any kind of topology and is slightly more efficient. dhmm is
wolffd@0: extended to handle Gaussian, as well as discrete,
wolffd@0: observed nodes. The new hmm_inf_engine no longer supports online
wolffd@0: inference (which was broken anyway).
wolffd@0:
wolffd@0:
- Added autoregressive HMM special case to hmm_inf_engine for
wolffd@0: speed.
wolffd@0:
wolffd@0:
- jtree_ndxSD_dbn_inf_engine now computes likelihood of the
wolffd@0: evidence in a vectorized manner, where possible, just like
wolffd@0: hmm_inf_engine.
wolffd@0:
wolffd@0:
- Added mk_limid, and hence simplified mk_bnet and mk_dbn.
wolffd@0:
wolffd@0:
wolffd@0:
- Gaussian_CPD now uses 0.01*I prior on covariance matrix by
wolffd@0: default. To do ML estimation, set 'cov_prior_weight' to 0.
wolffd@0:
wolffd@0:
- Gaussian_CPD and tabular_CPD
wolffd@0: optional binary arguments are now set using 0/1 rather than 'no'/'yes'.
wolffd@0:
wolffd@0:
- Removed Shan Huang's PDAG and decomposable graph code, which will
wolffd@0: be put in a separate structure learning library.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 12/11/01
wolffd@0:
wolffd@0: - Wei Hu fixed jtree_ndx*_dbn_inf_engine and marg_table.c.
wolffd@0:
wolffd@0:
- Shan Huang contributed his implementation of stable conditional
wolffd@0: Gaussian code (Lauritzen 1999), and methods to search through the
wolffd@0: space of PDAGs (Markov equivalent DAGs) and undirected decomposable
wolffd@0: graphs. The latter is still under development.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 12/10/01
wolffd@0:
wolffd@0: - Included Wei Hu's new versions of the ndx* routines, which use
wolffd@0: integers instead of doubles. The new versions are about 5 times faster
wolffd@0: in C. In general, ndxSD is the best choice.
wolffd@0:
wolffd@0:
- Fixed misc/add_ev_to_dmarginal so it works with the ndx routines
wolffd@0: in bat1.
wolffd@0:
wolffd@0:
- Added calc_mpe_dbn to do Viterbi parsing.
wolffd@0:
wolffd@0:
- Updated dhmm_inf_engine so it computes marginals.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 11/23/01
wolffd@0:
wolffd@0: - learn_params now does MAP estimation (i.e., uses Dirichlet prior,
wolffd@0: if defined). Thanks to Simon Keizer skeizer@cs.utwente.nl for spotting
wolffd@0: this.
wolffd@0:
- Changed plotgraph so it calls ghostview with the output of dotty,
wolffd@0: instead of converting from .ps to .tif. The resulting image is much
wolffd@0: easier to read.
wolffd@0:
- Fixed cgpot/multiply_by_pots.m.
wolffd@0:
- Wei Hu fixed ind2subv.c.
wolffd@0:
- Changed arguments to compute_joint_pot.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 11/1/01
wolffd@0:
wolffd@0: - Changed sparse to dense in @dpot/multiply_pots, because sparse
wolffd@0: arrays apparently cause a bug in the NT version of Matlab.
wolffd@0:
wolffd@0:
- Fixed the bug in gaussian_CPD/log_prob_node.m which
wolffd@0: incorrectly called the vectorized gaussian_prob with different means
wolffd@0: when there were continuous parents and more than one case.
wolffd@0: (Thanks to Dave Andre for finding this.)
wolffd@0:
wolffd@0:
- Fixed the bug in root_CPD/convert_to_pot which did not check for
wolffd@0: pot_type='g'.
wolffd@0: (Thanks to Dave Andre for finding this.)
wolffd@0:
wolffd@0:
- Changed calc_mpe and calc_mpe_global so they now return a cell array.
wolffd@0:
wolffd@0:
- Combine pearl and loopy_pearl into a single inference engine
wolffd@0: called 'pearl_inf_engine', which now takes optional arguments passed
wolffd@0: in using the name/value pair syntax.
wolffd@0: marginal_nodes/family now takes the optional add_ev argument (same as
wolffd@0: jtree), which is the opposite of the previous shrink argument.
wolffd@0:
wolffd@0:
- Created pearl_unrolled_dbn_inf_engine and "resurrected"
wolffd@0: pearl_dbn_inf_engine in a simplified (but still broken!) form.
wolffd@0:
wolffd@0:
- Wei Hu fixed the bug in ind2subv.c, so now ndxSD works.
wolffd@0: He also made C versions of ndxSD and ndxB, and added (the unfinished) ndxD.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/20/01
wolffd@0:
wolffd@0:
wolffd@0: - Removed the use_ndx option from jtree_inf,
wolffd@0: and created 2 new inference engines: jtree_ndxSD_inf_engine and
wolffd@0: jtree_ndxB_inf_engine.
wolffd@0: The former stores 2 sets of indices for the small and difference
wolffd@0: domains; the latter stores 1 set of indices for the big domain.
wolffd@0: In Matlab, the ndxB version is often significantly faster than ndxSD
wolffd@0: and regular jtree, except when the clique size is large.
wolffd@0: When compiled to C, the difference between ndxB and ndxSD (in terms of
wolffd@0: speed) vanishes; again, both are faster than compiled jtree, except
wolffd@0: when the clique size is large.
wolffd@0: Note: ndxSD currently has a bug in it, so it gives the wrong results!
wolffd@0: (The DBN analogs are jtree_dbn_ndxSD_inf_engine and
wolffd@0: jtree_dbn_ndxB_inf_engine.)
wolffd@0:
wolffd@0:
- Removed duplicate files from the HMM and Kalman subdirectories.
wolffd@0: e.g., normalise is now only in BNT/misc, so when compiled to C, it
wolffd@0: masks the unique copy of the Matlab version.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/17/01
wolffd@0:
wolffd@0: - Fixed bugs introduced on 10/15:
wolffd@0: Renamed extract_gaussian_CPD_params_given_ev_on_dps.m to
wolffd@0: gaussian_CPD_params_given_dps.m since Matlab can't cope with such long
wolffd@0: names (this caused cg1 to fail). Fixed bug in
wolffd@0: gaussian_CPD/convert_to_pot, which now calls convert_to_table in the
wolffd@0: discrete case.
wolffd@0:
wolffd@0:
- Fixed bug in bk_inf_engine/marginal_nodes.
wolffd@0: The test 'if nodes < ss' is now
wolffd@0: 'if nodes <= ss' (bug fix due to Stephen seg_ma@hotmail.com)
wolffd@0:
wolffd@0:
- Simplified uninstallC.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/15/01
wolffd@0:
wolffd@0:
wolffd@0: - Added use_ndx option to jtree_inf and jtree_dbn_inf.
wolffd@0: This pre-computes indices for multiplying, dividing and marginalizing
wolffd@0: discrete potentials.
wolffd@0: This is like the old jtree_fast_inf_engine, but we use an extra level
wolffd@0: of indirection to reduce the number of indices needed (see
wolffd@0: uid_generator object).
wolffd@0: Sometimes this is faster than the original way...
wolffd@0: This is work in progress.
wolffd@0:
wolffd@0:
- The constructor for dpot no longer calls myreshape, which is very
wolffd@0: slow.
wolffd@0: But new dpots still must call myones.
wolffd@0: Hence discrete potentials are only sometimes 1D vectors (but should
wolffd@0: always be thought of as multi-D arrays). This is work in progress.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/6/01
wolffd@0:
wolffd@0: - Fixed jtree_dbn_inf_engine, and added kjaerulff1 to test this.
wolffd@0:
- Added option to jtree_inf_engine/marginal_nodes to return "full
wolffd@0: sized" marginals, even on observed nodes.
wolffd@0:
- Clustered BK in examples/dynamic/bat1 seems to be broken,
wolffd@0: so it has been commented out.
wolffd@0: BK will be re-implemented on top of jtree_dbn, which should be much more
wolffd@0: efficient.
wolffd@0:
wolffd@0:
wolffd@0: - 9/25/01
wolffd@0:
wolffd@0: - jtree_dbn_inf_engine is now more efficient than calling BK with
wolffd@0: clusters = exact, since it only uses the interface nodes, instead of
wolffd@0: all of them, to maintain the belief state.
wolffd@0:
- Uninstalled the broken C version of strong_elim_order.
wolffd@0:
- Changed order of arguments to unroll_dbn_topology, so that intra1
wolffd@0: is no longer required.
wolffd@0:
- Eliminated jtree_onepass, which can be simulated by calling
wolffd@0: collect_evidence on jtree.
wolffd@0:
- online1 is no longer in the test_BNT suite, since there is some
wolffd@0: problem with online prediction with mixtures of Gaussians using BK.
wolffd@0: This functionality is no longer supported, since doing it properly is
wolffd@0: too much work.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 9/7/01
wolffd@0:
wolffd@0: - Added Ilya Shpitser's C triangulation code (43x faster!).
wolffd@0: Currently this only compiles under linux; windows support is being added.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 9/5/01
wolffd@0:
wolffd@0: - Fixed typo in CPDs/@tabular_kernel/convert_to_table (thanks,
wolffd@0: Philippe!)
wolffd@0:
- Fixed problems with clamping nodes in tabular_CPD, learn_params,
wolffd@0: learn_params_tabular, and bayes_update_params. See
wolffd@0: BNT/examples/static/learn1 for a demo.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 9/3/01
wolffd@0:
wolffd@0: - Fixed typo on line 87 of gaussian_CPD which caused error in cg1.m
wolffd@0:
- Installed Wei Hu's latest version of jtree_C_inf_engine, which
wolffd@0: can now compute marginals on any clique/cluster.
wolffd@0:
- Added Yair Weiss's code to compute the Bethe free energy
wolffd@0: approximation to the log likelihood in loopy_pearl (still need to add
wolffd@0: this to belprop). The return arguments are now: engine, loglik and
wolffd@0: niter, which is different than before.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 8/30/01
wolffd@0:
wolffd@0: - Fixed bug in BNT/examples/static/id1 which passed hard-coded
wolffd@0: directory name to belprop_inf_engine.
wolffd@0:
wolffd@0:
- Changed tabular_CPD and gaussian_CPD so they can now be created
wolffd@0: without having to pass in a bnet.
wolffd@0:
wolffd@0:
- Finished mk_fgraph_given_ev. See the fg* files in examples/static
wolffd@0: for demos of factor graphs (work in progress).
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 8/22/01
wolffd@0:
wolffd@0:
wolffd@0: - Removed jtree_compiled_inf_engine,
wolffd@0: since the C code it generated was so big that it would barf on large
wolffd@0: models.
wolffd@0:
wolffd@0:
- Tidied up the potentials/Tables directory.
wolffd@0: Removed mk_marg/mult_ndx.c,
wolffd@0: which have been superseded by the much faster mk_marg/mult_index.c
wolffd@0: (written by Wei Hu).
wolffd@0: Renamed the Matlab versions mk_marginalise/multiply_table_ndx.m
wolffd@0: to mk_marg/mult_index.m, for compatibility with the C versions.
wolffd@0: Note: nobody calls these routines anymore!
wolffd@0: (jtree_C_inf_engine/enter_softev.c has them built-in.)
wolffd@0: Removed mk_ndx.c, which was only used by jtree_compiled.
wolffd@0: Removed mk_cluster_clq_ndx.m, mk_CPD_clq_ndx, and marginalise_table.m
wolffd@0: which were not used.
wolffd@0: Moved shrink_obs_dims_in_table.m to misc.
wolffd@0:
wolffd@0:
- In potentials/@dpot directory: removed multiply_by_pot_C_old.c.
wolffd@0: Now marginalize_pot.c can handle maximization,
wolffd@0: and divide_by_pot.c has been implemented.
wolffd@0: marginalize/multiply/divide_by_pot.m no longer have useC or genops options.
wolffd@0: (To get the C versions, use installC.m)
wolffd@0:
wolffd@0:
- Removed useC and genops options from jtree_inf_engine.m
wolffd@0: To use the C versions, install the C code.
wolffd@0:
wolffd@0:
- Updated BNT/installC.m.
wolffd@0:
wolffd@0:
- Added fclose to @loopy_pearl_inf/enter_evidence.
wolffd@0:
wolffd@0:
- Changes to MPE routines in BNT/general.
wolffd@0: The maximize parameter is now specified inside enter_evidence
wolffd@0: instead of when the engine is created.
wolffd@0: Renamed calc_mpe_given_inf_engine to just calc_mpe.
wolffd@0: Added Ron Zohar's optional fix to handle the case of ties.
wolffd@0: Now returns log-likelihood instead of likelihood.
wolffd@0: Added calc_mpe_global.
wolffd@0: Removed references to genops in calc_mpe_bucket.m
wolffd@0: Test file is now called mpe1.m
wolffd@0:
wolffd@0:
- For DBN inference, filter argument is now passed by name,
wolffd@0: as is maximize. This is NOT BACKWARDS COMPATIBLE.
wolffd@0:
wolffd@0:
- Removed @loopy_dbn_inf_engine, which was too complicated.
wolffd@0: In the future, a new version, which applies static loopy to the
wolffd@0: unrolled DBN, will be provided.
wolffd@0:
wolffd@0:
- discrete_CPD class now contains the family sizes and supports the
wolffd@0: method dom_sizes. This is because it could not access the child field
wolffd@0: CPD.sizes, and mysize(CPT) may give the wrong answer.
wolffd@0:
wolffd@0:
- Removed all functions of the form CPD_to_xxx, where xxx = dpot, cpot,
wolffd@0: cgpot, table, tables. These have been replaced by convert_to_pot,
wolffd@0: which takes a pot_type argument.
wolffd@0: @discrete_CPD calls convert_to_table to implement a default
wolffd@0: convert_to_pot.
wolffd@0: @discrete_CPD calls CPD_to_CPT to implement a default
wolffd@0: convert_to_table.
wolffd@0: The convert_to_xxx routines take fewer arguments (no need to pass in
wolffd@0: the globals node_sizes and cnodes!).
wolffd@0: Eventually, convert_to_xxx will be vectorized, so it will operate on
wolffd@0: all nodes in the same equivalence class "simultaneously", which should
wolffd@0: be significantly quicker, at least for Gaussians.
wolffd@0:
wolffd@0:
- Changed discrete_CPD/sample_node and prob_node to use
wolffd@0: convert_to_table, instead of CPD_to_CPT, so mlp/softmax nodes can
wolffd@0: benefit.
wolffd@0:
wolffd@0:
- Removed @tabular_CPD/compute_lambda_msg_fast and
wolffd@0: private/prod_CPD_and_pi_msgs_fast, since no one called them.
wolffd@0:
wolffd@0:
- Renamed compute_MLE to learn_params,
wolffd@0: by analogy with bayes_update_params (also because it may compute a
wolffd@0: MAP estimate).
wolffd@0:
wolffd@0:
- Renamed set_params to set_fields
wolffd@0: and get_params to get_field for CPD and dpot objects, to
wolffd@0: avoid confusion with the parameters of the CPD.
wolffd@0:
wolffd@0:
- Removed inference/doc, which has been superseded
wolffd@0: by the web page.
wolffd@0:
wolffd@0:
- Removed inference/static/@stab_cond_gauss_inf_engine, which is
wolffd@0: broken, and all references to stable CG.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 8/12/01
wolffd@0:
wolffd@0: - I removed potentials/@dpot/marginalize_pot_max.
wolffd@0: Now marginalize_pot for all potential classes take an optional third
wolffd@0: argument, specifying whether to sum out or max out.
wolffd@0: The dpot class also takes in optional arguments specifying whether to
wolffd@0: use C or genops (the global variable USE_GENOPS has been eliminated).
wolffd@0:
wolffd@0:
- potentials/@dpot/marginalize_pot has been simplified by assuming
wolffd@0: that 'onto' is always in ascending order (i.e., we remove
wolffd@0: Maynard-Reid's patch). This is to keep the code identical to the C
wolffd@0: version and the other class implementations.
wolffd@0:
wolffd@0:
- Added Ron Zohar's general/calc_mpe_bucket function,
wolffd@0: and my general/calc_mpe_given_inf_engine, for calculating the most
wolffd@0: probable explanation.
wolffd@0:
wolffd@0:
wolffd@0:
- Added Wei Hu's jtree_C_inf_engine.
wolffd@0: enter_softev.c is about 2 times faster than enter_soft_evidence.m.
wolffd@0:
wolffd@0:
- Added the latest version of jtree_compiled_inf_engine by Wei Hu.
wolffd@0: The 'C' ndx_method now calls potentials/Tables/mk_marg/mult_index,
wolffd@0: and the 'oldC' ndx_method calls potentials/Tables/mk_marg/mult_ndx.
wolffd@0:
wolffd@0:
- Added potentials/@dpot/marginalize_pot_C.c and
wolffd@0: multiply_by_pot_C.c by Wei Hu.
wolffd@0: These can be called by setting the 'useC' argument in
wolffd@0: jtree_inf_engine.
wolffd@0:
wolffd@0:
- Added BNT/installC.m to compile all the mex files.
wolffd@0:
wolffd@0:
- Renamed prob_fully_instantiated_bnet to log_lik_complete.
wolffd@0:
wolffd@0:
- Added Shan Huang's unfinished stable conditional Gaussian
wolffd@0: inference routines.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 7/13/01
wolffd@0:
wolffd@0: - Added the latest version of jtree_compiled_inf_engine by Wei Hu.
wolffd@0:
- Added the genops class by Doug Schwarz (see
wolffd@0: BNT/genopsfun/README). This provides a 1-2x speed-up of
wolffd@0: potentials/@dpot/multiply_by_pot and divide_by_pot.
wolffd@0:
- The function BNT/examples/static/qmr_compiled compares the
wolffd@0: performance gains of these new functions.
wolffd@0:
wolffd@0:
wolffd@0: - 7/6/01
wolffd@0:
wolffd@0: - Made bk_inf_engine use the name/value argument syntax. This can
wolffd@0: now do max-product (Viterbi) as well as sum-product
wolffd@0: (forward-backward).
wolffd@0:
- Changed examples/static/mfa1 to use the new name/value argument
wolffd@0: syntax.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 6/28/01
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - Released version 3.
wolffd@0: Version 3 is considered a major new release
wolffd@0: since it is not completely backwards compatible with V2.
wolffd@0: V3 supports decision and utility nodes, loopy belief propagation on
wolffd@0: general graphs (including undirected), structure learning for non-tabular nodes,
wolffd@0: a simplified way of handling optional
wolffd@0: arguments to functions,
wolffd@0: and many other features which are described below.
wolffd@0: In addition, the documentation has been substantially rewritten.
wolffd@0:
wolffd@0:
- The following functions can now take optional arguments specified
wolffd@0: as name/value pairs, instead of passing arguments in a fixed order:
wolffd@0: mk_bnet, jtree_inf_engine, tabular_CPD, gaussian_CPD, softmax_CPD, mlp_CPD,
wolffd@0: enter_evidence.
wolffd@0: This is very helpful if you want to use default values for most parameters.
wolffd@0: The functions remain backwards compatible with BNT2.
wolffd@0:
wolffd@0:
- dsoftmax_CPD has been renamed softmax_CPD, replacing the older
wolffd@0: version of softmax. The directory netlab2 has been updated, and
wolffd@0: contains weighted versions of some of the learning routines in netlab.
wolffd@0: (This code is still being developed by P. Brutti.)
wolffd@0:
wolffd@0:
- The "fast" versions of the inference engines, which generated
wolffd@0: matlab code, have been removed.
wolffd@0: @jtree_compiled_inf_engine now generates C code.
wolffd@0: (This feature is currently being developed by Wei Hu of Intel (China),
wolffd@0: and is not yet ready for public use.)
wolffd@0:
wolffd@0:
- CPD_to_dpot, CPD_to_cpot, CPD_to_cgpot and CPD_to_upot
wolffd@0: are in the process of being replaced by convert_to_pot.
wolffd@0:
wolffd@0:
- determine_pot_type now takes as arguments (bnet, onodes)
wolffd@0: instead of (onodes, cnodes, dag),
wolffd@0: so it can detect the presence of utility nodes as well as continuous
wolffd@0: nodes.
wolffd@0: Hence this function is not backwards compatible with BNT2.
wolffd@0:
wolffd@0:
- The structure learning code (K2, mcmc) now works with any node
wolffd@0: type, not just tabular.
wolffd@0: mk_bnets_tabular has been eliminated.
wolffd@0: bic_score_family and dirichlet_score_family will be replaced by score_family.
wolffd@0: Note: learn_struct_mcmc has a new interface that is not backwards
wolffd@0: compatible with BNT2.
wolffd@0:
wolffd@0:
- update_params_complete has been renamed bayes_update_params.
wolffd@0: Also, learn_params_tabular has been replaced by learn_params, which
wolffd@0: works for any CPD type.
wolffd@0:
wolffd@0:
- Added decision/utility nodes.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 6/6/01
wolffd@0:
wolffd@0: - Added soft evidence to jtree_inf_engine.
wolffd@0:
- Changed the documentation slightly (added soft evidence and
wolffd@0: parameter tying, and separated parameter and structure learning).
wolffd@0:
- Changed the parameters of determine_pot_type, so it no longer
wolffd@0: needs to be passed a DAG argument.
wolffd@0:
- Fixed parameter tying in mk_bnet (num. CPDs now equals num. equiv
wolffd@0: classes).
wolffd@0:
- Made learn_struct_mcmc work in matlab version 5.2 (thanks to
wolffd@0: Nimrod Megiddo for finding this bug).
wolffd@0:
- Made 'acyclic.m' work for undirected graphs.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 5/23/01
wolffd@0:
wolffd@0: - Added Tamar Kushnir's code for the IC* algorithm
wolffd@0: (learn_struct_pdag_ic_star). This learns the
wolffd@0: structure of a PDAG, and can identify the presence of latent
wolffd@0: variables.
wolffd@0:
wolffd@0:
- Added Yair Weiss's code for computing the MAP assignment using
wolffd@0: junction tree (i.e., a new method called @dpot/marginalize_pot_max
wolffd@0: instead of marginalize_pot.)
wolffd@0:
wolffd@0:
- Added @discrete_CPD/prob_node in addition to log_prob_node to handle
wolffd@0: deterministic CPDs.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 5/12/01
wolffd@0:
wolffd@0: - Pierpaolo Brutti updated his mlp and dsoftmax CPD classes,
wolffd@0: and improved the HME code.
wolffd@0:
wolffd@0:
- HME example now added to web page. (The previous example was non-hierarchical.)
wolffd@0:
wolffd@0:
- Philippe Leray (author of the French documentation for BNT)
wolffd@0: pointed out that I was including netlab.tar unnecessarily.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 5/4/01
wolffd@0:
wolffd@0: - Added mlp_CPD which defines a CPD as a (conditional) multi-layer perceptron.
wolffd@0: This class was written by Pierpaolo Brutti.
wolffd@0:
wolffd@0:
- Added hierarchical mixtures of experts demo (due to Pierpaolo Brutti).
wolffd@0:
wolffd@0:
- Fixed some bugs in dsoftmax_CPD.
wolffd@0:
wolffd@0:
- Now the BNT distribution includes the whole
wolffd@0: Netlab library in a
wolffd@0: subdirectory.
wolffd@0: It also includes my HMM and Kalman filter toolboxes, instead of just
wolffd@0: fragments of them.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 5/2/01
wolffd@0:
wolffd@0: - gaussian_inf_engine/enter_evidence now correctly returns the
wolffd@0: loglik, even if all nodes are instantiated (bug fix due to
wolffd@0: Michael Robert James).
wolffd@0:
wolffd@0:
- Added dsoftmax_CPD which allows softmax nodes to have discrete
wolffd@0: and continuous parents; the discrete parents act as indices into the
wolffd@0: parameters for the continuous node, by analogy with conditional
wolffd@0: Gaussian nodes. This class was written by Pierpaolo Brutti.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 3/27/01
wolffd@0:
wolffd@0: - learn_struct_mcmc no longer returns sampled_bitv.
wolffd@0:
- Added mcmc_sample_to_hist to post-process the set of samples.
wolffd@0:
wolffd@0:
wolffd@0: - 3/21/01
wolffd@0:
wolffd@0: - Changed license from UC to GNU Library GPL.
wolffd@0:
wolffd@0:
- Made all CPD constructors accept 0 arguments, so now bnets can be
wolffd@0: saved to and loaded from files.
wolffd@0:
wolffd@0:
- Improved the implementation of sequential and batch Bayesian
wolffd@0: parameter learning for tabular CPDs with completely observed data (see
wolffd@0: log_marg_lik_complete and update_params_complete). This code also
wolffd@0: handles interventional data.
wolffd@0:
wolffd@0:
- Added MCMC structure learning for completely observed, discrete,
wolffd@0: static BNs.
wolffd@0:
wolffd@0:
- Started implementing Bayesian estimation of linear Gaussian
wolffd@0: nodes. See root_gaussian_CPD and
wolffd@0: linear_gaussian_CPD. The old gaussian_CPD class has not been changed.
wolffd@0:
wolffd@0:
- Renamed evaluate_CPD to log_prob_node, and simplified its
wolffd@0: arguments.
wolffd@0:
wolffd@0:
- Renamed sample_CPD to sample_node, simplified its
wolffd@0: arguments, and vectorized it.
wolffd@0:
wolffd@0:
- Renamed "learn_params_tabular" to "update_params_complete".
wolffd@0: This does Bayesian updating, but no longer computes the BIC score.
wolffd@0:
wolffd@0:
- Made routines for completely observed networks (sampling,
wolffd@0: complete data likelihood, etc.) handle cell arrays or regular arrays,
wolffd@0: which are faster.
wolffd@0: If some nodes are not scalars, or are hidden, you must use cell arrays.
wolffd@0: You must convert to a cell array before passing to an inference routine.
wolffd@0:
wolffd@0:
- Fixed bug in gaussian_CPD constructor. When creating CPD with
wolffd@0: more than 1 discrete parent with random parameters, the matrices were
wolffd@0: the wrong shape (Bug fix due to Xuejing Sun).
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 11/24/00
wolffd@0:
wolffd@0: - Renamed learn_params and learn_params_dbn to learn_params_em/
wolffd@0: learn_params_dbn_em. The return arguments are now [bnet, LLtrace,
wolffd@0: engine] instead of [engine, LLtrace].
wolffd@0:
- Added structure learning code for static nets (K2, PC).
wolffd@0:
- Renamed learn_struct_inter_full_obs as learn_struct_dbn_reveal,
wolffd@0: and reimplemented it to make it simpler and faster.
wolffd@0:
- Added sequential Bayesian parameter learning (learn_params_tabular).
wolffd@0:
- Major rewrite of the documentation.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 5/22/00
wolffd@0:
wolffd@0: - Added online filtering and prediction.
wolffd@0:
- Added the factored frontier and loopy_dbn algorithms.
wolffd@0:
- Separated the online user manual into two, for static and dynamic
wolffd@0: networks.
wolffd@0:
wolffd@0:
- Added a counter to the BNT web page.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 4/27/00
wolffd@0:
wolffd@0: - Fixed the typo in bat1.m
wolffd@0:
- Added preliminary code for online inference in DBNs
wolffd@0:
- Added coupled HMM example
wolffd@0:
wolffd@0:
wolffd@0: - 4/23/00
wolffd@0:
wolffd@0: - Fixed the bug in the fast inference routines where the indices
wolffd@0: are empty (arises in bat1.m).
wolffd@0:
- Sped up marginal_family for the fast engines by precomputing indices.
wolffd@0:
wolffd@0:
wolffd@0: - 4/17/00
wolffd@0:
wolffd@0: - Simplified implementation of BK_inf_engine by using soft evidence.
wolffd@0:
- Added jtree_onepass_inf_engine (which computes a single marginal)
wolffd@0: and modified jtree_dbn_fast to use it.
wolffd@0:
wolffd@0:
wolffd@0: - 4/14/00
wolffd@0:
wolffd@0: - Added fast versions of jtree and BK, which are
wolffd@0: designed for models where the division into hidden/observed is fixed,
wolffd@0: and all hidden variables are discrete. These routines are 2-3 times
wolffd@0: faster than their non-fast counterparts.
wolffd@0:
wolffd@0:
- Added graph drawing code
wolffd@0: contributed by Ali Taylan Cemgil from the University of Nijmegen.
wolffd@0:
wolffd@0:
wolffd@0: - 4/10/00
wolffd@0:
wolffd@0: - Distinguished cnodes and cnodes_slice in DBNs so that kalman1
wolffd@0: works with BK.
wolffd@0:
- Removed dependence on cellfun (which only exists in matlab 5.3)
wolffd@0: by adding isemptycell. Now the code works in 5.2.
wolffd@0:
- Changed the UC copyright notice.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 3/29/00
wolffd@0:
wolffd@0: - Released BNT 2.0, now with objects!
wolffd@0: Here are the major changes.
wolffd@0:
wolffd@0:
- There are now 3 classes of objects in BNT:
wolffd@0: Conditional Probability Distributions, potentials (for junction tree),
wolffd@0: and inference engines.
wolffd@0: Making an inference algorithm (junction tree, sampling, loopy belief
wolffd@0: propagation, etc.) an object might seem counter-intuitive, but in
wolffd@0: fact turns out to be a good idea, since the code and documentation
wolffd@0: can be made modular.
wolffd@0: (In Java, each algorithm would be a class that implements the
wolffd@0: inferenceEngine interface. Since Matlab doesn't support interfaces,
wolffd@0: inferenceEngine is an abstract (virtual) base class.)
wolffd@0:
wolffd@0:
wolffd@0:
-
wolffd@0: In version 1, instead of Matlab's built-in objects,
wolffd@0: I used structs and a
wolffd@0: simulated dispatch mechanism based on the type-tag system in the
wolffd@0: classic textbook by Abelson
wolffd@0: and Sussman ("Structure and Interpretation of Computer Programs",
wolffd@0: MIT Press, 1985). This required editing the dispatcher every time a
wolffd@0: new object type was added. It also required unique (and hence long)
wolffd@0: names for each method, and allowed the user unrestricted access to
wolffd@0: the internal state of objects.
wolffd@0:
wolffd@0:
wolffd@0:
- The Bayes net itself is now a lightweight struct, and can be
wolffd@0: used to specify a model independently of the inference algorithm used
wolffd@0: to process it.
wolffd@0: In version 1, the inference engine was stored inside the Bayes net.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 11/24/99
wolffd@0:
wolffd@0: - Added fixed lag smoothing, online EM and the ability to learn
wolffd@0: switching HMMs (POMDPs) to the HMM toolbox.
wolffd@0:
- Renamed the HMM toolbox function 'mk_dhmm_obs_mat' to
wolffd@0: 'mk_dhmm_obs_lik', and similarly for ghmm and mhmm. Updated references
wolffd@0: to these functions in BNT.
wolffd@0:
- Changed the order of return params from kalman_filter to make it
wolffd@0: more natural. Updated references to this function in BNT.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/27/99
wolffd@0:
wolffd@0: - Fixed line 42 of potential/cg/marginalize_cgpot and lines 32-39 of bnet/add_evidence_to_marginal
wolffd@0: (thanks to Rainer Deventer for spotting these bugs!)
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - 10/21/99
wolffd@0:
wolffd@0: - Completely changed the blockmatrix class to make its semantics
wolffd@0: more sensible. The constructor is not backwards compatible!
wolffd@0:
wolffd@0:
wolffd@0: - 10/6/99
wolffd@0:
wolffd@0: - Fixed all_vals = cat(1, vals{:}) in user/enter_evidence
wolffd@0:
- Vectorized ind2subv and subv2ind and removed the C versions.
wolffd@0:
- Made mk_CPT_from_mux_node much faster by having it call vectorized
wolffd@0: ind2subv
wolffd@0:
- Added Sondhauss's bug fix to line 68 of bnet/add_evidence_to_marginal
wolffd@0:
- In dbn/update_belief_state, instead of adding eps to likelihood if 0,
wolffd@0: we leave it at 0, and set the scale factor to 0 instead of dividing.
wolffd@0:
wolffd@0:
wolffd@0: - 8/19/99
wolffd@0:
wolffd@0: - Added Ghahramani's mfa code to examples directory to compare with
wolffd@0: fa1, which uses BNT
wolffd@0:
- Changed all references of assoc to stringmatch (e.g., in
wolffd@0: examples/mk_bat_topology)
wolffd@0:
wolffd@0:
wolffd@0: - June 1999
wolffd@0:
wolffd@0: - Released BNT 1.0 on the web.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - August 1998
wolffd@0:
wolffd@0: - Released BNT 0.0 via email.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: - October 1997
wolffd@0:
wolffd@0: - First started working on Matlab version of BNT.
wolffd@0:
wolffd@0:
wolffd@0: - Summer 1997
wolffd@0:
wolffd@0: - First started working on C++ version of BNT while working at DEC (now Compaq) CRL.
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0:
wolffd@0: