<title>History of changes to BNT</title>

<h1>History of changes to BNT</h1>


<h2>Changes since 4 Oct 2007</h2>

<pre>
- 19 Oct 07 murphyk

* BNT\CPDs\@noisyor_CPD\CPD_to_CPT.m: the second half of the file was a repeat
of the first half and has been deleted (thanks to Karl Kuschner).

* KPMtools\myismember.m should return a logical for use in "assert", so the line
p=logical(p); was added at the end; this prevents "assert" from failing on an
integer input. (thanks to Karl Kuschner)


- 17 Oct 07 murphyk

* Updated subv2ind and ind2subv in KPMtools to Tom Minka's implementation.
His ind2subv is faster (vectorized), but I had to modify it so it
matched the behavior of my version when called with siz=[].
His subv2ind is slightly simpler than mine because he does not treat
the siz=[2 2 ... 2] case separately.
Note: there is now no need to ever use the C versions of these
functions (or any others, for that matter).

* Removed BNT/add_BNT_to_path since it is no longer needed.


- 4 Oct 07 murphyk

* Moved the code from sourceforge to the UBC website; made version 1.0.4.

* @pearl_inf_engine/pearl_inf_engine line 24: the default
argument for protocol changed from [] to 'parallel'.
Also, changed private/parallel_protocol so it doesn't write to an
empty file id (Matlab 7 issue).

* Added foptions (Matlab 7 issue).

* Changed genpathKPM to exclude svn. Put it in the toplevel directory to
massively simplify the installation process.

</pre>


<h2>Sourceforge changelog</h2>

BNT was first ported to sourceforge on 28 July 2001 by yozhik.
BNT was removed from sourceforge on 4 October 2007 by Kevin Murphy;
that version is cached as <a
href="FullBNT-1.0.3.zip">FullBNT-1.0.3.zip</a>.
See <a href="ChangeLog.Sourceforge.txt">Changelog from
sourceforge</a> for a history of that version of the code,
which formed the basis of the branch currently on Murphy's web page.


<h2> Changes from August 1998 -- July 2004</h2>

Kevin Murphy made the following changes to his own private copy.
(Other small changes were made between July 2004 and October 2007, but were
not documented.)
These may or may not be reflected in the sourceforge version of the
code (which was independently maintained).


<ul>
<li> 9 June 2004
<ul>
<li> Changed tabular_CPD/learn_params back to the old syntax, to make it
compatible with gaussian_CPD/learn_params (and re-enabled
generic_CPD/learn_params).
Modified learning/learn_params.m and learning/score_family
appropriately.
(In particular, I undid the change Sonia Leach had to make to
score_family to handle this asymmetry.)
Added examples/static/gaussian2 to test this new functionality.

<li> Added bp_mrf2 (for generic pairwise MRFs) to
inference/static/@bp_belprop_mrf2_inf_engine. [MRFs are not
"officially" supported in BNT, so this code is just for expert
hackers.]

<li> Added examples/static/nodeorderExample.m to illustrate the importance
of using a topological ordering.

<li> Ran dos2unix on all *.c files within BNT to eliminate compiler
warnings.

</ul>

<li> 7 June 2004
<ul>
<li> Replaced normaliseC with normalise in HMM/fwdback, for maximum
portability (and a negligible loss in speed).
<li> Ensured the FullBNT versions of HMM, KPMstats, etc. were as up to date
as the stand-alone versions.
<li> Changed add_BNT_to_path so it no longer uses addpath(genpath()),
which caused old versions of files to mask new ones.
</ul>

<li> 18 February 2004
<ul>
<li> A few small bug fixes to BNT, as posted to the Yahoo group.
<li> Several new functions added to KPMtools, KPMstats and Graphviz
(none needed by BNT).
<li> Added CVS to some of my toolboxes.
</ul>

<li> 30 July 2003
<ul>
<li> qian.diao fixed @mpot/set_domain_pot and @cgpot/set_domain_pot.
<li> Marco Grzegorczyk found, and Sonia Leach fixed, a bug in
do_removal inside learn_struct_mcmc.
</ul>

<li> 28 July 2003
<ul>
<li> Sebastian Luehr provided 2 minor bug fixes, to HMM/fwdback (if any(scale==0))
and FullBNT\HMM\CPDs\@hhmmQ_CPD\update_ess.m (wrong transpose).
</ul>

<li> 8 July 2003
<ul>
<li> Removed the buggy BNT/examples/static/MRF2/Old/mk_2D_lattice.m, which was
masking the correct graph/mk_2D_lattice.
<li> Fixed a bug in graph/mk_2D_lattice_slow in the non-wrap-around case
(line 78).
</ul>

<li> 2 July 2003
<ul>
<li> Sped up normalize(., 1) in KPMtools by avoiding a general repmat
(a sketch of the idea is shown after this list).
<li> Added assign_cols and marginalize_table to KPMtools.
</ul>
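The following is a minimal sketch of the kind of column-normalization speedup
described above; it is not the actual KPMtools code, and the variable names are
made up for illustration.

<pre>
A = rand(4, 3);              % example input
[m, n] = size(A);
s = sum(A, 1);               % column sums
s = s + (s == 0);            % guard against all-zero columns
B = A ./ s(ones(m, 1), :);   % expand the row vector by indexing, avoiding repmat
</pre>
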

<li> 29 May 2003
<ul>
<li> Modified KPMstats/mixgauss_Mstep so it repmats Sigma in the tied
covariance case (bug found by galt@media.mit.edu).

<li> Bob Welch found a bug in gaussian_CPD/maximize_params in the way
cpsz was computed.

<li> Added KPMstats/mixgauss_em, because my code is easier to
understand/modify than netlab's (at least for me!).

<li> Modified BNT/examples/dynamic/viterbi1 to call multinomial_prob
instead of mk_dhmm_obs_lik.

<li> Moved the Parzen window and partitioned models code to KPMstats.

<li> Rainer Deventer fixed some bugs in his scgpot code, as follows:
1. complement_pot.m:
problems occurred for probabilities equal to zero, resulting in a
division-by-zero error.
<br>
2. normalize_pot.m:
this function is used during the calculation of the log-likelihood.
For a probability of zero, a "log of zero" warning occurs. I have not
really fixed the bug; as a workaround, I suggest calculating the
likelihood based on realmin (the smallest real number) instead of
zero.
<br>
3. recursive_combine_pots:
at the beginning of the function there was no test for the trivial case,
which defines the combination of two potentials as equal to the direct
combination. The result could be an infinite recursion, which leads to
a stack overflow in Matlab.
</ul>

<li> 11 May 2003
<ul>
<li> Fixed a bug in gaussian_CPD/maximize_params so it is compatible
with the new clg_Mstep routine.
<li> Modified KPMstats/cwr_em to handle the single-cluster case
separately.
<li> Fixed a bug in netlab/gmminit.
<li> Added hash tables to KPMtools.
</ul>

<li> 4 May 2003
<ul>
<li>
Renamed many functions in KPMstats so the name of the
distribution/model type comes first, e.g.,
Mstep_clg -> clg_Mstep,
Mstep_cond_gauss -> mixgauss_Mstep.
Also, renamed the eval_pdf_xxx functions to xxx_prob, e.g.,
eval_pdf_cond_mixgauss -> mixgauss_prob.
This is simpler and shorter.

<li>
Renamed many functions in the HMM toolbox so the name of the
distribution/model type comes first, e.g.,
log_lik_mhmm -> mhmm_logprob, etc.
mk_arhmm_obs_lik has finally been re-implemented in terms of clg_prob
and mixgauss_prob (for slice 1).
Removed the Demos directory, and put the demos in the main directory.
This code is not backwards compatible.

<li> Removed some of the my_xxx functions from KPMstats (these were
mostly copies of functions from the Mathworks stats toolbox).

<li> Modified BNT to take into account the changes to the KPMstats and
HMM toolboxes.

<li> Fixed KPMstats/Mstep_clg (now called clg_Mstep) for the spherical Gaussian case.
(The trace was wrongly parenthesised, and I used YY instead of YTY.
The spherical case now gives the same result as the full case
for cwr_demo.)
Also, mixgauss_Mstep now adds 0.01 to the ML estimate of Sigma,
to act as a regularizer (it used to add 0.01 to E[YY'], but this was
ignored in the spherical case).

<li> Added cluster-weighted regression to KPMstats.

<li> Added KPMtools/strmatch_substr.
</ul>

<li> 28 Mar 03
<ul>
<li> Added mc_stat_distrib and eval_pdf_cond_prod_parzen to KPMstats.
<li> Fixed a GraphViz/arrow.m incompatibility with Matlab 6.5
(replaced all NaNs with 0).
Modified GraphViz/graph_to_dot so it also works on Windows.
<li> I removed dag_to_jtree and added graph_to_jtree to the graph
toolbox; the latter expects an undirected graph as input.
<li> I added triangulate_2Dlattice_demo.m to graph.
<li> Rainer Deventer fixed the stable conditional Gaussian potential
classes (scgpot and scgcpot) and inference engine
(stab_cond_gauss_inf_engine).
<li> Rainer Deventer added (stable) higher-order Markov models (see
inference/dynamic/@stable_ho_inf_engine).
</ul>

<li> 14 Feb 03
<ul>
<li> Simplified learning/learn_params so it no longer returns the BIC
score. Also, simplified @tabular_CPD/learn_params so it only takes
local evidence.
Added learn_params_dbn, which does ML estimation of fully observed
DBNs.
<li> Vectorized KPMstats/eval_pdf_cond_mixgauss for the tied Sigma
case (much faster!).
Also, it now works in the log domain to prevent underflow.
eval_pdf_mixgauss now calls eval_pdf_cond_mixgauss and inherits these benefits.
<li> add_BNT_to_path now calls genpath with 2 arguments if using
Matlab version 5.
</ul>

<li> 30 Jan 03
<ul>
<li> Vectorized KPMstats/eval_pdf_cond_mixgauss for the scalar Sigma
case (much faster!).
<li> Renamed mk_dotfile_from_hmm to draw_hmm and moved it to the
GraphViz library.
<li> Rewrote @gaussian_CPD/maximize_params.m so it calls
KPMstats/Mstep_clg.
This fixes a bug when using clamped means (found by Rainer Deventer
and Victor Eruhimov)
and a bug when using a Wishart prior (no gamma term in the denominator).
It is also easier to read.
I rewrote the technical report, re-deriving all the equations in
clearer notation, making the solution to the bugs more obvious.
(See www.ai.mit.edu/~murphyk/Papers/learncg.pdf)
Modified Mstep_cond_gauss to handle priors.
<li> Fixed a bug reported by Ramgopal Mettu in which add_BNT_to_path
called genpath with only 1 argument, whereas version 5 requires 2.
<li> Fixed installC and uninstallC to search in FullBNT/BNT.
</ul>

<li> 24 Jan 03
<ul>
<li> Major simplification of the HMM code.
The API is not backwards compatible.
No new functionality has been added, however.
There is now only one fwdback function, instead of 7;
different behaviors are controlled through optional arguments.
I renamed 'evaluate observation likelihood' (local evidence)
to 'evaluate conditional pdf', since this is more general.
That is, I renamed
mk_dhmm_obs_lik to eval_pdf_cond_multinomial,
mk_ghmm_obs_lik to eval_pdf_cond_gauss,
mk_mhmm_obs_lik to eval_pdf_cond_mog.
These functions have been moved to KPMstats,
so they can be used by other toolboxes.
ghmms have been eliminated, since they are just a special case of
mhmms with M=1 mixture component.
mixgauss HMMs can now handle a different number of
mixture components per state.
init_mhmm has been eliminated, and replaced with init_cond_mixgauss
(in KPMstats) and mk_leftright/rightleft_transmat.
learn_dhmm can no longer handle inputs (although this is easy to add back).
</ul>

<li> 20 Jan 03
<ul>
<li> Added arrow.m to the GraphViz directory, and commented out line 922,
in response to a bug report.
</ul>

<li> 18 Jan 03
<ul>
<li> Major restructuring of the BNT file structure:
all code that is not specific to Bayes nets has been removed;
these packages must be downloaded separately. (Or just download FullBNT.)
This makes it easier to ensure different toolboxes are consistent.
misc has been slimmed down and renamed KPMtools, so it can be shared by other toolboxes,
such as HMM and Kalman; some of the code has been moved to BNT/general.
The Graphics directory has been slimmed down and renamed GraphViz.
The graph directory now has no dependence on BNT (dag_to_jtree has
been renamed graph_to_jtree and has a new API).
netlab2 no longer contains any netlab files, only netlab extensions.
None of the functionality has changed.
</ul>

<li> 11 Jan 03
<ul>
<li> jtree_dbn_inf_engine can now support soft evidence.

<li> Rewrote graph/dfs to make it clearer.
The return arguments have changed, as has mk_rooted_tree.
The acyclicity check for large undirected graphs can cause a stack overflow.
It turns out that this was not a bug, but is because Matlab's stack depth
bound is very low by default.

<li> Renamed examples/dynamic/filter2 to filter_test1, so it does not
conflict with the filter2 function in the image processing toolbox.

<li> Ran test_BNT on various versions of Matlab to check compatibility.
On Matlab 6.5 (r13), elapsed time = 211s, cpu time = 204s.
On Matlab 6.1 (r12), elapsed time = 173s, cpu time = 164s.
On Matlab 5.3 (r11), elapsed time = 116s, cpu time = 114s.
So Matlab is apparently getting slower with time!!
(All results were on a Linux PIII machine.)
</ul>

<li> 14 Nov 02
<ul>
<li> Removed all the ndx inference routines, since they are only
marginally faster on toy problems,
and are slower on large problems due to having to store and look up
the indices (which causes cache misses).
In particular, I removed jtree_ndx_inf_eng and jtree_ndx_dbn_inf_eng, all the *ndx*
routines from potentials/Tables, and all the UID stuff from
add_BNT_to_path,
thus simplifying the code.
This required fixing hmm_(2TBN)_inf_engine/marginal_nodes\family,
and updating installC.

<li> Removed jtree_C_inf_engine and jtree_C_dbn_inf_engine.
The former is basically the same as using jtree_inf_engine with
multiply_by_table.c and marginalize_table.c.
The latter benefited slightly by assuming potentials were tables
(arrays, not objects), but these negligible savings don't justify the
complexity and code duplication.

<li> Removed stab_cond_gauss_inf_engine and
scg_unrolled_dbn_inf_engine,
written by shan.huang@intel.com, since the code was buggy.

<li> Removed potential_engine, which was only experimental anyway.

</ul>

<li> 13 Nov 02
<ul>
<li> <b>Released version 5</b>.
The previous version, released on 7/28/02, is available
<a href="BNT4.zip">here</a>.

<li> Moved the code and documentation to MIT.

<li> Added repmat.c from Thomas Minka's lightspeed library.
Modified it so it can return an empty matrix.

<li> Tomas Kocka fixed a bug in the BDeu option for tabular_CPD,
and contributed graph/dag_to_eg, to convert to essential graphs.

<!--<li> Wrote a <a href="../Papers/fastmult.pdf">paper</a> which explains
the ndx methods and the ndx cache BNT uses for fast
multiplication/marginalization of multi-dimensional arrays.
-->

<li> Modified the definition of hhmmQ_CPD, so that Qps can now accept
parents in either the current or previous slice.

<li> Added the hhmm2Q_CPD class, which is simpler than hhmmQ (no embedded
sub-CPDs, etc.), and which allows the conditioning parents, Qps, to
be before (in the topological ordering) the F or Q(t-1) nodes.
See BNT/examples/dynamic/HHMM/Map/mk_map_hhmm for an example.
</ul>

<li> 7/28/02
<ul>
<li> Changed graph/best_first_elim_order from min-fill to min-weight.
<li> Ernest Chan fixed a bug in Kalman/sample_lds (G{i} becomes G{m} in
line 61).
<li> Tal Blum <bloom@cs.huji.ac.il> fixed a bug in HMM/init_ghmm (Q
becomes K, the number of states).
<li> Fixed jtree_2tbn_inf_engine/set_fields so it correctly sets the
maximize flag to 1 even in subengines.
<li> Gary Bradski made a simple modification to the PC structure learning algorithm so you can pass it an
adjacency matrix as a constraint. Also, CovMat.m reads a file and
produces a covariance matrix.
<li> KNOWN BUG in CPDs/@hhmmQ_CPD/update_ess.m at line 72, caused by
examples/dynamic/HHMM/Square/learn_square_hhmm_cts.m at line 57.
<li>
The old version is available from www.cs.berkeley.edu/~murphyk/BNT.24june02.zip
</ul>

<li> 6/24/02
<ul>
<li> Renamed dag_to_dot as graph_to_dot and added support for
undirected graphs.
<li> Changed the syntax for the HHMM CPD constructors: there is no need to specify d/D
anymore, so they can be used for more complex models.
<li> Removed the redundant first argument to mk_isolated_tabular_CPD.
</ul>

<li> 6/19/02
<ul>
<li>
Fixed the most probable explanation (MPE) code.
Replaced calc_mpe with find_mpe, which is now a method of certain
inference engines, e.g., jtree, belprop.
calc_mpe_global has become the find_mpe method of global_joint.
calc_mpe_bucket has become the find_mpe method of var_elim.
calc_mpe_dbn has become the find_mpe method of smoother.
These routines now correctly find the jointly most probable
explanation, instead of the marginally most probable assignments.
See examples/static/mpe1\mpe2 and examples/dynamic/viterbi1
for examples.
Removed the maximize flag from the constructor and enter_evidence
methods, since this no longer needs to be specified by the user.

<li> Rainer Deventer fixed a bug in
CPDs/@gaussian_CPD/update_ess.m:
now, hidden_cps = any(hidden_bitv(cps)), whereas it used to be
hidden_cps = all(hidden_bitv(cps)).

</ul>

<li> 5/29/02
<ul>
<li> CPDs/@gaussian_CPD/update_ess.m: fixed WX, WXX, WXY (thanks to Rainer Deventer and
Yohsuke Minowa for spotting the bug). Does the C version work??
<li> potentials/@cpot/mpot_to_cpot: fixed the K==0 case (thanks to Rainer Deventer).
<li> CPDs/@gaussian_CPD/log_prob_node now accepts non-cell-array data
on self (thanks to rishi <rishi@capsl.udel.edu> for catching this).
</ul>

<li> 5/19/02
<ul>

<!--
<li> Finally added <a href="../Papers/wei_ndx.ps.gz">paper</a> by Wei Hu (written
November 2001)
describing ndxB, ndxD, and ndxSD.
-->

<li> Wei Hu made the following changes.
<ul>
<li> Memory leak repairs:
a. distribute_evidence.c in the static/@jtree_C directory;
b. distribute_evidence.c in the static/@jtree_ndx directory;
c. marg_table.c in the Tables directory.

<li> Added "@jtree_ndx_2TBN_inf_engine" in the inference/online dir.

<li> Added "@jtree_sparse_inf_engine" in the inference/static dir.

<li> Added "@jtree_sparse_2TBN_inf_engine" in the inference/online dir.

<li> Modified "tabular_CPD.m" in the CPDs/@tabular_CPD dir, used for sparse support.

<li> In the "@discrete_CPD" dir:
a. modified "convert_to_pot.m", used for sparse support;
b. added "convert_to_sparse_table.c".

<li> In the "potentials/@dpot" dir:
a. removed "divide_by_pot.c" and "multiply_by_pot.c";
b. added "divide_by_pot.m" and "multiply_by_pot.m";
c. modified "dpot.m", "marginalize_pot.m" and "normalize_pot.m".

<li> In the "potentials/Tables" dir:
a. modified mk_ndxB.c (for speedup);
b. added "mult_by_table.m",
"divide_by_table.m",
"divide_by_table.c",
"marg_sparse_table.c",
"mult_by_sparse_table.c",
"divide_by_sparse_table.c".

<li> Modified "normalise.c" in the misc dir, used for sparse support.

<li> Added discrete2, discrete3, filter2 and filter3 as test applications in test_BNT.m.
Modified installC.m.
</ul>

<li> Kevin made the following changes related to strong junction
trees:
<ul>
<li> jtree_inf_engine line 75:
engine.root_clq = length(engine.cliques);
the last clique is guaranteed to be a strong root.

<li> dag_to_jtree line 38: [jtree, root, B, w] =
cliques_to_jtree(cliques, ns);
it never calls cliques_to_strong_jtree.

<li> strong_elim_order: use Ilya's code instead of topological sorting.
</ul>

<li> Kevin fixed CPDs/@generic_CPD/learn_params, so it always passes
in the correct hidden_bitv field to update_params.

</ul>

<li> 5/8/02
<ul>

<li> Jerod Weinman helped fix some bugs in HHMMQ_CPD/maximize_params.

<li> Removed the broken online inference from hmm_inf_engine.
It has been replaced by filter_inf_engine, which can take hmm_inf_engine
as an argument.

<li> Changed the graph visualization function names:
'draw_layout' is now 'draw_graph',
'draw_layout_dbn' is now 'draw_dbn',
'plotgraph' is now 'dag_to_dot',
'plothmm' is now 'hmm_to_dot',
added 'dbn_to_dot',
'mkdot' no longer exists; its functionality has been subsumed by dag_to_dot.
The dot functions now all take optional args in string/value format.
</ul>

<li> 4/1/02
<ul>
<li> Added online inference classes.
See BNT/inference/online and BNT/examples/dynamic/filter1.
This is work in progress.
<li> Renamed cmp_inference to cmp_inference_dbn, and made its
interface and behavior more similar to cmp_inference_static.
<li> Added the field rep_of_eclass to bnet and dbn, to simplify
parameter tying (see ~murphyk/Bayes/param_tieing.html).
<li> Added gmux_CPD (Gaussian multiplexers).
See BNT/examples/dynamic/SLAM/skf_data_assoc_gmux for an example.
<li> Modified the forwards sampling routines.
general/sample_dbn and sample_bnet now take optional arguments as
strings, and can sample with pre-specified evidence.
sample_bnet can only generate a single sample, and it is always a cell
array.
sample_node can only generate a single sample, and it is always a
scalar or vector.
This eliminates the false impression that the function was
ever vectorized (which was only true for tabular_CPDs).
(Calling sample_bnet inside a for-loop is unlikely to be a bottleneck.)
<li> Updated usage.html's description of CPDs (gmux) and inference
(added gibbs_sampling and modified the description of pearl).
<li> Modified BNT/Kalman/kalman_filter\smoother so they now optionally
take an observed input (control) sequence.
Also, optional arguments are now passed as strings.
<li> Removed BNT/examples/static/uci_data to save space.
</ul>

<li> 3/14/02
<ul>
<li> pearl_inf_engine now works for (vector) Gaussian nodes, as well
as discrete. compute_pi has been renamed CPD_to_pi. compute_lambda_msg
has been renamed CPD_to_lambda_msg. These are now implemented for
the discrete_CPD class instead of tabular_CPD. noisyor and
Gaussian have their own private implementations.
Created the examples/static/Belprop subdirectory.
<li> Added examples/dynamic/HHMM/Motif.
<li> Added Matt Brand's entropic prior code.
<li> cmp_inference_static has changed. It no longer returns err. It
can check for convergence. It can accept 'observed'.
</ul>

<li> 3/4/02
<ul>
<li> Fixed the HHMM code. Now BNT/examples/dynamic/HHMM/mk_abcd_hhmm
implements the example in the NIPS paper. See also
Square/sample_square_hhmm_discrete and other files.

<li> Included Bhaskara Marthi's gibbs_sampling_inf_engine. Currently
this only works if all CPDs are tabular and if you call installC.

<li> Modified Kalman/tracking_demo so it calls plotgauss2d instead of
gaussplot.

<li> Included Sonia Leach's speedup of mk_rnd_dag.
My version created all N-choose-K subsets and then picked among them; Sonia's
reorders the possible parents randomly and chooses
the first k. This saves having to enumerate the large number of
possible subsets before picking one.
(A sketch of the idea is shown after this list.)

<li> Eliminated BNT/inference/static/Old, which contained some old
.mexglx files that wasted space.
</ul>

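The following is a minimal sketch of the parent-selection idea described in the
mk_rnd_dag item above; it is not the actual BNT code, and the variable names
(possible_parents, k) are made up for illustration.

<pre>
possible_parents = 1:10;                      % hypothetical candidate parent set
k = 3;                                        % number of parents to pick
perm = randperm(length(possible_parents));    % shuffle the candidates
chosen = possible_parents(perm(1:k));         % keep the first k of the shuffled list
</pre>
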
<li> 2/15/02
<ul>
<li> Removed the netlab directory, since most of it was not being
used, and it took up too much space (the goal is to have BNT.zip be
less than 1.4MB, so it fits on a floppy).
The required files have been copied into netlab2.
</ul>

<li> 2/14/02
<ul>
<li> Shan Huang fixed most (all?) of the bugs in his stable CG code.
scg1-3 now work, but scg_3node and scg_unstable give different
behavior than that reported in the Cowell book.

<li> I changed gaussplot so it plots an ellipse representing the
eigenvectors of the covariance matrix, rather than numerically
evaluating the density and using a contour plot; this
is much faster and gives better pictures. The new function is
called plotgauss2d in BNT/Graphics.

<li> Joni Alon <jalon@cs.bu.edu> fixed some small bugs:
mk_dhmm_obs_lik called forwards with the wrong args, and
add_BNT_to_path should quote filenames with spaces.

<li> I added BNT/stats2/myunidrnd, which is called by learn_struct_mcmc.

<li> I changed BNT/potentials/@dpot/multiply_by_dpot so it now says
Tbig.T(:) = Tbig.T(:) .* Ts(:);
</ul>

<li> 2/6/02
<ul>
<li> Added hierarchical HMMs. See BNT/examples/dynamic/HHMM and
CPDs/@hhmmQ_CPD and @hhmmF_CPD.
<li> sample_dbn can now sample until a certain condition is true.
<li> Sonia Leach fixed learn_struct_mcmc and changed mk_nbrs_of_digraph
so it only returns DAGs.
Click <a href="sonia_mcmc.txt">here</a> for details of her changes.
</ul>

<li> 2/4/02
<ul>
<li> Wei Hu fixed a bug in
jtree_ndx_inf_engine/collect\distribute_evidence.c which failed when
maximize=1.
<li>
I fixed various bugs to do with conditional Gaussians,
so mixexp3 now works (thanks to Gerry Fung <gerry.fung@utoronto.ca>
for spotting the error). Specifically:
changed softmax_CPD/convert_to_pot so it now puts cts nodes in cdom, and no longer inherits
this function from discrete_CPD;
changed root_CPD/convert_to_pot so it puts self in cdom.
</ul>

<li> 1/31/02
<ul>
<li> Fixed log_lik_mhmm (thanks to ling chen <real_lingchen@yahoo.com>
for spotting the typo).
<li> Now many scripts in examples/static call cmp_inference_static.
Also, the SCG scripts have been simplified (but still don't work!).
<li> belprop and belprop_fg enter_evidence now return [engine, ll,
niter], with ll=0, so the order of the arguments is compatible with other engines.
<li> Ensured that all enter_evidence methods support optional
arguments such as 'maximize', even if they ignore them.
<li> Added Wei Hu's potentials/Tables/rep_mult.c, which is used to
totally eliminate all repmats from gaussian_CPD/update_ess.
</ul>

<li> 1/30/02
<ul>
<li> update_ess now takes hidden_bitv instead of hidden_self and
hidden_ps. This allows gaussian_CPD to distinguish hidden discrete and
cts parents. Now learn_params_em, as well as learn_params_dbn_em,
passes in this info, for speed.

<li> gaussian_CPD update_ess is now vectorized for any case where all
the continuous nodes are observed (e.g., Gaussian HMMs, AR-HMMs).

<li> mk_dbn now automatically detects autoregressive nodes.

<li> hmm_inf_engine now uses indexes in marginal_nodes/family for
speed. marginal_nodes can now only handle single nodes.
(SDndx is hard-coded, to avoid the overhead of using marg_ndx,
which is slow because of the case and global statements.)

<li> add_ev_to_dmarginal now retains the domain field.

<li> Wei Hu wrote potentials/Tables/repmat_and_mult.c, which is used to
avoid some of the repmats in gaussian_CPD/update_ess.

<li> installC no longer sets the global USEC, since USEC is set to 0
by add_BNT_to_path, even if the C files have already been compiled
in a previous session. Instead, gaussian_CPD checks to
see if repmat_and_mult exists, and (bat1, chmm1, water1, water2)
check to see if jtree_C_inf_engine/collect_evidence exists.
Note that checking if a file exists is slow, so we do the check
inside the gaussian_CPD constructor, not inside update_ess.
(A sketch of this kind of check is shown after this list.)

<li> uninstallC now deletes both .mex and .dll files, just in case I
accidentally ship a .zip file with binaries. It also deletes mex
files from jtree_C_inf_engine.

<li> Now marginal_family for both jtree_limid_inf_engine and
global_joint_inf_engine returns a marginal structure and
potential, as required by solve_limid.
Other engines (e.g., jtree_ndx, hmm) are not required to return a potential.
</ul>

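The following is a minimal sketch of the kind of existence check described in the
installC item above; it is not the actual gaussian_CPD code, and the field name
use_mex is made up for illustration.

<pre>
% Done once, in the constructor, because calling exist() is slow.
% exist(name, 'file') returns 3 when a compiled MEX-file is on the path.
CPD.use_mex = (exist('repmat_and_mult', 'file') == 3);
</pre>
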
<li> 1/22/02
<ul>
<li> Added an optional argument to mk_bnet and mk_dbn which lets you
add names to nodes. This uses the new assoc_array class.

<li> Added Yimin Zhang's (unfinished) classification/regression tree
code to CPDs/tree_CPD.

</ul>

<li> 1/14/02
<ul>
<li> Incorporated some of Shan Huang's (still broken) stable CG code.
</ul>

<li> 1/9/02
<ul>
<li> Yimin Zhang vectorized @discrete_CPD/prob_node, which speeds up
structure learning considerably. I fixed this to handle softmax CPDs.

<li> Shan Huang changed the stable conditional Gaussian code to handle
vector-valued nodes, but it is buggy.

<li> I vectorized @gaussian_CPD/update_ess for a special case.

<li> Removed denom=min(1, ... Z) from gaussian_CPD/maximize_params
(added to cope with negative temperature for the entropic prior), which
gave wrong results on mhmm1.
</ul>

<li> 1/7/02

<ul>
<li> Removed the 'xo' typo from mk_qmr_bnet.

<li> convert_dbn_CPDs_to_tables has been vectorized; it is now
substantially faster to compute the conditional likelihood for long sequences.

<li> Simplified the constructors for tabular_CPD and gaussian_CPD, so they
now both only take the form CPD(bnet, i, ...) for named arguments;
the CPD('self', i, ...) format is gone. Modified mk_fgraph_given_ev
to use mk_isolated_tabular_CPD instead.

<li> Added an entropic prior to tabular and Gaussian nodes.
For tabular_CPD, changed the names of the constructor arguments to
distinguish Dirichlet and entropic priors. In particular,
tabular_CPD(bnet, i, 'prior', 2) is now
tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_weight', 2).

<li> Added deterministic annealing to learn_params_dbn_em for use with
entropic priors. The old format learn(engine, cases, max_iter) has
been replaced by learn(engine, cases, 'max_iter', max_iter).

<li> Changed examples/dynamic/bat1 and kjaerulff1, since the default
equivalence classes have changed from untied to tied.
</ul>

<li> 12/30/01
<ul>
<li> The DBN default equivalence classes for slice 2 have changed, so that
parameters are now tied for nodes with 'equivalent' parents in slices
1 and 2 (e.g., observed leaf nodes). This essentially makes passing in
the eclass arguments redundant (hooray!).
</ul>

<li> 12/20/01
<ul>
<li> <b>Released version 4</b>.
Version 4 is considered a major new release
since it is not completely backwards compatible with V3.
Observed nodes are now specified when the bnet/dbn is created,
not when the engine is created. This changes the interface to many of
the engines, making the code no longer backwards compatible.
Support for non-named optional arguments (BNT2 style) has also
been removed; hence mk_dbn etc. require arguments to be passed by name.

<li> Ilya Shpitser's C code for triangulation now compiles under
Windows as well as Unix, thanks to Wei Hu.

<li> All the ndx engines have been combined, and now take an optional
argument specifying what kind of index to use.

<li> learn_params_dbn_em is now more efficient:
@tabular_CPD/update_ess for nodes whose families
are hidden does not need to call add_evidence_to_dmarginal, which
is slow.

<li> Wei Hu fixed a bug in jtree_ndxD, so now the Matlab and C versions
both work.

<li> dhmm_inf_engine replaces hmm_inf_engine, since the former can
handle any kind of topology and is slightly more efficient. dhmm is
extended to handle Gaussian, as well as discrete,
observed nodes. The new hmm_inf_engine no longer supports online
inference (which was broken anyway).

<li> Added an autoregressive HMM special case to hmm_inf_engine for
speed.

<li> jtree_ndxSD_dbn_inf_engine now computes the likelihood of the
evidence in a vectorized manner, where possible, just like
hmm_inf_engine.

<li> Added mk_limid, and hence simplified mk_bnet and mk_dbn.

<li> Gaussian_CPD now uses a 0.01*I prior on the covariance matrix by
default. To do ML estimation, set 'cov_prior_weight' to 0.

<li> Gaussian_CPD and tabular_CPD
optional binary arguments are now set using 0/1 rather than 'no'/'yes'.

<li> Removed Shan Huang's PDAG and decomposable graph code, which will
be put in a separate structure learning library.
</ul>

<li> 12/11/01
<ul>
<li> Wei Hu fixed jtree_ndx*_dbn_inf_engine and marg_table.c.

<li> Shan Huang contributed his implementation of the stable conditional
Gaussian code (Lauritzen 1999), and methods to search through the
space of PDAGs (Markov equivalent DAGs) and undirected decomposable
graphs. The latter is still under development.
</ul>

<li> 12/10/01
<ul>
<li> Included Wei Hu's new versions of the ndx* routines, which use
integers instead of doubles. The new versions are about 5 times faster
in C. In general, ndxSD is the best choice.

<li> Fixed misc/add_ev_to_dmarginal so it works with the ndx routines
in bat1.

<li> Added calc_mpe_dbn to do Viterbi parsing.

<li> Updated dhmm_inf_engine so it computes marginals.
</ul>

<li> 11/23/01
<ul>
<li> learn_params now does MAP estimation (i.e., uses a Dirichlet prior,
if defined). Thanks to Simon Keizer skeizer@cs.utwente.nl for spotting
this.
<li> Changed plotgraph so it calls ghostview with the output of dotty,
instead of converting from .ps to .tif. The resulting image is much
easier to read.
<li> Fixed cgpot/multiply_by_pots.m.
<li> Wei Hu fixed ind2subv.c.
<li> Changed the arguments to compute_joint_pot.
</ul>

<li> 11/1/01
<ul>
<li> Changed sparse to dense in @dpot/multiply_pots, because sparse
arrays apparently cause a bug in the NT version of Matlab.

<li> Fixed the bug in gaussian_CPD/log_prob_node.m which
incorrectly called the vectorized gaussian_prob with different means
when there were continuous parents and more than one case.
(Thanks to Dave Andre for finding this.)

<li> Fixed the bug in root_CPD/convert_to_pot which did not check for
pot_type='g'.
(Thanks to Dave Andre for finding this.)

<li> Changed calc_mpe and calc_mpe_global so they now return a cell array.

<li> Combined pearl and loopy_pearl into a single inference engine
called 'pearl_inf_engine', which now takes optional arguments passed
in using the name/value pair syntax.
marginal_nodes/family now takes the optional add_ev argument (same as
jtree), which is the opposite of the previous shrink argument.

<li> Created pearl_unrolled_dbn_inf_engine and "resurrected"
pearl_dbn_inf_engine in a simplified (but still broken!) form.

<li> Wei Hu fixed the bug in ind2subv.c, so now ndxSD works.
He also made C versions of ndxSD and ndxB, and added (the unfinished) ndxD.

</ul>

<li> 10/20/01

<ul>
<li> Removed the use_ndx option from jtree_inf,
and created 2 new inference engines: jtree_ndxSD_inf_engine and
jtree_ndxB_inf_engine.
The former stores 2 sets of indices for the small and difference
domains; the latter stores 1 set of indices for the big domain.
In Matlab, the ndxB version is often significantly faster than ndxSD
and regular jtree, except when the clique size is large.
When compiled to C, the difference between ndxB and ndxSD (in terms of
speed) vanishes; again, both are faster than compiled jtree, except
when the clique size is large.
Note: ndxSD currently has a bug in it, so it gives the wrong results!
(The DBN analogs are jtree_dbn_ndxSD_inf_engine and
jtree_dbn_ndxB_inf_engine.)

<li> Removed duplicate files from the HMM and Kalman subdirectories.
E.g., normalise is now only in BNT/misc, so when compiled to C, it
masks the unique copy of the Matlab version.
</ul>

<li> 10/17/01
<ul>
<li> Fixed bugs introduced on 10/15:
renamed extract_gaussian_CPD_params_given_ev_on_dps.m to
gaussian_CPD_params_given_dps.m, since Matlab can't cope with such long
names (this caused cg1 to fail); fixed a bug in
gaussian_CPD/convert_to_pot, which now calls convert_to_table in the
discrete case.

<li> Fixed a bug in bk_inf_engine/marginal_nodes.
The test 'if nodes < ss' is now
'if nodes <= ss' (bug fix due to Stephen seg_ma@hotmail.com).

<li> Simplified uninstallC.
</ul>

<li> 10/15/01
<ul>

<li> Added the use_ndx option to jtree_inf and jtree_dbn_inf.
This pre-computes indices for multiplying, dividing and marginalizing
discrete potentials.
This is like the old jtree_fast_inf_engine, but we use an extra level
of indirection to reduce the number of indices needed (see the
uid_generator object).
Sometimes this is faster than the original way...
This is work in progress.

<li> The constructor for dpot no longer calls myreshape, which is very
slow.
But new dpots still must call myones.
Hence discrete potentials are only sometimes 1D vectors (but should
always be thought of as multi-D arrays). This is work in progress.
</ul>

<li> 10/6/01
<ul>
<li> Fixed jtree_dbn_inf_engine, and added kjaerulff1 to test this.
<li> Added an option to jtree_inf_engine/marginal_nodes to return "full
sized" marginals, even on observed nodes.
<li> Clustered BK in examples/dynamic/bat1 seems to be broken,
so it has been commented out.
BK will be re-implemented on top of jtree_dbn, which should be much more
efficient.
</ul>

<li> 9/25/01
<ul>
<li> jtree_dbn_inf_engine is now more efficient than calling BK with
clusters = exact, since it only uses the interface nodes, instead of
all of them, to maintain the belief state.
<li> Uninstalled the broken C version of strong_elim_order.
<li> Changed the order of arguments to unroll_dbn_topology, so that intra1
is no longer required.
<li> Eliminated jtree_onepass, which can be simulated by calling
collect_evidence on jtree.
<li> online1 is no longer in the test_BNT suite, since there is some
problem with online prediction with mixtures of Gaussians using BK.
This functionality is no longer supported, since doing it properly is
too much work.
</ul>

<li> 9/7/01
<ul>
<li> Added Ilya Shpitser's C triangulation code (43x faster!).
Currently this only compiles under Linux; Windows support is being added.
</ul>

<li> 9/5/01
<ul>
<li> Fixed a typo in CPDs/@tabular_kernel/convert_to_table (thanks,
Philippe!).
<li> Fixed problems with clamping nodes in tabular_CPD, learn_params,
learn_params_tabular, and bayes_update_params. See
BNT/examples/static/learn1 for a demo.
</ul>

<li> 9/3/01
<ul>
<li> Fixed a typo on line 87 of gaussian_CPD which caused an error in cg1.m.
<li> Installed Wei Hu's latest version of jtree_C_inf_engine, which
can now compute marginals on any clique/cluster.
<li> Added Yair Weiss's code to compute the Bethe free energy
approximation to the log likelihood in loopy_pearl (still need to add
this to belprop). The return arguments are now: engine, loglik and
niter, which is different than before.
</ul>

<li> 8/30/01
<ul>
<li> Fixed a bug in BNT/examples/static/id1 which passed a hard-coded
directory name to belprop_inf_engine.

<li> Changed tabular_CPD and gaussian_CPD so they can now be created
without having to pass in a bnet.

<li> Finished mk_fgraph_given_ev. See the fg* files in examples/static
for demos of factor graphs (work in progress).
</ul>

<li> 8/22/01
<ul>

<li> Removed jtree_compiled_inf_engine,
since the C code it generated was so big that it would barf on large
models.

<li> Tidied up the potentials/Tables directory.
Removed mk_marg/mult_ndx.c,
which have been superseded by the much faster mk_marg/mult_index.c
(written by Wei Hu).
Renamed the Matlab versions mk_marginalise/multiply_table_ndx.m
to mk_marg/mult_index.m to be compatible with the C versions.
Note: nobody calls these routines anymore!
(jtree_C_inf_engine/enter_softev.c has them built-in.)
Removed mk_ndx.c, which was only used by jtree_compiled.
Removed mk_cluster_clq_ndx.m, mk_CPD_clq_ndx, and marginalise_table.m,
which were not used.
Moved shrink_obs_dims_in_table.m to misc.

<li> In the potentials/@dpot directory: removed multiply_by_pot_C_old.c.
Now marginalize_pot.c can handle maximization,
and divide_by_pot.c has been implemented.
marginalize/multiply/divide_by_pot.m no longer have useC or genops options.
(To get the C versions, use installC.m.)

<li> Removed useC and genops options from jtree_inf_engine.m.
To use the C versions, install the C code.

<li> Updated BNT/installC.m.

<li> Added fclose to @loopy_pearl_inf/enter_evidence.

<li> Changes to the MPE routines in BNT/general.
The maximize parameter is now specified inside enter_evidence
instead of when the engine is created (see the sketch after this list).
Renamed calc_mpe_given_inf_engine to just calc_mpe.
Added Ron Zohar's optional fix to handle the case of ties.
Now returns the log-likelihood instead of the likelihood.
Added calc_mpe_global.
Removed references to genops in calc_mpe_bucket.m.
The test file is now called mpe1.m.

<li> For DBN inference, the filter argument is now passed by name,
as is maximize. This is NOT BACKWARDS COMPATIBLE.

<li> Removed @loopy_dbn_inf_engine, which was too complicated.
In the future, a new version, which applies static loopy to the
unrolled DBN, will be provided.

<li> The discrete_CPD class now contains the family sizes and supports the
dom_sizes method. This is because it could not access the child field
CPD.sizes, and mysize(CPT) may give the wrong answer.

<li> Removed all functions of the form CPD_to_xxx, where xxx = dpot, cpot,
cgpot, table, tables. These have been replaced by convert_to_pot,
which takes a pot_type argument.
@discrete_CPD calls convert_to_table to implement a default
convert_to_pot.
@discrete_CPD calls CPD_to_CPT to implement a default
convert_to_table.
The convert_to_xxx routines take fewer arguments (no need to pass in
the globals node_sizes and cnodes!).
Eventually, convert_to_xxx will be vectorized, so it will operate on
all nodes in the same equivalence class "simultaneously", which should
be significantly quicker, at least for Gaussians.

<li> Changed discrete_CPD/sample_node and prob_node to use
convert_to_table, instead of CPD_to_CPT, so mlp/softmax nodes can
benefit.

<li> Removed @tabular_CPD/compute_lambda_msg_fast and
private/prod_CPD_and_pi_msgs_fast, since no one called them.

<li> Renamed compute_MLE to learn_params,
by analogy with bayes_update_params (also because it may compute a
MAP estimate).

<li> Renamed set_params to set_fields
and get_params to get_field for CPD and dpot objects, to
avoid confusion with the parameters of the CPD.

<li> Removed inference/doc, which has been superseded
by the web page.

<li> Removed inference/static/@stab_cond_gauss_inf_engine, which is
broken, and all references to stable CG.

</ul>
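<p>
To illustrate the MPE change above, here is a minimal sketch. The network,
node numbers and values are made up, and the exact argument list of calc_mpe
is an assumption; consult BNT/general/calc_mpe.m in the current release.
<pre>
% Assume bnet is an existing discrete Bayes net with N nodes.
engine = jtree_inf_engine(bnet);
evidence = cell(1, N);
evidence{3} = 2;                       % hypothetical: node 3 observed in state 2

% The maximize flag is now a name/value argument to enter_evidence,
% not a property fixed when the engine is created:
[engine, loglik] = enter_evidence(engine, evidence, 'maximize', 1);
m = marginal_nodes(engine, 1);         % max-marginal for node 1

% calc_mpe (formerly calc_mpe_given_inf_engine) wraps this up; the signature
% shown here is an assumption:
[mpe, ll] = calc_mpe(engine, evidence);
</pre>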


<li> 8/12/01
<ul>
<li> I removed potentials/@dpot/marginalize_pot_max.
Now marginalize_pot for all potential classes takes an optional third
argument, specifying whether to sum out or max out (see the sketch
after this list).
The dpot class also takes optional arguments specifying whether to
use C or genops (the global variable USE_GENOPS has been eliminated).

<li> potentials/@dpot/marginalize_pot has been simplified by assuming
that 'onto' is always in ascending order (i.e., we remove
Maynard-Reid's patch). This is to keep the code identical to the C
version and the other class implementations.

<li> Added Ron Zohar's general/calc_mpe_bucket function,
and my general/calc_mpe_given_inf_engine, for calculating the most
probable explanation.

<li> Added Wei Hu's jtree_C_inf_engine.
enter_softev.c is about 2 times faster than enter_soft_evidence.m.

<li> Added the latest version of jtree_compiled_inf_engine by Wei Hu.
The 'C' ndx_method now calls potentials/Tables/mk_marg/mult_index,
and the 'oldC' ndx_method calls potentials/Tables/mk_marg/mult_ndx.

<li> Added potentials/@dpot/marginalize_pot_C.c and
multiply_by_pot_C.c by Wei Hu.
These can be called by setting the 'useC' argument in
jtree_inf_engine.

<li> Added BNT/installC.m to compile all the mex files.

<li> Renamed prob_fully_instantiated_bnet to log_lik_complete.

<li> Added Shan Huang's unfinished stable conditional Gaussian
inference routines.
</ul>
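<p>
A sketch of the new marginalize_pot interface described above. Only the
optional third (maximize) argument is the point here; the dpot constructor
call is written from memory and may differ slightly from the shipped code.
<pre>
bigpot = dpot([1 2 3], [2 2 2], rand(2,2,2));   % a discrete potential over nodes 1,2,3
smallpot_sum = marginalize_pot(bigpot, [1 3]);           % default: sum out node 2
maximize = 1;
smallpot_max = marginalize_pot(bigpot, [1 3], maximize); % max out node 2 instead
</pre>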


<li> 7/13/01
<ul>
<li> Added the latest version of jtree_compiled_inf_engine by Wei Hu.
<li> Added the genops class by Doug Schwarz (see
BNT/genopsfun/README). This provides a 1-2x speed-up of
potentials/@dpot/multiply_by_pot and divide_by_pot.
<li> The function BNT/examples/static/qmr_compiled compares the
performance gains of these new functions.
</ul>

<li> 7/6/01
<ul>
<li> Made bk_inf_engine use the name/value argument syntax. This can
now do max-product (Viterbi) as well as sum-product
(forward-backward); see the sketch after this list.
<li> Changed examples/static/mfa1 to use the new name/value argument
syntax.
</ul>
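<p>
A sketch of the bk_inf_engine name/value syntax mentioned above. The
'clusters' values and the use of 'maximize' inside enter_evidence are
assumptions; consult bk_inf_engine.m for the exact option names.
<pre>
% Assume bnet is a DBN with ss nodes per slice, observed for T time slices.
engine = bk_inf_engine(bnet, 'clusters', 'exact');    % exact (jtree-equivalent) BK
% engine = bk_inf_engine(bnet, 'clusters', 'ff');     % or fully factorized

evidence = cell(ss, T);                 % fill in the observed cells as usual
[engine, loglik] = enter_evidence(engine, evidence, 'maximize', 1);  % Viterbi-style
</pre>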


<li> 6/28/01

<ul>

<li> <b>Released version 3</b>.
Version 3 is considered a major new release
since it is not completely backwards compatible with V2.
V3 supports decision and utility nodes, loopy belief propagation on
general graphs (including undirected), structure learning for non-tabular nodes,
a simplified way of handling optional
arguments to functions,
and many other features, which are described below.
In addition, the documentation has been substantially rewritten.

<li> The following functions can now take optional arguments specified
as name/value pairs, instead of passing arguments in a fixed order:
mk_bnet, jtree_inf_engine, tabular_CPD, gaussian_CPD, softmax_CPD, mlp_CPD,
enter_evidence (see the sketch after this list).
This is very helpful if you want to use default values for most parameters.
The functions remain backwards compatible with BNT2.

<li> dsoftmax_CPD has been renamed softmax_CPD, replacing the older
version of softmax. The directory netlab2 has been updated, and
contains weighted versions of some of the learning routines in netlab.
(This code is still being developed by P. Brutti.)

<li> The "fast" versions of the inference engines, which generated
Matlab code, have been removed.
@jtree_compiled_inf_engine now generates C code.
(This feature is currently being developed by Wei Hu of Intel (China),
and is not yet ready for public use.)

<li> CPD_to_dpot, CPD_to_cpot, CPD_to_cgpot and CPD_to_upot
are in the process of being replaced by convert_to_pot.

<li> determine_pot_type now takes as arguments (bnet, onodes)
instead of (onodes, cnodes, dag),
so it can detect the presence of utility nodes as well as continuous
nodes.
Hence this function is not backwards compatible with BNT2.

<li> The structure learning code (K2, mcmc) now works with any node
type, not just tabular.
mk_bnets_tabular has been eliminated.
bic_score_family and dirichlet_score_family will be replaced by score_family.
Note: learn_struct_mcmc has a new interface that is not backwards
compatible with BNT2.

<li> update_params_complete has been renamed bayes_update_params.
Also, learn_params_tabular has been replaced by learn_params, which
works for any CPD type.

<li> Added decision/utility nodes.
</ul>
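<p>
A minimal sketch of the name/value syntax referred to above. The network and
the numbers are made up; only the calling style is the point.
<pre>
N = 4;
dag = zeros(N,N); dag(1,[2 3]) = 1; dag(2,4) = 1; dag(3,4) = 1;
node_sizes = 2*ones(1,N);

% optional arguments as name/value pairs instead of a fixed positional list
bnet = mk_bnet(dag, node_sizes, 'discrete', 1:N);
bnet.CPD{1} = tabular_CPD(bnet, 1, 'CPT', [0.5 0.5]);
bnet.CPD{2} = tabular_CPD(bnet, 2);      % random CPT by default
bnet.CPD{3} = tabular_CPD(bnet, 3);
bnet.CPD{4} = tabular_CPD(bnet, 4);

engine = jtree_inf_engine(bnet);
evidence = cell(1, N); evidence{4} = 1;
[engine, loglik] = enter_evidence(engine, evidence);
m = marginal_nodes(engine, 1);
</pre>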


<li> 6/6/01
<ul>
<li> Added soft evidence to jtree_inf_engine (see the sketch after this list).
<li> Changed the documentation slightly (added soft evidence and
parameter tying, and separated parameter and structure learning).
<li> Changed the parameters of determine_pot_type, so it no longer
needs to be passed a DAG argument.
<li> Fixed parameter tying in mk_bnet (the number of CPDs now equals the
number of equivalence classes).
<li> Made learn_struct_mcmc work in Matlab version 5.2 (thanks to
Nimrod Megiddo for finding this bug).
<li> Made 'acyclic.m' work for undirected graphs.
</ul>
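<p>
A sketch of how soft (virtual) evidence is supplied to jtree_inf_engine.
The node numbering is made up; the 'soft' option name is believed correct but
should be checked against the inference section of the manual.
<pre>
evidence = cell(1, N);                 % hard evidence, as before
soft = cell(1, N);
soft{2} = [0.7 0.3];                   % likelihood vector for (binary) node 2
[engine, loglik] = enter_evidence(engine, evidence, 'soft', soft);
m = marginal_nodes(engine, 2);
</pre>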


<li> 5/23/01
<ul>
<li> Added Tamar Kushnir's code for the IC* algorithm
(learn_struct_pdag_ic_star). This learns the
structure of a PDAG, and can identify the presence of latent
variables.

<li> Added Yair Weiss's code for computing the MAP assignment using
junction tree (i.e., a new method called @dpot/marginalize_pot_max
instead of marginalize_pot).

<li> Added @discrete_CPD/prob_node in addition to log_prob_node to handle
deterministic CPDs.
</ul>


<li> 5/12/01
<ul>
<li> Pierpaolo Brutti updated his mlp and dsoftmax CPD classes,
and improved the HME code.

<li> HME example now added to web page. (The previous example was non-hierarchical.)

<li> Philippe Leray (author of the French documentation for BNT)
pointed out that I was including netlab.tar unnecessarily.
</ul>


<li> 5/4/01
<ul>
<li> Added mlp_CPD which defines a CPD as a (conditional) multi-layer perceptron.
This class was written by Pierpaolo Brutti.

<li> Added hierarchical mixtures of experts demo (due to Pierpaolo Brutti).

<li> Fixed some bugs in dsoftmax_CPD.

<li> Now the BNT distribution includes the whole
<a href="http://www.ncrg.aston.ac.uk/netlab/">Netlab</a> library in a
subdirectory.
It also includes my HMM and Kalman filter toolboxes, instead of just
fragments of them.
</ul>


<li> 5/2/01
<ul>
<li> gaussian_inf_engine/enter_evidence now correctly returns the
loglik, even if all nodes are instantiated (bug fix due to
Michael Robert James).

<li> Added dsoftmax_CPD which allows softmax nodes to have discrete
and continuous parents; the discrete parents act as indices into the
parameters for the continuous node, by analogy with conditional
Gaussian nodes. This class was written by Pierpaolo Brutti.
</ul>


<li> 3/27/01
<ul>
<li> learn_struct_mcmc no longer returns sampled_bitv.
<li> Added mcmc_sample_to_hist to post-process the set of samples.
</ul>

<li> 3/21/01
<ul>
<li> Changed license from UC to GNU Library GPL.

<li> Made all CPD constructors accept 0 arguments, so now bnets can be
saved to and loaded from files.

<li> Improved the implementation of sequential and batch Bayesian
parameter learning for tabular CPDs with completely observed data (see
log_marg_lik_complete and update_params_complete). This code also
handles interventional data.

<li> Added MCMC structure learning for completely observed, discrete,
static BNs.

<li> Started implementing Bayesian estimation of linear Gaussian
nodes. See root_gaussian_CPD and
linear_gaussian_CPD. The old gaussian_CPD class has not been changed.

<li> Renamed evaluate_CPD to log_prob_node, and simplified its
arguments.

<li> Renamed sample_CPD to sample_node, simplified its
arguments, and vectorized it.

<li> Renamed "learn_params_tabular" to "update_params_complete".
This does Bayesian updating, but no longer computes the BIC score.

<li> Made routines for completely observed networks (sampling,
complete data likelihood, etc.) handle cell arrays or regular arrays,
which are faster.
If some nodes are not scalars, or are hidden, you must use cell arrays.
You must convert to a cell array before passing to an inference routine
(see the sketch after this list).

<li> Fixed bug in the gaussian_CPD constructor. When creating a CPD with
more than 1 discrete parent with random parameters, the matrices were
the wrong shape (bug fix due to Xuejing Sun).
</ul>
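<p>
A sketch of the array/cell-array convention described above, assuming all
nodes are discrete scalars and fully observed (the variable names are made up;
num2cell is standard Matlab):
<pre>
% data is an N x ncases numeric array of fully observed, scalar, discrete values;
% the sampling/complete-data-likelihood routines can use it directly, but an
% inference engine expects a cell array:
ev = num2cell(data(:, 1));                 % one case as an N x 1 cell array
[engine, loglik] = enter_evidence(engine, ev);
</pre>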



<li> 11/24/00
<ul>
<li> Renamed learn_params and learn_params_dbn to learn_params_em/
learn_params_dbn_em. The return arguments are now [bnet, LLtrace,
engine] instead of [engine, LLtrace]; see the sketch after this list.
<li> Added structure learning code for static nets (K2, PC).
<li> Renamed learn_struct_inter_full_obs as learn_struct_dbn_reveal,
and reimplemented it to make it simpler and faster.
<li> Added sequential Bayesian parameter learning (learn_params_tabular).
<li> Major rewrite of the documentation.
</ul>
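<p>
A sketch of the renamed EM call mentioned above. The data layout and the
max_iter argument are assumptions; the output order is the point.
<pre>
ncases = 20;
cases = cell(1, ncases);
for m = 1:ncases
  cases{m} = sample_bnet(bnet);        % each case is an N x 1 cell array
  cases{m}{2} = [];                    % hide node 2 so EM has something to estimate
end
engine = jtree_inf_engine(bnet);
max_iter = 10;
% note the new output order: the updated bnet comes first
[bnet2, LLtrace, engine2] = learn_params_em(engine, cases, max_iter);
</pre>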

<!--
<li> 6/1/00
<ul>
<li> Subtracted 1911 off the counter, so now it counts hits from
5/22/00. (The initial value of 1911 was a conservative lower bound on the number of
hits from the time the page was created.)
</ul>
-->

<li> 5/22/00
<ul>
<li> Added online filtering and prediction.
<li> Added the factored frontier and loopy_dbn algorithms.
<li> Separated the online user manual into two, for static and dynamic
networks.
<!--
<li> Added a counter to the BNT web page, and initialized it to 1911,
which is the number of people who have downloaded my software (BNT and
other toolboxes) since 8/24/98.
-->
<li> Added a counter to the BNT web page.
<!--
Up to this point, 1911 people had downloaded my software (BNT and
other toolboxes) since 8/24/98.
-->
</ul>


<li> 4/27/00
<ul>
<li> Fixed the typo in bat1.m
<li> Added preliminary code for online inference in DBNs
<li> Added coupled HMM example
</ul>

<li> 4/23/00
<ul>
<li> Fixed the bug in the fast inference routines where the indices
are empty (arises in bat1.m).
<li> Sped up marginal_family for the fast engines by precomputing indices.
</ul>

<li> 4/17/00
<ul>
<li> Simplified implementation of BK_inf_engine by using soft evidence.
<li> Added jtree_onepass_inf_engine (which computes a single marginal)
and modified jtree_dbn_fast to use it.
</ul>

<li> 4/14/00
<ul>
<li> Added fast versions of jtree and BK, which are
designed for models where the division into hidden/observed is fixed,
and all hidden variables are discrete. These routines are 2-3 times
faster than their non-fast counterparts.

<li> Added graph drawing code
contributed by Ali Taylan Cemgil from the University of Nijmegen.
</ul>

<li> 4/10/00
<ul>
<li> Distinguished cnodes and cnodes_slice in DBNs so that kalman1
works with BK.
<li> Removed dependence on cellfun (which only exists in Matlab 5.3)
by adding isemptycell. Now the code works in 5.2.
<li> Changed the UC copyright notice.
</ul>



<li> 3/29/00
<ul>
<li><b>Released BNT 2.0</b>, now with objects!
Here are the major changes.

<li> There are now 3 classes of objects in BNT:
Conditional Probability Distributions, potentials (for junction tree),
and inference engines.
Making an inference algorithm (junction tree, sampling, loopy belief
propagation, etc.) an object might seem counter-intuitive, but in
fact turns out to be a good idea, since the code and documentation
can be made modular.
(In Java, each algorithm would be a class that implements the
inferenceEngine interface. Since Matlab doesn't support interfaces,
inferenceEngine is an abstract (virtual) base class.)

<p>
<li>
In version 1, instead of Matlab's built-in objects,
I used structs and a
simulated dispatch mechanism based on the type-tag system in the
classic textbook by Abelson
and Sussman ("Structure and Interpretation of Computer Programs",
MIT Press, 1985). This required editing the dispatcher every time a
new object type was added. It also required unique (and hence long)
names for each method, and allowed the user unrestricted access to
the internal state of objects.

<p>
<li> The Bayes net itself is now a lightweight struct, and can be
used to specify a model independently of the inference algorithm used
to process it.
In version 1, the inference engine was stored inside the Bayes net.

<!--
See the list of <a href="differences2.html">changes from version
1</a>.
-->
</ul>



<li> 11/24/99
<ul>
<li> Added fixed-lag smoothing, online EM and the ability to learn
switching HMMs (POMDPs) to the HMM toolbox.
<li> Renamed the HMM toolbox function 'mk_dhmm_obs_mat' to
'mk_dhmm_obs_lik', and similarly for ghmm and mhmm. Updated references
to these functions in BNT.
<li> Changed the order of return params from kalman_filter to make it
more natural (see the sketch after this list). Updated references to
this function in BNT.
</ul>
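<p>
For reference, the kalman_filter call after this reordering is assumed to look
as follows; see the Kalman toolbox itself for the definitive signature.
<pre>
% Linear dynamical system: x(t+1) = A x(t) + w, y(t) = C x(t) + v,
% with w ~ N(0,Q), v ~ N(0,R); y is the d x T observation sequence.
[xfilt, Vfilt, VVfilt, loglik] = kalman_filter(y, A, C, Q, R, init_x, init_V);
</pre>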


<li>10/27/99
<ul>
<li>Fixed line 42 of potential/cg/marginalize_cgpot and lines 32-39 of bnet/add_evidence_to_marginal
(thanks to Rainer Deventer for spotting these bugs!)
</ul>


<li>10/21/99
<ul>
<li>Completely changed the blockmatrix class to make its semantics
more sensible. The constructor is not backwards compatible!
</ul>

<li>10/6/99
<ul>
<li>Fixed all_vals = cat(1, vals{:}) in user/enter_evidence.
<li>Vectorized ind2subv and subv2ind and removed the C versions.
<li>Made mk_CPT_from_mux_node much faster by having it call the vectorized
ind2subv.
<li>Added Sondhauss's bug fix to line 68 of bnet/add_evidence_to_marginal.
<li>In dbn/update_belief_state, instead of adding eps to the likelihood if it is 0,
we leave it at 0, and set the scale factor to 0 instead of dividing.
</ul>

<li>8/19/99
<ul>
<li>Added Ghahramani's mfa code to the examples directory to compare with
fa1, which uses BNT.
<li>Changed all references of assoc to stringmatch (e.g., in
examples/mk_bat_topology).
</ul>

<li>June 1999
<ul>
<li><b>Released BNT 1.0</b> on the web.
</ul>


<li>August 1998
<ul>
<li><b>Released BNT 0.0</b> via email.
</ul>


<li>October 1997
<ul>
<li>First started working on the Matlab version of BNT.
</ul>

<li>Summer 1997
<ul>
<li> First started working on the C++ version of BNT while working at DEC (now Compaq) CRL.
</ul>

<!--
<li>Fall 1996
<ul>
<li>Made a C++ program that generates DBN-specific C++ code
for inference using the frontier algorithm.
</ul>

<li>Fall 1995
<ul>
<li>Arrive in Berkeley, and first learn about Bayes Nets. Start using
Geoff Zweig's C++ code.
</ul>
-->

</ul>