<h2><a name="features">Major features</a></h2>
<ul>
<li> BNT supports many types of
<b>conditional probability distributions</b> (nodes),
and it is easy to add more (see the sketch after this list).
<ul>
<li>Tabular (multinomial)
<li>Gaussian
<li>Softmax (logistic/sigmoid)
<li>Multi-layer perceptron (neural network)
<li>Noisy-or
<li>Deterministic
</ul>
<p>
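As a concrete illustration, here is a minimal sketch of how node types are
assigned, using the standard water-sprinkler toy network in the style of the
BNT usage documentation. All four nodes are tabular here; a continuous node
would use gaussian_CPD or softmax_CPD instead, together with the 'discrete'
argument to mk_bnet:
<pre>
% Water-sprinkler network: Cloudy -> Sprinkler, Cloudy -> Rain,
% Sprinkler -> WetGrass, Rain -> WetGrass.
N = 4;
dag = zeros(N,N);
C = 1; S = 2; R = 3; W = 4;
dag(C,[R S]) = 1;
dag(R,W) = 1;
dag(S,W) = 1;
node_sizes = 2*ones(1,N);        % every node is binary
bnet = mk_bnet(dag, node_sizes);

% Each node gets its own CPD object; here all are tabular (multinomial).
bnet.CPD{C} = tabular_CPD(bnet, C, [0.5 0.5]);
bnet.CPD{R} = tabular_CPD(bnet, R, [0.8 0.2 0.2 0.8]);
bnet.CPD{S} = tabular_CPD(bnet, S, [0.5 0.9 0.5 0.1]);
bnet.CPD{W} = tabular_CPD(bnet, W, [1 0.1 0.1 0.01 0 0.9 0.9 0.99]);
</pre>
<p>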
<li> BNT supports <b>decision and utility nodes</b>, as well as chance
nodes, i.e., influence diagrams as well as Bayes nets (see the sketch below).
<p>
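A hedged sketch of how an influence diagram is declared: the function names
(mk_limid, tabular_decision_node, tabular_utility_node, jtree_limid_inf_engine,
solve_limid) follow BNT's LIMID examples, but the network and its numbers are
purely illustrative and the exact argument lists are worth checking against
the toolbox help text:
<pre>
% Illustrative 3-node influence diagram: chance node X, decision D,
% and a utility U that depends on both.  (All values are made up.)
N = 3;
dag = zeros(N,N);
X = 1; D = 2; U = 3;
dag(X,U) = 1;  dag(D,U) = 1;
ns = [2 2 1];                        % utility nodes have size 1
limid = mk_limid(dag, ns, 'chance', X, 'decision', D, 'utility', U);
limid.CPD{X} = tabular_CPD(limid, X, [0.7 0.3]);
limid.CPD{D} = tabular_decision_node(limid, D);          % policy to be optimized
limid.CPD{U} = tabular_utility_node(limid, U, [10 -5 0 2]);
engine = jtree_limid_inf_engine(limid);
[strategy, MEU] = solve_limid(engine);   % optimal policy and max expected utility
</pre>
<p>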
<li> BNT supports static and dynamic BNs (DBNs), useful for modelling dynamical systems
and sequence data (see the sketch below).
<p>
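As a sketch (following BNT's DBN examples), an HMM is the simplest DBN: one
hidden node Q and one observed node Y per time slice, with the slice-2 CPDs
reused for all later slices. The random parameters here are purely illustrative:
<pre>
intra = zeros(2);  intra(1,2) = 1;    % Q -> Y within a slice
inter = zeros(2);  inter(1,1) = 1;    % Q(t) -> Q(t+1) between slices
Q = 2;  O = 3;                        % 2 hidden states, 3 output symbols
dbn = mk_dbn(intra, inter, [Q O], 'observed', 2);
dbn.CPD{1} = tabular_CPD(dbn, 1, normalise(rand(Q,1)));      % P(Q1)
dbn.CPD{2} = tabular_CPD(dbn, 2, mk_stochastic(rand(Q,O)));  % P(Y|Q)
dbn.CPD{3} = tabular_CPD(dbn, 3, mk_stochastic(rand(Q,Q)));  % P(Q(t+1)|Q(t))
</pre>
<p>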
<li> BNT supports many different <b>inference algorithms</b>,
and it is easy to add more (see the sketch after this list).

<ul>
<li> Exact inference for static BNs:
<ul>
<li>junction tree
<li>variable elimination
<li>brute force enumeration (for discrete nets)
<li>linear algebra (for Gaussian nets)
<li>Pearl's algorithm (for polytrees)
<li>quickscore (for QMR)
</ul>

<p>
<li> Approximate inference for static BNs:
<ul>
<li>likelihood weighting
<li>Gibbs sampling
<li>loopy belief propagation
</ul>

<p>
<li> Exact inference for DBNs:
<ul>
<li>junction tree
<li>frontier algorithm
<li>forwards-backwards (for HMMs)
<li>Kalman filtering/RTS smoothing (for linear dynamical systems, LDSs)
</ul>

<p>
<li> Approximate inference for DBNs:
<ul>
<li>Boyen-Koller
<li>factored-frontier/loopy belief propagation
</ul>

</ul>
<p>
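All inference engines share the same enter_evidence / marginal_nodes interface,
so one algorithm can be swapped for another in a single line. A minimal sketch,
reusing the sprinkler network bnet (and the node indices N, S, W) defined above:
<pre>
engine = jtree_inf_engine(bnet);                   % junction tree (exact)
% engine = var_elim_inf_engine(bnet);              % variable elimination
% engine = likelihood_weighting_inf_engine(bnet);  % sampling (approximate)

evidence = cell(1,N);
evidence{W} = 2;                                % observe WetGrass = true
[engine, loglik] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, S);               % P(Sprinkler | WetGrass = true)
disp(marg.T);
</pre>
DBN engines such as jtree_dbn_inf_engine and bk_inf_engine (Boyen-Koller)
follow the same pattern, with evidence supplied per time slice.
<p>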
<li>
BNT supports several methods for <b>parameter learning</b>,
and it is easy to add more (see the sketch after this list).
<ul>

<li> Batch MLE/MAP parameter learning using EM.
(Each node type has its own M method, e.g., softmax nodes use IRLS,<br>
and each inference engine has its own E method, so the code is fully modular.)

<li> Sequential/batch Bayesian parameter learning (for fully observed tabular nodes only).
</ul>
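<p>
A minimal sketch of EM learning, again on the sprinkler network from above.
The training cases are sampled from the net itself, and Rain is hidden so that
EM has missing data to work on; any inference engine could be plugged in:
<pre>
nsamples = 50;
cases = cell(1, nsamples);
for l = 1:nsamples
  cases{l} = sample_bnet(bnet);   % one fully observed case, as a cell array
  cases{l}{R} = [];               % hide Rain to create missing data
end
engine = jtree_inf_engine(bnet);
max_iter = 10;
[bnet2, LLtrace] = learn_params_em(engine, cases, max_iter);
</pre>
With fully observed data, learn_params gives the MLE directly, and
bayes_update_params performs Bayesian updating of tabular CPDs.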

<p>
<li>
BNT supports several methods for <b>regularization</b>,
and it is easy to add more (see the sketch after this list).
<ul>
<li> Any node can have its parameters clamped (made non-adjustable).
<li> Any set of compatible nodes can have their parameters tied (cf.
weight sharing in a neural net).
<li> Some node types (e.g., tabular) support priors for MAP estimation.
<li> Gaussian covariance matrices can be declared full or diagonal, and can
be tied across states of their discrete parents (if any).
</ul>
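<p>
These options are passed to the CPD constructors. A sketch under the assumption
that the option names ('clamped', 'prior_type', 'dirichlet_weight', 'cov_type',
'tied_cov') match the constructors' help text, which is worth double-checking:
<pre>
% Clamp a CPT so learning leaves it fixed, and give another node a Dirichlet prior:
bnet.CPD{C} = tabular_CPD(bnet, C, 'CPT', [0.5 0.5], 'clamped', 1);
bnet.CPD{S} = tabular_CPD(bnet, S, 'prior_type', 'dirichlet', 'dirichlet_weight', 1);

% For a Gaussian node i in some other net: diagonal covariance, tied across
% the states of its discrete parents (the node index i is hypothetical here):
% bnet.CPD{i} = gaussian_CPD(bnet, i, 'cov_type', 'diag', 'tied_cov', 1);
</pre>
Tying parameters across nodes is done by assigning them to the same equivalence
class when the network is created (the 'equiv_class' argument to mk_bnet and mk_dbn).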

<p>
<li>
BNT supports several methods for <b>structure learning</b>,
and it is easy to add more (see the sketch after this list).
<ul>

<li> Bayesian structure learning,
using MCMC or local search (for fully observed tabular nodes only).

<li> Constraint-based structure learning (IC/PC and IC*/FCI).
</ul>
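<p>
A sketch of the batch interfaces, reusing the sprinkler net from above: data
must be a nodes-by-cases matrix of fully observed discrete values. The call
signatures below follow BNT's structure-learning documentation, with most
optional arguments omitted:
<pre>
ncases = 100;
data = zeros(N, ncases);
for m = 1:ncases
  data(:,m) = cell2num(sample_bnet(bnet));
end

order = [C S R W];                           % a topological ordering, required by K2
dag_hat = learn_struct_K2(data, node_sizes, order);

% Bayesian structure learning by MCMC over DAGs:
[sampled_graphs, accept_ratio] = learn_struct_mcmc(data, node_sizes, ...
    'nsamples', 100, 'burnin', 10);
</pre>
Constraint-based learning is provided by learn_struct_pdag_pc (PC) and
learn_struct_pdag_ic_star (IC*/FCI).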

<p>
<li> The source code is extensively documented, object-oriented, and free, making it
an excellent tool for teaching, research, and rapid prototyping.

</ul>