- BNT supports many types of conditional probability distributions (nodes), and it is easy to add more (a construction sketch follows this list):

  - Tabular (multinomial)
  - Gaussian
  - Softmax (logistic/sigmoid)
  - Multi-layer perceptron (neural network)
  - Noisy-or
  - Deterministic
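For example, here is a minimal sketch of mixing two of these CPD types in one model; the node names and parameter values are illustrative, not from the toolbox documentation:

```matlab
% Hypothetical two-node model: discrete class C -> scalar Gaussian child Y.
N = 2; C = 1; Y = 2;
dag = zeros(N,N);
dag(C,Y) = 1;
ns = [2 1];                       % C has 2 states; Y is 1-dimensional
bnet = mk_bnet(dag, ns, 'discrete', [C]);
bnet.CPD{C} = tabular_CPD(bnet, C, 'CPT', [0.5 0.5]);
% One mean and one covariance per state of the discrete parent C
% (the values are illustrative).
bnet.CPD{Y} = gaussian_CPD(bnet, Y, 'mean', [0 10], 'cov', cat(3, 1, 1));
```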
- BNT supports decision and utility nodes, as well as chance nodes, i.e., influence diagrams as well as Bayes nets (a brief sketch follows).
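As a sketch of the influence-diagram (LIMID) support: the constructor names below are the ones used in BNT's LIMID examples, but the graph and the payoff numbers are invented for illustration:

```matlab
% Hypothetical influence diagram: chance node X, decision D, utility U.
N = 3; X = 1; D = 2; U = 3;
dag = zeros(N,N);
dag(X, [D U]) = 1;                % the decision maker observes X
dag(D, U) = 1;                    % the utility depends on X and D
ns = [2 2 1];                     % utility nodes have size 1
limid = mk_limid(dag, ns, 'chance', [X], 'decision', [D], 'utility', [U]);
limid.CPD{X} = tabular_CPD(limid, X, 'CPT', [0.7 0.3]);
limid.CPD{D} = tabular_decision_node(limid, D);
limid.CPD{U} = tabular_utility_node(limid, U, [10 -5 0 1]); % illustrative payoffs
engine = jtree_limid_inf_engine(limid);
[strategy, MEU] = solve_limid(engine);    % optimal policy and expected utility
```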
- BNT supports static and dynamic BNs (useful for modelling dynamical systems and sequence data), as illustrated below.
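For instance, an HMM can be represented as the simplest possible DBN. This fragment follows the pattern used in BNT's DBN examples; the state and symbol counts are illustrative:

```matlab
% An HMM as a 2-slice DBN: hidden Q -> observed Y within each slice,
% and Q(t-1) -> Q(t) across slices.
intra = zeros(2); intra(1,2) = 1; % within a slice: Q -> Y
inter = zeros(2); inter(1,1) = 1; % between slices: Q -> Q
Q = 4; O = 10;                    % illustrative state and symbol counts
ns = [Q O];
bnet = mk_dbn(intra, inter, ns, 'discrete', [1 2], 'observed', 2);
% Three parameter sets: prior P(Q1), emission P(Y|Q), transition P(Qt|Qt-1);
% by default the slice-2 emission is tied to the slice-1 emission.
bnet.CPD{1} = tabular_CPD(bnet, 1);   % random initial values
bnet.CPD{2} = tabular_CPD(bnet, 2);
bnet.CPD{3} = tabular_CPD(bnet, 3);
```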
- BNT supports many different inference algorithms, and it is easy to add more.
  - Exact inference for static BNs (a junction tree sketch follows this sub-list):

    - junction tree
    - variable elimination
    - brute force enumeration (for discrete nets)
    - linear algebra (for Gaussian nets)
    - Pearl's algorithm (for polytrees)
    - quickscore (for QMR)
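All of these engines share a common interface. Here is the junction tree engine applied to the classic sprinkler network, following the example in the BNT documentation:

```matlab
% Sprinkler network: Cloudy -> {Sprinkler, Rain}; {Sprinkler, Rain} -> WetGrass.
N = 4; C = 1; S = 2; R = 3; W = 4;
dag = zeros(N,N);
dag(C,[R S]) = 1; dag(R,W) = 1; dag(S,W) = 1;
bnet = mk_bnet(dag, 2*ones(1,N), 'discrete', 1:N);
bnet.CPD{C} = tabular_CPD(bnet, C, 'CPT', [0.5 0.5]);
bnet.CPD{R} = tabular_CPD(bnet, R, 'CPT', [0.8 0.2 0.2 0.8]);
bnet.CPD{S} = tabular_CPD(bnet, S, 'CPT', [0.5 0.9 0.5 0.1]);
bnet.CPD{W} = tabular_CPD(bnet, W, 'CPT', [1 0.1 0.1 0.01 0 0.9 0.9 0.99]);

engine = jtree_inf_engine(bnet);
evidence = cell(1,N);
evidence{W} = 2;                           % observe that the grass is wet
[engine, loglik] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, S);
marg.T                                     % P(Sprinkler | WetGrass = true)
```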
  - Approximate inference for static BNs (a sketch follows this sub-list):

    - likelihood weighting
    - Gibbs sampling
    - loopy belief propagation
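Because every engine implements the same interface, switching to an approximate method is a one-line change. Continuing the sprinkler sketch above (the constructor name is BNT's; everything else is reused):

```matlab
% Swap the junction tree engine for a sampling-based one; the
% enter_evidence / marginal_nodes calls stay exactly the same.
engine = likelihood_weighting_inf_engine(bnet);
[engine, loglik] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, S);          % now an approximate marginal
```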
  - Exact inference for DBNs (a smoothing sketch follows this sub-list):

    - junction tree
    - frontier algorithm
    - forwards-backwards (for HMMs)
    - Kalman-RTS (for LDSs)
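The DBN engines wrap a 2-slice inference engine. For example, offline smoothing with a junction tree, following the pattern in BNT's DBN documentation and reusing the HMM-style DBN sketched earlier (the sequence length and evidence here are illustrative):

```matlab
% Offline (fixed-interval) smoothing on the HMM-style DBN defined above.
T = 10;                                    % sequence length (illustrative)
engine = smoother_engine(jtree_2TBN_inf_engine(bnet));
evidence = cell(2, T);                     % rows = nodes per slice, cols = time
evidence(2,:) = num2cell(ceil(O*rand(1,T)));  % random observed symbols
engine = enter_evidence(engine, evidence);
m = marginal_nodes(engine, 1, 5);          % P(Q(5) | Y(1:T)); m.T holds the table
```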
  - Approximate inference for DBNs:

    - Boyen-Koller
    - factored frontier/loopy belief propagation
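The approximate DBN engines are drop-in replacements too; e.g., the Boyen-Koller engine on the DBN above (the 'clusters' option name is BNT's; the choice shown, fully factorized, is illustrative):

```matlab
% Boyen-Koller approximate inference: 'ff' requests the fully
% factorized approximation (every node is its own cluster).
engine = bk_inf_engine(bnet, 'clusters', 'ff');
```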
- BNT supports several methods for parameter learning, and it is easy to add more.

  - Batch MLE/MAP parameter learning using EM (a sketch follows this sub-list). Each node type has its own M method (e.g., softmax nodes use IRLS), and each inference engine has its own E method, so the code is fully modular.

  - Sequential/batch Bayesian parameter learning (for fully observed tabular nodes only).
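Concretely, EM learning looks like this, following the pattern in the BNT documentation and continuing the sprinkler example (the data-hiding scheme is illustrative):

```matlab
% Generate complete samples from the network, hide about half of the
% values at random, then fit the parameters of the model by EM.
nsamples = 100;
samples = cell(N, nsamples);
for m = 1:nsamples
  samples(:,m) = sample_bnet(bnet);
end
hide = rand(N, nsamples) > 0.5;
[I, J] = find(hide);
for k = 1:length(I)
  samples{I(k), J(k)} = [];                % [] marks a missing value
end
engine = jtree_inf_engine(bnet);
max_iter = 10;
[bnet2, LLtrace] = learn_params_em(engine, samples, max_iter);
```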
- BNT supports several methods for regularization, and it is easy to add more.

  - Any node can have its parameters clamped (made non-adjustable).

  - Any set of compatible nodes can have their parameters tied (cf. weight sharing in a neural net).

  - Some node types (e.g., tabular) support priors for MAP estimation.

  - Gaussian covariance matrices can be declared full or diagonal, and can be tied across states of their discrete parents (if any). A sketch follows this list.
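For example, these options are set at CPD construction time. Continuing the mixed tabular/Gaussian sketch above ('cov_type' and 'tied_cov' are gaussian_CPD option names from the BNT documentation; the 'clamped' option name for tabular_CPD is our assumption):

```matlab
% Restrict Y's covariance to be diagonal and tie it across the states
% of its discrete parent C.
bnet.CPD{Y} = gaussian_CPD(bnet, Y, 'cov_type', 'diag', 'tied_cov', 1);
% Clamp C's CPT so that parameter learning leaves it untouched
% ('clamped' is assumed to be the relevant tabular_CPD option).
bnet.CPD{C} = tabular_CPD(bnet, C, 'CPT', [0.5 0.5], 'clamped', 1);
```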
- BNT supports several methods for structure learning, and it is easy to add more.

  - Bayesian structure learning, using MCMC or local search (for fully observed tabular nodes only); a sketch follows this sub-list.

  - Constraint-based structure learning (IC/PC and IC*/FCI).
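For instance, given a fully observed discrete data matrix, a greedy K2 search and an MCMC run look like this, following the examples in the BNT documentation and continuing the sprinkler example (the data generation and option values are illustrative):

```matlab
% Collect fully observed discrete data: one row per node, one column
% per case, then search for a structure.
ncases = 200;
data = zeros(N, ncases);
for m = 1:ncases
  data(:,m) = cell2num(sample_bnet(bnet));
end
order = [C S R W];                         % K2 requires a node ordering
dag2 = learn_struct_K2(data, 2*ones(1,N), order, 'max_fan_in', 2);

% Bayesian structure learning via MCMC over DAGs.
[sampled_graphs, accept_ratio] = learn_struct_mcmc(data, 2*ones(1,N), ...
    'burnin', 100);
```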
- The source code is extensively documented, object-oriented, and free, making it an excellent tool for teaching, research, and rapid prototyping.