- BNT supports many types of conditional probability distributions (nodes),
  and it is easy to add more (a usage sketch follows this list):

  - Tabular (multinomial)
  - Gaussian
  - Softmax (logistic/sigmoid)
  - Multi-layer perceptron (neural network)
  - Noisy-or
  - Deterministic

- BNT supports decision and utility nodes, as well as chance nodes,
  i.e., influence diagrams as well as Bayes nets.

- BNT supports static and dynamic BNs (useful for modelling dynamical systems
  and sequence data); a DBN construction sketch follows.

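A minimal DBN sketch, assuming the standard HMM-as-DBN construction
(node 1 = hidden state Q, node 2 = observation Y); the random initial
parameters are placeholders.

    % Intra-slice edge Q -> Y; inter-slice edge Q(t-1) -> Q(t).
    intra = zeros(2); intra(1,2) = 1;
    inter = zeros(2); inter(1,1) = 1;
    Q = 2; O = 3;                            % 2 hidden states, 3 obs symbols
    bnet = mk_dbn(intra, inter, [Q O], 'discrete', 1:2, 'observed', 2);

    % With the default parameter tying there are 3 equivalence classes:
    prior0 = normalise(rand(Q, 1));
    transmat0 = mk_stochastic(rand(Q, Q));
    obsmat0 = mk_stochastic(rand(Q, O));
    bnet.CPD{1} = tabular_CPD(bnet, 1, prior0);    % pi(Q1)
    bnet.CPD{2} = tabular_CPD(bnet, 2, obsmat0);   % P(Y|Q)
    bnet.CPD{3} = tabular_CPD(bnet, 3, transmat0); % P(Q(t)|Q(t-1))
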
- BNT supports many different inference algorithms,
Daniel@0: and it is easy to add more.
Daniel@0:
Daniel@0:
Daniel@0: - Exact inference for static BNs:
Daniel@0:
Daniel@0: - junction tree
Daniel@0:
- variable elimination
Daniel@0:
- brute force enumeration (for discrete nets)
Daniel@0:
- linear algebra (for Gaussian nets)
Daniel@0:
- Pearl's algorithm (for polytrees)
Daniel@0:
- quickscore (for QMR)
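The exact engines share one query API, so they can be swapped freely. A
sketch, assuming a net bnet like the one constructed above, observing node 2
and querying node 1 (a quickscore engine for QMR nets is built similarly,
with a different constructor):

    engine = jtree_inf_engine(bnet);           % junction tree
    % engine = var_elim_inf_engine(bnet);      % variable elimination
    % engine = enumerative_inf_engine(bnet);   % brute-force enumeration
    % engine = pearl_inf_engine(bnet);         % Pearl's algorithm (polytrees)
    % engine = gaussian_inf_engine(bnet);      % linear algebra (Gaussian nets)

    N = length(bnet.dag);
    evidence = cell(1, N);
    evidence{2} = 2;                           % observe node 2 taking value 2
    [engine, loglik] = enter_evidence(engine, evidence);
    marg = marginal_nodes(engine, 1);          % posterior marginal on node 1
    disp(marg.T)
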
  - Approximate inference for static BNs (see the sketch below):

    - likelihood weighting
    - Gibbs sampling
    - loopy belief propagation

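The approximate engines plug into the same enter_evidence/marginal_nodes API
as the exact ones; constructor options (e.g., number of samples) are omitted
here and worth checking in each engine's help text.

    engine = likelihood_weighting_inf_engine(bnet);
    % engine = gibbs_sampling_inf_engine(bnet);
    % engine = belprop_inf_engine(bnet);       % loopy belief propagation
    engine = enter_evidence(engine, evidence);
    marg = marginal_nodes(engine, 1);
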
  - Exact inference for DBNs (see the smoothing sketch below):

    - junction tree
    - frontier algorithm
    - forwards-backwards (for HMMs)
    - Kalman-RTS (for LDSs)

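A smoothing sketch for a DBN such as the HMM above; the 2TBN engine names
follow the toolbox's online/offline inference framework and should be
verified against your copy.

    T = 10;
    ev = sample_dbn(bnet, T);                % draw a length-T sequence
    evidence = cell(2, T);
    evidence(2, :) = ev(2, :);               % observe node 2 in every slice
    engine = smoother_engine(jtree_2TBN_inf_engine(bnet));
    % engine = smoother_engine(hmm_2TBN_inf_engine(bnet)); % forwards-backwards
    engine = enter_evidence(engine, evidence);
    m = marginal_nodes(engine, 1, 5);        % P(Q(5) | y(1:T))
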
  - Approximate inference for DBNs (see the sketch below):

    - Boyen-Koller
    - factored-frontier/loopy belief propagation

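The Boyen-Koller engine drops into the same DBN framework; the 'clusters'
argument (here 'ff' for the fully factorized approximation) is the option
name I recall from the documentation and should be double-checked.

    engine = bk_inf_engine(bnet, 'clusters', 'ff');       % fully factorized
    % engine = bk_inf_engine(bnet, 'clusters', {1, 2});   % user-chosen clusters
    engine = enter_evidence(engine, evidence);
    m = marginal_nodes(engine, 1, 5);
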
- BNT supports several methods for parameter learning,
  and it is easy to add more (an EM sketch follows this list):

  - Batch MLE/MAP parameter learning using EM.
    (Each node type has its own M method, e.g., softmax nodes use IRLS,
    and each inference engine has its own E method, so the code is fully modular.)
  - Sequential/batch Bayesian parameter learning (for fully observed tabular
    nodes only).

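An EM sketch in the style of the toolbox's examples: training cases are
stored one per column of a cell array, with hidden values left empty.

    nsamples = 50;
    samples = cell(N, nsamples);
    for i = 1:nsamples
      samples(:, i) = sample_bnet(bnet);     % draw a fully observed case
    end
    samples(1, :) = {[]};                    % hide node 1 in every case

    engine = jtree_inf_engine(bnet);         % E step uses any inference engine
    max_iter = 10;
    [bnet2, LLtrace] = learn_params_em(engine, samples, max_iter);

    % Fully observed data: bnet2 = learn_params(bnet, samples);
    % Bayesian updating:   bnet2 = bayes_update_params(bnet, samples);
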
- BNT supports several methods for regularization,
  and it is easy to add more (a sketch follows this list):

  - Any node can have its parameters clamped (made non-adjustable).
  - Any set of compatible nodes can have their parameters tied (cf.
    weight sharing in a neural net).
  - Some node types (e.g., tabular) support priors for MAP estimation.
  - Gaussian covariance matrices can be declared full or diagonal, and can
    be tied across states of their discrete parents (if any).

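A sketch of how these options are specified; i, j, k, cpt, and eclass are
placeholders, and the flag names ('clamped', 'prior_type', 'cov_type',
'tied_cov', 'equiv_class') follow the toolbox's documented conventions but
should be verified against your version.

    % Dirichlet prior for MAP estimation of a tabular node i:
    bnet.CPD{i} = tabular_CPD(bnet, i, 'prior_type', 'dirichlet', ...
                              'dirichlet_weight', 1);
    % Clamp node j's parameters so learning leaves them fixed:
    bnet.CPD{j} = tabular_CPD(bnet, j, 'CPT', cpt, 'clamped', 1);
    % Diagonal Gaussian covariance, tied across discrete parent states:
    bnet.CPD{k} = gaussian_CPD(bnet, k, 'cov_type', 'diag', 'tied_cov', 1);
    % Parameter tying across nodes is declared via equivalence classes:
    bnet = mk_bnet(dag, ns, 'equiv_class', eclass);
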
- BNT supports several methods for structure learning,
  and it is easy to add more (a sketch follows this list):

  - Bayesian structure learning,
    using MCMC or local search (for fully observed tabular nodes only).
  - Constraint-based structure learning (IC/PC and IC*/FCI).

- The source code is extensively documented, object-oriented, and free, making it
Daniel@0: an excellent tool for teaching, research and rapid prototyping.