The following kinds of potentials are supported
- dpot: discrete
- upot: utility
- mpot: Gaussian in moment form
- cpot: Gaussian in canonical form
- cgpot: conditional (mixture) Gaussian, a list of mpots/cpots
- scgpot: stable conditional Gaussian, a list of scgcpots
- scgcpot: just used by scgpot

Many of these are described in the following book

@book{Cowell99,
  author = "R. G. Cowell and A. P. Dawid and S. L. Lauritzen and D. J. Spiegelhalter",
  title = "Probabilistic Networks and Expert Systems",
  year = 1999,
  publisher = "Springer"
}

CPD_to_pot converts P(Z|A,B,...) to phi(A,B,...,Z).

A table is like a dpot, except it is a structure, not an object.
Code that uses tables is faster but less flexible.

-----------

A potential is a joint probability distribution on a set of nodes,
which we call the potential's domain (which is always sorted).
A potential supports the operations of multiplication and
marginalization.

If the nodes are discrete, the potential can be represented as a table
(multi-dimensional array). If the nodes are Gaussian, the potential
can be represented as a quadratic form. If there are both discrete and
Gaussian nodes, we use a table of quadratic forms. For details on the
Gaussian case, see below.

For discrete potentials, the 'sizes' field specifies the number of
values each node in the domain can take on. For continuous potentials,
the 'sizes' field specifies the block size of each node.

If some of the nodes are observed, extra complications arise. We
handle the discrete and continuous cases differently.
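To make the table case concrete, here is a minimal Python/NumPy sketch of multiplication and marginalization for discrete potentials over sorted domains (the toolbox itself is Matlab; the function names below are illustrative, not BNT's API):

```python
import numpy as np

def multiply(dom1, T1, dom2, T2):
    """Multiply potential (dom2, T2) into (dom1, T1), where dom2 is a
    subset of dom1. Domains are sorted lists of node ids; tables have
    one axis per node in their domain."""
    # Broadcast T2 over the axes of dom1 that are absent from dom2.
    shape = [T1.shape[i] if d in dom2 else 1 for i, d in enumerate(dom1)]
    return T1 * T2.reshape(shape)

def marginalize(dom, T, onto):
    """Sum the table down to the sub-domain `onto` (a subset of dom)."""
    axes = tuple(i for i, d in enumerate(dom) if d not in onto)
    return T.sum(axis=axes)

phi = np.arange(6.0).reshape(2, 3)      # potential on domain [1 2], sizes [2 3]
psi = np.array([10.0, 1.0])             # potential on domain [1] alone
prod = multiply([1, 2], phi, [1], psi)  # pointwise product, broadcast over node 2
marg = marginalize([1, 2], prod, [1])   # sum out node 2, leaving a table over node 1
```

An observed discrete node would simply contribute an axis of size 1, as described next.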
Suppose the domain is [X Y], with sizes [6 2], where X is observed to
have value x. In the discrete case, the potential will have many zeros
in it (T(X,:) will be 0 for all X ~= x), which can be inefficient.
Instead, we set sizes to [1 2], to indicate that X has only one
possible value (namely x). For continuous nodes, we set sizes = [0 2],
to indicate that X no longer appears in the mean vector or covariance
matrix (we must avoid 0s in Sigma, lest it be uninvertible). When a
potential is created, we assume the sizes of the nodes have been
adjusted to include the evidence. This is so that the evidence can be
incorporated at the outset, and thereafter the inference algorithms
can ignore it.

------------

A Gaussian potential can be represented in terms of its moment
characteristics (mu, Sigma, logp), or in terms of its canonical
characteristics (g, h, K). Although the moment characteristics are
more familiar, it turns out that the canonical characteristics are
more convenient for the junction tree algorithm, for the same kinds of
reasons why backwards inference in an LDS uses the information form of
the Kalman filter (see Murphy (1998a) for a discussion).

When working with *conditional* Gaussian potentials, the method proposed
by Lauritzen (1992), and implemented here, requires converting from
canonical to moment form before marginalizing the discrete variables,
and converting back from moment to canonical form before
multiplying/dividing. A newer algorithm, due to Lauritzen and Jensen
(1999), works exclusively in moment form, and hence is more
numerically stable. It can also handle 0s in the covariance matrix,
i.e., deterministic relationships between continuous variables.
However, it has not yet been implemented, since it requires major
changes to the jtree algorithm.

In Murphy (1998b) we extend Lauritzen (1992) to handle vector-valued
nodes. This means the vectors and matrices become block vectors and
matrices. This manifests itself in the code as in the following
example. Suppose we have a potential on nodes dom=[3,4,7] with block
sizes=[2,1,3]. Then nodes 3 and 7 correspond to blocks 1 and 3, which
correspond to indices 1,2,4,5,6.
>> find_equiv_posns([3 7], dom) = [1,3]
>> block([1,3], blocks) = [1,2,4,5,6]

For more details, see

- "Filtering and Smoothing in Linear Dynamical Systems using the
  Junction Tree Algorithm", K. Murphy, 1998a. UCB Tech Report.

- "Inference and learning in hybrid Bayesian networks",
  K. Murphy. UCB Technical Report CSD-98-990, 1998b.

- "Propagation of probabilities, means and variances in mixed
  graphical association models", S. L. Lauritzen, 1992, JASA 87(420):1098--1108.

- "Causal probabilistic networks with both discrete and continuous variables",
  K. G. Olesen, 1993. PAMI 15(3). This discusses implementation details.

- "Stable local computation with Conditional Gaussian distributions",
  S. Lauritzen and F. Jensen, 1999. Univ. Aalborg Tech Report R-99-2014.
  www.math.auc.dk/research/Reports.html
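The index bookkeeping in the dom=[3,4,7] example above can be mimicked with a short Python sketch (a hypothetical re-implementation of find_equiv_posns and block for illustration only; the toolbox's actual code is Matlab, so indices here are kept 1-based to match):

```python
def find_equiv_posns(small_dom, large_dom):
    """1-based positions of small_dom's members within the sorted large_dom."""
    return [large_dom.index(d) + 1 for d in small_dom]

def block(block_nums, block_sizes):
    """Expand 1-based block numbers into the 1-based scalar indices they
    cover, given the block size of every position in the domain."""
    starts = [1]                        # starting index of each block
    for s in block_sizes[:-1]:
        starts.append(starts[-1] + s)
    idx = []
    for b in block_nums:
        idx.extend(range(starts[b - 1], starts[b - 1] + block_sizes[b - 1]))
    return idx

dom = [3, 4, 7]
sizes = [2, 1, 3]
posns = find_equiv_posns([3, 7], dom)  # -> [1, 3]
idx = block(posns, sizes)              # -> [1, 2, 4, 5, 6]
```

With scalar nodes every block size is 1, and block() reduces to the identity on positions, recovering the non-block case.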