
How to use the Bayes Net Toolbox

This documentation was last updated on 13 November 2002.
Click here for a list of changes made to BNT.
Click here for a French version of this documentation (which might not be up-to-date).

Installation


Installing the Matlab code

If you are new to Matlab, you might like to check out some useful Matlab tips. For instance, this explains how to create a startup file, which can be used to set your path variable automatically, so you can avoid having to type the path-setting commands every time.
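For example, a minimal startup.m might contain something like the following (the install directory shown is just a placeholder; point it at wherever you unpacked BNT):

% startup.m - run automatically when Matlab starts
BNT_HOME = 'C:/FullBNT';      % hypothetical install location; change as needed
addpath(genpath(BNT_HOME));   % add BNT and all its subdirectories to the path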

Installing the C code

Some BNT functions also have C implementations. It is not necessary to install the C code, but it can result in a speedup of a factor of 2-5 for certain simple functions. To install all the C code, edit installC_BNT.m so it contains the right path, then type installC_BNT. To uninstall all the C code, edit uninstallC_BNT.m so it contains the right path, then type uninstallC_BNT. For an up-to-date list of the files which have C implementations, see BNT/installC_BNT.m.

mex is a script that lets you call C code from Matlab - it does not compile Matlab to C (see mcc below). If your C/C++ compiler is set up correctly, mex should work out of the box. (Matlab 6 now ships with its own minimal C compiler.) If not, you might need to type

mex -setup

before calling installC.

To make mex call gcc on Windows, you must install gnumex. You can use the Minimalist GNU for Windows (MinGW) version of gcc, or the Cygwin version.

In general, typing 'mex foo.c' from inside Matlab creates a file called 'foo.mexglx' or 'foo.dll' (the exact file extension is system dependent - on Linux it is '.mexglx', on Windows it is '.dll'). The resulting file will hide the original 'foo.m' (if it existed), i.e., typing 'foo' at the prompt will call the compiled C version. To reveal the original Matlab version, just delete foo.mexglx (this is what uninstallC does).

Sometimes it takes time for Matlab to realize that the file has changed from Matlab to C or vice versa; try typing 'clear all' or restarting Matlab to refresh it. To find out which version of a file you are running, type 'which foo'.
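To summarize the typical compile-and-check cycle (foo.c is a hypothetical example file, not part of BNT):

mex -setup     % choose a C compiler (only needed once)
mex foo.c      % creates foo.mexglx (Linux) or foo.dll (Windows)
which foo      % shows whether the compiled or the .m version will be called
clear all      % force Matlab to notice that foo has changed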

mcc, the Matlab to C compiler, is a separate product, and is quite different from mex. It does not yet support objects/classes, which is why we can't compile all of BNT to C automatically. Also, hand-written C code is usually much better than the C code generated by mcc.

Acknowledgements: Although I wrote some of the C code, most of the C code (e.g., for jtree and dpot) was written by Wei Hu; the triangulation C code was written by Ilya Shpitser.

Creating your first Bayes net

To define a Bayes net, you must specify the graph structure and then the parameters. We look at each in turn, using a simple example (adapted from Russell and Norvig, "Artificial Intelligence: a Modern Approach", Prentice Hall, 1995, p. 454).

Graph structure

Consider the following network.

[Figure: the water sprinkler network, with nodes Cloudy, Sprinkler, Rain and WetGrass]

To specify this directed acyclic graph (dag), we create an adjacency matrix:

N = 4; 
dag = zeros(N,N);
C = 1; S = 2; R = 3; W = 4;
dag(C,[R S]) = 1;
dag(R,W) = 1;
dag(S,W) = 1;

We have numbered the nodes as follows: Cloudy = 1, Sprinkler = 2, Rain = 3, WetGrass = 4. The nodes must always be numbered in topological order, i.e., ancestors before descendants. For a more complicated graph, this is a little inconvenient: we will see how to get around this below.
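As a sketch of the next step (assuming, for illustration, that all four nodes are binary), the dag is wrapped in a bnet object using mk_bnet, which is used throughout the examples below:

discrete_nodes = 1:N;
node_sizes = 2*ones(1,N);   % assumption: every node has 2 possible values
bnet = mk_bnet(dag, node_sizes, 'discrete', discrete_nodes);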


Summary of CPD types

We list all the different types of CPDs supported by BNT. For each CPD, we specify whether the child and parents can be discrete (D) or continuous (C) (binary (B) nodes are a special case). We also specify which methods each class supports. If a method is inherited, the name of the parent class is mentioned. If a parent class calls a child method, this is mentioned.

The CPD_to_CPT method converts a CPD to a table; this requires that the child and all parents are discrete. The CPT might be exponentially big... convert_to_table evaluates a CPD with evidence, and represents the resulting potential as an array. This requires that the child is discrete, and any continuous parents are observed. convert_to_pot evaluates a CPD with evidence, and represents the resulting potential as a dpot, gpot, cgpot or upot, as requested (d = discrete, g = Gaussian, cg = conditional Gaussian, u = utility).

When we sample a node, all the parents are observed. When we compute the (log) probability of a node, all the parents and the child are observed.

We also specify if the parameters are learnable. For learning with EM, we require the methods reset_ess, update_ess and maximize_params. For learning from fully observed data, we require the method learn_params. By default, all classes inherit this from generic_CPD, which simply calls update_ess N times, once for each data case, followed by maximize_params, i.e., it is like EM, without the E step. Some classes implement a batch formula, which is quicker.

Bayesian learning means computing a posterior over the parameters given fully observed data.

Pearl means we implement the methods compute_pi and compute_lambda_msg, used by pearl_inf_engine, which runs on directed graphs. belprop_inf_engine only needs convert_to_pot. The pearl methods can exploit special properties of the CPDs for computing the messages efficiently, whereas belprop does not.

The only method implemented by generic_CPD is adjustable_CPD, which is not shown, since it is not very interesting.

Name          | Child | Parents | Comments                    | CPD_to_CPT | conv_to_table          | conv_to_pot            | sample                 | prob                   | learn | Bayes | Pearl
--------------|-------|---------|-----------------------------|------------|------------------------|------------------------|------------------------|------------------------|-------|-------|------
boolean       | B     | B       | Syntactic sugar for tabular | -          | -                      | -                      | -                      | -                      | -     | -     | -
deterministic | D     | D       | Syntactic sugar for tabular | -          | -                      | -                      | -                      | -                      | -     | -     | -
Discrete      | D     | C/D     | Virtual class               | N          | Calls CPD_to_CPT       | Calls conv_to_table    | Calls conv_to_table    | Calls conv_to_table    | N     | N     | N
Gaussian      | C     | C/D     | -                           | N          | N                      | Y                      | Y                      | Y                      | Y     | N     | N
gmux          | C     | C/D     | multiplexer                 | N          | N                      | Y                      | N                      | N                      | N     | N     | Y
MLP           | D     | C/D     | multi layer perceptron      | N          | Y                      | Inherits from discrete | Inherits from discrete | Inherits from discrete | Y     | N     | N
noisy-or      | B     | B       | -                           | Y          | Inherits from discrete | Inherits from discrete | Inherits from discrete | Inherits from discrete | N     | N     | Y
root          | C/D   | none    | no params                   | N          | N                      | Y                      | Y                      | Y                      | N     | N     | N
softmax       | D     | C/D     | -                           | N          | Y                      | Inherits from discrete | Inherits from discrete | Inherits from discrete | Y     | N     | N
generic       | C/D   | C/D     | Virtual class               | N          | N                      | N                      | N                      | N                      | N     | N     | N
Tabular       | D     | D       | -                           | Y          | Inherits from discrete | Inherits from discrete | Inherits from discrete | Inherits from discrete | Y     | Y     | Y

Example models


Gaussian mixture models

Richard W. DeVaul has made a detailed tutorial on how to fit mixtures of Gaussians using BNT; it is available here.
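For reference, a plain mixture of Gaussians is itself just a two-node model in BNT: a discrete mixture label Q with a Gaussian child Y. Here is a rough sketch (the sizes M and D are hypothetical):

M = 3; D = 2;                          % hypothetical: 3 mixture components, 2-D data
dag = zeros(2,2); dag(1,2) = 1;        % Q -> Y
ns = [M D];
bnet = mk_bnet(dag, ns, 'discrete', 1, 'observed', 2);
bnet.CPD{1} = tabular_CPD(bnet, 1);    % mixing weights
bnet.CPD{2} = gaussian_CPD(bnet, 2);   % one mean and covariance per value of Q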

PCA, ICA, and all that

In Figure (a) below, we show how Factor Analysis can be thought of as a graphical model. Here, X has an N(0,I) prior, and Y|X=x ~ N(mu + Wx, Psi), where Psi is diagonal and W is called the "factor loading matrix". Since the noise on both X and Y is diagonal, the components of these vectors are uncorrelated, and hence can be represented as individual scalar nodes, as we show in (b). (This is useful if parts of the observations on the Y vector are occasionally missing.) We usually take k=|X| << |Y|=D, so the model tries to explain many observations using a low-dimensional subspace.

[Figures (a)-(d): graphical model representations of factor analysis and related models]

We can create this model in BNT as follows.

ns = [k D];
dag = zeros(2,2);
dag(1,2) = 1;
bnet = mk_bnet(dag, ns, 'discrete', []);
bnet.CPD{1} = gaussian_CPD(bnet, 1, 'mean', zeros(k,1), 'cov', eye(k), ...
   'cov_type', 'diag', 'clamp_mean', 1, 'clamp_cov', 1);
bnet.CPD{2} = gaussian_CPD(bnet, 2, 'mean', zeros(D,1), 'cov', diag(Psi0), 'weights', W0, ...
   'cov_type', 'diag', 'clamp_mean', 1);
The root node is clamped to the N(0,I) distribution, so that we will not update these parameters during learning. The mean of the leaf node is clamped to 0, since we assume the data has been centered (had its mean subtracted off); this is just for simplicity. Finally, the covariance of the leaf node is constrained to be diagonal. W0 and Psi0 are the initial parameter guesses.

We can fit this model (i.e., estimate its parameters in a maximum likelihood (ML) sense) using EM, as we explain below. Not surprisingly, the ML estimates for mu and Psi turn out to be identical to the sample mean and variance, which can be computed directly as

mu_ML = mean(data);
Psi_ML = diag(cov(data));
Note that W can only be identified up to a rotation matrix, because of the spherical symmetry of the source.
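Here is a sketch of the EM fit mentioned above, using the same functions that appear in later examples (it assumes 'data' is an ncases-by-D matrix; node 1, X, is left hidden):

ncases = size(data, 1);
cases = cell(2, ncases);
for m=1:ncases
  cases{2,m} = data(m,:)';   % node 2 (Y) is observed; node 1 (X) stays hidden
end
engine = jtree_inf_engine(bnet);
[bnet2, LLtrace] = learn_params_em(engine, cases, 10);   % run up to 10 EM iterations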

If we restrict Psi to be spherical, i.e., Psi = sigma*I, there is a closed-form solution for W as well, i.e., we do not need to use EM. In particular, W contains the first |X| eigenvectors of the sample covariance matrix, with scalings determined by the eigenvalues and sigma. Classical PCA can be obtained by taking the sigma->0 limit.

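To illustrate that closed-form solution (this is the standard probabilistic PCA result, written out here for convenience rather than taken from a BNT function), W can be computed directly from the eigendecomposition of the sample covariance matrix:

% probabilistic PCA sketch: data is ncases-by-D, k = dim(X)
[V, E] = eig(cov(data));
[negevals, idx] = sort(-diag(E));   % order eigenvalues from largest to smallest
evals = -negevals;  V = V(:, idx);
sigma2 = mean(evals(k+1:end));      % noise variance = average discarded eigenvalue
W = V(:,1:k) * sqrt(diag(evals(1:k)) - sigma2*eye(k));  % up to an arbitrary rotation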

By adding a hidden discrete variable, we can create mixtures of FA models, as shown in (c). Now we can explain the data using a set of subspaces. We can create this model in BNT as follows.

ns = [M k D];
dag = zeros(3);
dag(1,3) = 1;
dag(2,3) = 1;
bnet = mk_bnet(dag, ns, 'discrete', 1);
bnet.CPD{1} = tabular_CPD(bnet, 1, Pi0);
bnet.CPD{2} = gaussian_CPD(bnet, 2, 'mean', zeros(k, 1), 'cov', eye(k), 'cov_type', 'diag', ...
			   'clamp_mean', 1, 'clamp_cov', 1);
bnet.CPD{3} = gaussian_CPD(bnet, 3, 'mean', Mu0', 'cov', repmat(diag(Psi0), [1 1 M]), ...
			   'weights', W0, 'cov_type', 'diag', 'tied_cov', 1);
Notice how the covariance matrix for Y is the same for all values of Q; that is, the noise level in each sub-space is assumed to be the same. However, we allow the offset, mu, to vary.

I have included Zoubin's specialized MFA code (with his permission) with the toolbox, so you can check that BNT gives the same results: see 'BNT/examples/static/mfa1.m'.

Independent Factor Analysis (IFA) generalizes FA by allowing a non-Gaussian prior on each component of X. (Note that we can approximate a non-Gaussian prior using a mixture of Gaussians.) This means that the likelihood function is no longer rotationally invariant, so we can uniquely identify W and the hidden sources X. IFA also allows a non-diagonal Psi (i.e., correlations between the components of Y). We recover classical Independent Components Analysis (ICA) in the Psi -> 0 limit, and by assuming that |X|=|Y|, so that the weight matrix W is square and invertible.


Mixtures of experts

As an example of the use of the softmax function, we introduce the Mixture of Experts model. As before, circles denote continuous-valued nodes, squares denote discrete nodes, clear means hidden, and shaded means observed.

[Figure: the mixture of experts model as a graphical model]

X is the observed input, Y is the output, and the Q nodes are hidden "gating" nodes, which select the appropriate set of parameters for Y. During training, Y is assumed observed, but for testing, the goal is to predict Y given X. Note that this is a conditional density model, so we don't associate any parameters with X. Hence X's CPD will be a root CPD, which is a way of modelling exogenous nodes. If the output is a continuous-valued quantity, we assume the "experts" are linear-regression units, and set Y's CPD to linear-Gaussian. If the output is discrete, we set Y's CPD to a softmax function. The Q CPDs will always be softmax functions.

As a concrete example, consider the mixture of experts model where X and Y are scalars, and Q is binary. This is just piecewise linear regression, where we have two line segments.

We can create this model with random parameters as follows. (This code is bundled in BNT/examples/static/mixexp2.m.)

X = 1;
Q = 2;
Y = 3;
dag = zeros(3,3);
dag(X,[Q Y]) = 1;
dag(Q,Y) = 1;
ns = [1 2 1]; % make X and Y scalars, and have 2 experts
onodes = [1 3];
bnet = mk_bnet(dag, ns, 'discrete', 2, 'observed', onodes);

rand('state', 0);
randn('state', 0);
bnet.CPD{1} = root_CPD(bnet, 1);
bnet.CPD{2} = softmax_CPD(bnet, 2);
bnet.CPD{3} = gaussian_CPD(bnet, 3);
Now let us fit this model using EM. First we load the data (1000 training cases) and plot them.

data = load('/examples/static/Misc/mixexp_data.txt', '-ascii');
plot(data(:,1), data(:,2), '.');

[Figure: scatter plot of the training data]

This is what the model looks like before training. (Thanks to Thomas Hofman for writing this plotting routine.)

[Figure: model predictions before training]

Now let's train the model, and plot the final performance. (We will discuss how to train models in more detail below.)

ncases = size(data, 1); % each row of data is a training case
cases = cell(3, ncases);
cases([1 3], :) = num2cell(data'); % each column of cases is a training case
engine = jtree_inf_engine(bnet);
max_iter = 20;
[bnet2, LLtrace] = learn_params_em(engine, cases, max_iter);
(We specify which nodes will be observed when we create the engine. Hence BNT knows that the hidden nodes are all discrete. For complex models, this can lead to a significant speedup.) Below we show what the model looks like after 16 iterations of EM (with 100 IRLS iterations per M step), when it converged using the default convergence tolerance (that the fractional change in the log-likelihood be less than 1e-3). Before learning, the log-likelihood was -322.927442; afterwards, it was -13.728778.

[Figure: model predictions after training]

(See BNT/examples/static/mixexp2.m for details of the code.)

Hierarchical mixtures of experts

A hierarchical mixture of experts (HME) extends the mixture of experts model by having more than one hidden node. A two-level example is shown below, along with its more traditional representation as a neural network. This is like a (balanced) probabilistic decision tree of height 2.

[Figure: a two-level HME as a graphical model and as a neural network]

Pierpaolo Brutti has written an extensive set of routines for HMEs, which are bundled with BNT: see the examples/static/HME directory. These routines allow you to choose the number of hidden (gating) layers, and the form of the experts (softmax or MLP). See the file hmemenu, which provides a demo. For example, the figure below shows the decision boundaries learned for a ternary classification problem, using a 2-level HME with softmax gates and softmax experts; the training set is on the left, the testing set on the right.

[Figure: decision boundaries learned by the HME on the training set (left) and test set (right)]


QMR

Bayes nets originally arose out of an attempt to add probabilities to expert systems, and this is still the most common use for BNs. A famous example is QMR-DT, a decision-theoretic reformulation of the Quick Medical Reference (QMR) model.

[Figure: the QMR-DT network, a bipartite graph with diseases on top and findings below]
Here, the top layer represents hidden disease nodes, and the bottom layer represents observed symptom nodes. The goal is to infer the posterior probability of each disease given all the symptoms (which can be present, absent or unknown). Each node in the top layer has a Bernoulli prior (with a low prior probability that the disease is present). Since each node in the bottom layer has a high fan-in, we use a noisy-OR parameterization; each disease has an independent chance of causing each symptom. The real QMR-DT model is copyrighted, but we can create a random QMR-like model as follows.
function bnet = mk_qmr_bnet(G, inhibit, leak, prior)
% MK_QMR_BNET Make a QMR model
% bnet = mk_qmr_bnet(G, inhibit, leak, prior)
%
% G(i,j) = 1 iff there is an arc from disease i to finding j
% inhibit(i,j) = inhibition probability on i->j arc
% leak(j) = inhibition prob. on leak->j arc
% prior(i) = prob. disease i is on

[Ndiseases Nfindings] = size(inhibit);
N = Ndiseases + Nfindings;
finding_node = Ndiseases+1:N;
ns = 2*ones(1,N);
dag = zeros(N,N);
dag(1:Ndiseases, finding_node) = G;
bnet = mk_bnet(dag, ns, 'observed', finding_node);

for d=1:Ndiseases
  CPT = [1-prior(d) prior(d)];
  bnet.CPD{d} = tabular_CPD(bnet, d, CPT');
end

for i=1:Nfindings
  fnode = finding_node(i);
  ps = parents(G, i);
  bnet.CPD{fnode} = noisyor_CPD(bnet, fnode, leak(i), inhibit(ps, i));
end
In the file BNT/examples/static/qmr1, we create a random bipartite graph G, with 5 diseases and 10 findings, and random parameters. (In general, to create a random dag, use 'mk_random_dag'.) We can visualize the resulting graph structure using the methods discussed below, with the following results:

[Figure: the randomly generated QMR-like graph structure]

Now let us put some random evidence on all the leaves except the very first and very last, and compute the disease posteriors.

pos = 2:floor(Nfindings/2);
neg = (pos(end)+1):(Nfindings-1);
onodes = myunion(pos, neg);
evidence = cell(1, N);
evidence(findings(pos)) = num2cell(repmat(2, 1, length(pos)));
evidence(findings(neg)) = num2cell(repmat(1, 1, length(neg)));

engine = jtree_inf_engine(bnet);
[engine, ll] = enter_evidence(engine, evidence);
post = zeros(1, Ndiseases);
for i=diseases(:)'
  m = marginal_nodes(engine, i);
  post(i) = m.T(2);
end
Junction tree can be quite slow on large QMR models. Fortunately, it is possible to exploit properties of the noisy-OR function to speed up exact inference using an algorithm called quickscore, discussed below.
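As a preview, here is a sketch of how the specialized engine replaces the junction tree (the argument names follow the QMR parameters above; treat the exact signature as an assumption and check the inference engine documentation):

engine = quickscore_inf_engine(inhibit, leak, prior);   % assumed constructor arguments
engine = enter_evidence(engine, pos, neg);              % positive and negative findings
m = marginal_nodes(engine, i);                          % posterior on disease i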

Conditional Gaussian models

A conditional Gaussian model is one in which, conditioned on all the discrete nodes, the distribution over the remaining (continuous) nodes is multivariate Gaussian. This means we can have arcs from discrete (D) to continuous (C) nodes, but not vice versa. (We are allowed C->D arcs if the continuous nodes are observed, as in the mixture of experts model, since this distribution can be represented with a discrete potential.)

We now give an example of a CG model, from the paper "Propagation of Probabilities, Means and Variances in Mixed Graphical Association Models", Steffen Lauritzen, JASA 87(420):1098--1108, 1992 (reprinted in the book "Probabilistic Networks and Expert Systems", R. G. Cowell, A. P. Dawid, S. L. Lauritzen and D. J. Spiegelhalter, Springer, 1999).

Specifying the graph

Consider the model of waste emissions from an incinerator plant shown below. We follow the standard convention that shaded nodes are observed, clear nodes are hidden. We also use the non-standard convention that square nodes are discrete (tabular) and round nodes are Gaussian.

[Figure: the waste incinerator network]

We can create this model as follows.

F = 1; W = 2; E = 3; B = 4; C = 5; D = 6; Min = 7; Mout = 8; L = 9;
n = 9;

dag = zeros(n);
dag(F,E)=1;
dag(W,[E Min D]) = 1;
dag(E,D)=1;
dag(B,[C D])=1;
dag(D,[L Mout])=1;
dag(Min,Mout)=1;

% node sizes - all cts nodes are scalar, all discrete nodes are binary
ns = ones(1, n);
dnodes = [F W B];
cnodes = mysetdiff(1:n, dnodes);
ns(dnodes) = 2;

bnet = mk_bnet(dag, ns, 'discrete', dnodes);
'dnodes' is a list of the discrete nodes; 'cnodes' is the list of continuous nodes. 'mysetdiff' is a faster version of the built-in 'setdiff'.


Specifying the parameters

The parameters of the discrete nodes can be specified as follows.

bnet.CPD{B} = tabular_CPD(bnet, B, 'CPT', [0.85 0.15]); % 1=stable, 2=unstable
bnet.CPD{F} = tabular_CPD(bnet, F, 'CPT', [0.95 0.05]); % 1=intact, 2=defect
bnet.CPD{W} = tabular_CPD(bnet, W, 'CPT', [2/7 5/7]); % 1=industrial, 2=household

The parameters of the continuous nodes can be specified as follows.

bnet.CPD{E} = gaussian_CPD(bnet, E, 'mean', [-3.9 -0.4 -3.2 -0.5], ...
			   'cov', [0.00002 0.0001 0.00002 0.0001]);
bnet.CPD{D} = gaussian_CPD(bnet, D, 'mean', [6.5 6.0 7.5 7.0], ...
			   'cov', [0.03 0.04 0.1 0.1], 'weights', [1 1 1 1]);
bnet.CPD{C} = gaussian_CPD(bnet, C, 'mean', [-2 -1], 'cov', [0.1 0.3]);
bnet.CPD{L} = gaussian_CPD(bnet, L, 'mean', 3, 'cov', 0.25, 'weights', -0.5);
bnet.CPD{Min} = gaussian_CPD(bnet, Min, 'mean', [0.5 -0.5], 'cov', [0.01 0.005]);
bnet.CPD{Mout} = gaussian_CPD(bnet, Mout, 'mean', 0, 'cov', 0.002, 'weights', [1 1]);

Inference

First we compute the unconditional marginals.

engine = jtree_inf_engine(bnet);
evidence = cell(1,n);
[engine, ll] = enter_evidence(engine, evidence);
marg = marginal_nodes(engine, E);
'marg' is a structure that contains the fields 'mu' and 'Sigma', which contain the mean and (co)variance of the marginal on E. In this case, they are both scalars. Let us check they match the published figures (to 2 decimal places).

tol = 1e-2;
assert(approxeq(marg.mu, -3.25, tol));
assert(approxeq(sqrt(marg.Sigma), 0.709, tol));
We can compute the other posteriors similarly. Now let us add some evidence.

evidence = cell(1,n);
evidence{W} = 1; % industrial
evidence{L} = 1.1;
evidence{C} = -0.9;
[engine, ll] = enter_evidence(engine, evidence);
Now we find

marg = marginal_nodes(engine, E);
assert(approxeq(marg.mu, -3.8983, tol));
assert(approxeq(sqrt(marg.Sigma), 0.0763, tol));
We can also compute the joint probability on a set of nodes. For example, P(D, Mout | evidence) is a 2D Gaussian:

marg = marginal_nodes(engine, [D Mout])
marg = 
    domain: [6 8]
        mu: [2x1 double]
     Sigma: [2x2 double]
         T: 1.0000
The mean is

marg.mu
ans =
    3.6077
    4.1077
and the covariance matrix is

marg.Sigma
ans =
    0.1062    0.1062
    0.1062    0.1182
It is easy to visualize this posterior using standard Matlab plotting functions, e.g.,

gaussplot2d(marg.mu, marg.Sigma);
produces the following picture.

[Figure: the 2D Gaussian posterior on (D, Mout)]

The T field indicates that the mixing weight of this Gaussian component is 1.0. If the joint contains discrete and continuous variables, the result will be a mixture of Gaussians, e.g.,

marg = marginal_nodes(engine, [F E])
    domain: [1 3]
        mu: [-3.9000 -0.4003]
     Sigma: [1x1x2 double]
         T: [0.9995 4.7373e-04]
The interpretation is Sigma(i,j,k) = Cov[ E(i) E(j) | F=k ]. In this case, E is a scalar, so i=j=1; k specifies the mixture component.
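For example, to pull out the parameters of one mixture component (the index k below is hypothetical; it ranges over the values of F):

k = 1;                          % component corresponding to F=1
mu_k    = marg.mu(:,k);         % E[E | F=k, evidence]
Sigma_k = marg.Sigma(:,:,k);    % Cov[E | F=k, evidence]
w_k     = marg.T(k);            % P(F=k | evidence), the mixing weight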

We saw in the sprinkler network that BNT sets the effective size of observed discrete nodes to 1, since they only have one legal value. For continuous nodes, BNT sets their length to 0, since they have been reduced to a point. For example,

marg = marginal_nodes(engine, [B C])
    domain: [4 5]
        mu: []
     Sigma: []
         T: [0.0123 0.9877]
It is simple to post-process the output of marginal_nodes. For example, the file BNT/examples/static/cg1 sets the mu term of observed nodes to their observed value, and the Sigma term to 0 (since observed nodes have no variance).
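A sketch of that kind of post-processing (this mirrors what the text above says cg1 does, not necessarily the file's exact code):

marg = marginal_nodes(engine, i);
if ~isempty(evidence{i})     % node i was observed
  marg.mu = evidence{i};     % report the observed value as the mean
  marg.Sigma = 0;            % ... with zero variance
end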

Note that the implemented version of the junction tree is numerically unstable when using CG potentials (which is why, in the example above, we only required our answers to agree with the published ones to 2 decimal places). This is why you might want to use stab_cond_gauss_inf_engine, implemented by Shan Huang.

However, even the numerically stable version can be computationally intractable if there are many hidden discrete nodes, because the number of mixture components grows exponentially, e.g., in a switching linear dynamical system. In general, one must resort to approximate inference techniques: see the discussion on inference engines below.

Other hybrid models

When we have C->D arcs, where C is hidden, we need to use approximate inference. (One specialized approach exists in the literature, but it is not implemented in BNT.) Of course, one can always use sampling methods for approximate inference in such models.

Parameter Learning

The parameter estimation routines in BNT can be classified into 4 types, depending on whether the goal is to compute a full (Bayesian) posterior over the parameters or just a point estimate (e.g., Maximum Likelihood or Maximum A Posteriori), and whether all the variables are fully observed or there is missing data/hidden variables (partial observability).

          Full obs               Partial obs
Point     learn_params           learn_params_em
Bayes     bayes_update_params    not yet supported

Loading data from a file

To load numeric data from an ASCII text file called 'dat.txt', where each row is a case and columns are separated by white-space, such as

011979 1626.5 0.0
021979 1367.0 0.0
...
you can use

data = load('dat.txt');

or

load dat.txt -ascii
In the latter case, the data is stored in a variable called 'dat' (the filename minus the extension). Alternatively, suppose the data is stored in a .csv file (has commas separating the columns, and contains a header line), such as

header info goes here
ORD,011979,1626.5,0.0
DSM,021979,1367.0,0.0
...
You can load this using

[a,b,c,d] = textread('dat.txt', '%s %d %f %f', 'delimiter', ',', 'headerlines', 1);
If your file is not in either of these formats, you can either use Perl to convert it to this format, or use the Matlab fscanf command. Type 'help iofun' for more information on Matlab's file functions.

BNT learning routines require data to be stored in a cell array. data{i,m} is the value of node i in case (example) m, i.e., each column is a case. If node i is not observed in case m (missing value), set data{i,m} = []. (Not all the learning routines can cope with such missing values, however.) In the special case that all the nodes are observed and are scalar-valued (as opposed to vector-valued), the data can be stored in a matrix (as opposed to a cell-array).

Suppose, as in the mixture of experts example, that we have 3 nodes in the graph: X(1) is the observed input, X(3) is the observed output, and X(2) is a hidden (gating) node. We can create the dataset as follows.

data = load('dat.txt');
ncases = size(data, 1);
cases = cell(3, ncases);
cases([1 3], :) = num2cell(data');
Notice how we transposed the data, to convert rows into columns. Also, cases{2,m} = [] for all m, since X(2) is always hidden.

Maximum likelihood parameter estimation from complete data

As an example, let's generate some data from the sprinkler network, randomize the parameters, and then try to recover the original model. First we create some training data using forwards sampling.

samples = cell(N, nsamples);
for i=1:nsamples
  samples(:,i) = sample_bnet(bnet);
end
samples{j,i} contains the value of the j'th node in case i. sample_bnet returns a cell array because, in general, each node might be a vector of different length. In this case, all nodes are discrete (and hence scalars), so we could have used a regular array instead (which can be quicker):

data = cell2num(samples);
Now we create a network with random parameters. (The initial values of bnet2 don't matter in this case, since we can find the globally optimal MLE independent of where we start.)

% Make a tabula rasa
bnet2 = mk_bnet(dag, node_sizes);
seed = 0;
rand('state', seed);
bnet2.CPD{C} = tabular_CPD(bnet2, C);
bnet2.CPD{R} = tabular_CPD(bnet2, R);
bnet2.CPD{S} = tabular_CPD(bnet2, S);
bnet2.CPD{W} = tabular_CPD(bnet2, W);
Finally, we find the maximum likelihood estimates of the parameters.

bnet3 = learn_params(bnet2, samples);
To view the learned parameters, we use a little Matlab hackery.

CPT3 = cell(1,N);
for i=1:N
  s=struct(bnet3.CPD{i});  % violate object privacy
  CPT3{i}=s.CPT;
end
Here are the parameters learned for node 4.

dispcpt(CPT3{4})
1 1 : 1.0000 0.0000 
2 1 : 0.2000 0.8000 
1 2 : 0.2273 0.7727 
2 2 : 0.0000 1.0000 
So we see that the learned parameters are fairly close to the "true" ones, which we display below.

dispcpt(CPT{4})
1 1 : 1.0000 0.0000 
2 1 : 0.1000 0.9000 
1 2 : 0.1000 0.9000 
2 2 : 0.0100 0.9900 
We can get better results by using a larger training set, or using informative priors (see below).

Parameter priors

Currently, only tabular CPDs can have priors on their parameters. The conjugate prior for a multinomial is the Dirichlet. (For binary random variables, the multinomial is the same as the Bernoulli, and the Dirichlet is the same as the Beta.)

The Dirichlet has a simple interpretation in terms of pseudo counts. If we let N_ijk = the number of times X_i=k and Pa_i=j occurs in the training set, where Pa_i are the parents of X_i, then the maximum likelihood (ML) estimate is T_ijk = N_ijk / N_ij (where N_ij = sum_k' N_ijk'), which will be 0 if N_ijk=0. To prevent us from declaring that (X_i=k, Pa_i=j) is impossible just because this event was not seen in the training set, we can pretend we saw value k of X_i, for each value j of Pa_i, some number (alpha_ijk) of times in the past. The MAP (maximum a posteriori) estimate is then

T_ijk = (N_ijk + alpha_ijk) / (N_ij + alpha_ij)
and is never 0 if all alpha_ijk > 0. For example, consider the network A->B, where A is binary and B has 3 values. A uniform prior for B has the form

    B=1 B=2 B=3
A=1 1   1   1
A=2 1   1   1
which can be created using

tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_type', 'unif');
This prior does not satisfy the likelihood equivalence principle, which says that Markov equivalent models should have the same marginal likelihood. A prior that does satisfy this principle is shown below. Heckerman (1995) calls this the BDeu prior (likelihood equivalent uniform Bayesian Dirichlet).

    B=1 B=2 B=3
A=1 1/6 1/6 1/6
A=2 1/6 1/6 1/6
where we put N/(q*r) in each bin; N is the equivalent sample size, r=|A|, q=|B|. This can be created as follows

tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_type', 'BDeu');
Here, 1 is the equivalent sample size, and is the strength of the prior. You can change this using

tabular_CPD(bnet, i, 'prior_type', 'dirichlet', 'dirichlet_type', ...
   'BDeu', 'dirichlet_weight', 10);
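To make the pseudo-count arithmetic concrete, here is a toy computation of the MAP estimate for the A->B example above (the counts are hypothetical):

N_counts = [0 2 8;                 % N(j,k) = number of times B=k when A=j
            5 5 0];
alpha = ones(2,3);                 % uniform Dirichlet prior: 1 pseudo count per cell
denom = sum(N_counts + alpha, 2);  % N_ij + alpha_ij for each parent value j
T_MAP = (N_counts + alpha) ./ repmat(denom, 1, 3)
% every entry of T_MAP is non-zero, unlike the ML estimate N_counts ./ repmat(sum(N_counts,2),1,3)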

(Sequential) Bayesian parameter updating from complete data

If we use conjugate priors and have fully observed data, we can compute the posterior over the parameters in batch form as follows.

cases = sample_bnet(bnet, nsamples);
bnet = bayes_update_params(bnet, cases);
LL = log_marg_lik_complete(bnet, cases);
bnet.CPD{i}.prior contains the new Dirichlet pseudocounts, and bnet.CPD{i}.CPT is set to the mean of the posterior (the normalized counts). (Hence if the initial pseudo counts are 0, bayes_update_params and learn_params will give the same result.)

We can compute the same result sequentially (on-line) as follows.

LL = 0;
for m=1:nsamples
  LL = LL + log_marg_lik_complete(bnet, cases(:,m));
  bnet = bayes_update_params(bnet, cases(:,m));
end
The file BNT/examples/static/StructLearn/model_select1 has an example of sequential model selection which uses the same idea. We generate data from the model A->B and compute the posterior probability of all 3 dags on 2 nodes: (1) A B (no edge), (2) A <- B, (3) A -> B. Models 2 and 3 are Markov equivalent, and therefore indistinguishable from observational data alone, so we expect their posteriors to be the same (assuming a prior which satisfies likelihood equivalence). If we use random parameters, the "true" model only gets a higher posterior after 2000 trials! However, if we make B a noisy NOT gate, the true model "wins" after 12 trials, as shown below (red = model 1, blue/green (superimposed) represents models 2/3).

[Figure: posterior probability of each model vs. number of training cases]

The use of marginal likelihood for model selection is discussed in greater detail in the section on structure learning.

Maximum likelihood parameter estimation with missing values

Now we consider learning when some values are not observed. Let us randomly hide half the values generated from the water sprinkler example.

samples2 = samples;
hide = rand(N, nsamples) > 0.5;
[I,J]=find(hide);
for k=1:length(I)
  samples2{I(k), J(k)} = [];
end
samples2{i,l} is the value of node i in training case l, or [] if unobserved.

Now we will compute the MLEs using the EM algorithm. We need to use an inference algorithm to compute the expected sufficient statistics in the E step; the M (maximization) step is as above.

engine2 = jtree_inf_engine(bnet2);
max_iter = 10;
[bnet4, LLtrace] = learn_params_em(engine2, samples2, max_iter);
LLtrace(i) is the log-likelihood at iteration i. We can plot this as follows:

plot(LLtrace, 'x-')
Let's display the results after 10 iterations of EM.

celldisp(CPT4)
CPT4{1} =
    0.6616
    0.3384
CPT4{2} =
    0.6510    0.3490
    0.8751    0.1249
CPT4{3} =
    0.8366    0.1634
    0.0197    0.9803
CPT4{4} =
(:,:,1) =
    0.8276    0.0546
    0.5452    0.1658
(:,:,2) =
    0.1724    0.9454
    0.4548    0.8342
We can get improved performance by using one or more of the following methods: increasing the size of the training set, decreasing the amount of hidden data, running EM for more iterations, using informative priors, or initialising EM from several different starting points.

Numerical problems when fitting Gaussians using EM

When fitting a Gaussian CPD using EM, you might encounter some numerical problems. See here for a discussion of this in the context of HMMs.

Parameter tying

In networks with repeated structure (e.g., chains and grids), it is common to assume that the parameters are the same at every node. This is called parameter tying, and reduces the amount of data needed for learning.

When we have tied parameters, there is no longer a one-to-one correspondence between nodes and CPDs. Rather, each CPD specifies the parameters for a whole equivalence class of nodes. It is easiest to see this by example. Consider the following hidden Markov model (HMM).

[Figure: an HMM represented as a Bayes net]

When HMMs are used for semi-infinite processes like speech recognition, we assume the transition matrix P(H(t+1)|H(t)) is the same for all t; this is called a time-invariant or homogeneous Markov chain. Hence hidden nodes 2, 3, ..., T are all in the same equivalence class, say class Hclass. Similarly, the observation matrix P(O(t)|H(t)) is assumed to be the same for all t, so the observed nodes are all in the same equivalence class, say class Oclass. Finally, the prior term P(H(1)) is in a class all by itself, say class H1class. This is illustrated below, where we explicitly represent the parameters as random variables (dotted nodes).

[Figure: the HMM with its parameters shown explicitly as dotted nodes]

In BNT, we cannot represent parameters as random variables (nodes). Instead, we "hide" the parameters inside one CPD for each equivalence class, and then specify that the other CPDs should share these parameters, as follows.

hnodes = 1:2:2*T;
onodes = 2:2:2*T;
H1class = 1; Hclass = 2; Oclass = 3;
eclass = ones(1,N);
eclass(hnodes(2:end)) = Hclass;
eclass(hnodes(1)) = H1class;
eclass(onodes) = Oclass;
% create dag and ns in the usual way
bnet = mk_bnet(dag, ns, 'discrete', dnodes, 'equiv_class', eclass);
Finally, we define the parameters for each equivalence class:

bnet.CPD{H1class} = tabular_CPD(bnet, hnodes(1)); % prior
bnet.CPD{Hclass} = tabular_CPD(bnet, hnodes(2)); % transition matrix
if cts_obs
  bnet.CPD{Oclass} = gaussian_CPD(bnet, onodes(1));
else
  bnet.CPD{Oclass} = tabular_CPD(bnet, onodes(1));
end
In general, if bnet.CPD{e} = xxx_CPD(bnet, j), then j should be a member of e's equivalence class; that is, it is not always the case that e == j. You can use bnet.rep_of_eclass(e) to return the representative of equivalence class e. BNT will look up the parents of j to determine the size of the CPT to use. It assumes that this is the same for all members of the equivalence class. Click here for a more complex example of parameter tying.

Note: Normally one would define an HMM as a Dynamic Bayes Net (see the function BNT/examples/dynamic/mk_chmm.m). However, one can define an HMM as a static BN using the function BNT/examples/static/Models/mk_hmm_bnet.m.

Structure learning

Update (9/29/03): Philippe Leray is developing some additional structure learning code on top of BNT. Click here for details.

There are two very different approaches to structure learning: constraint-based and search-and-score. In the constraint-based approach, we start with a fully connected graph, and remove edges if certain conditional independencies are measured in the data. This has the disadvantage that repeated independence tests lose statistical power.

In the more popular search-and-score approach, we perform a search through the space of possible DAGs, and either return the best one found (a point estimate), or return a sample of the models found (an approximation to the Bayesian posterior).

Unfortunately, the number of DAGs as a function of the number of nodes, G(n), is super-exponential in n. A closed form formula for G(n) is not known, but the first few values are shown below (from Cooper, 1999).

n     G(n)
1     1
2     3
3     25
4     543
5     29,281
6     3,781,503
7     1.1 x 10^9
8     7.8 x 10^11
9     1.2 x 10^15
10    4.2 x 10^18

Since the number of DAGs is super-exponential in the number of nodes, we cannot exhaustively search the space, so we either use a local search algorithm (e.g., greedy hill climbing, perhaps with multiple restarts) or a global search algorithm (e.g., Markov Chain Monte Carlo).

If we know a total ordering on the nodes, finding the best structure amounts to picking the best set of parents for each node independently. This is what the K2 algorithm does. If the ordering is unknown, we can search over orderings, which is more efficient than searching over DAGs (Koller and Friedman, 2000).

In addition to the search procedure, we must specify the scoring function. There are two popular choices. The Bayesian score integrates out the parameters, i.e., it is the marginal likelihood of the model. The BIC (Bayesian Information Criterion) is defined as log P(D|theta_hat) - 0.5*d*log(N), where D is the data, theta_hat is the ML estimate of the parameters, d is the number of parameters, and N is the number of data cases. The BIC method has the advantage of not requiring a prior.

BIC can be derived as a large sample approximation to the marginal likelihood. (It is also equal to the Minimum Description Length of a model.) However, in practice, the sample size does not need to be very large for the approximation to be good. For example, in the figure below, we plot the ratio between the log marginal likelihood and the BIC score against data-set size; we see that the ratio rapidly approaches 1, especially for non-informative priors. (This plot was generated by the file BNT/examples/static/bic1.m. It uses the water sprinkler BN with BDeu Dirichlet priors with different equivalent sample sizes.)

[Figure: ratio of log marginal likelihood to BIC score vs. data-set size]

As with parameter learning, handling missing data/hidden variables is much harder than the fully observed case. The structure learning routines in BNT can therefore be classified into 4 types, analogously to the parameter learning case.

          Full obs             Partial obs
Point     learn_struct_K2      not yet supported
Bayes     learn_struct_mcmc    not yet supported

Markov equivalence

If two DAGs encode the same conditional independencies, they are called Markov equivalent. The set of all DAGs can be partitioned into Markov equivalence classes. Graphs within the same class can have the direction of some of their arcs reversed without changing any of the CI relationships. Each class can be represented by a PDAG (partially directed acyclic graph) called an essential graph or pattern. This specifies which edges must be oriented in a certain direction, and which may be reversed.

When learning graph structure from observational data, the best one can hope to do is to identify the model up to Markov equivalence. To distinguish amongst graphs within the same equivalence class, one needs interventional data: see the discussion on active learning below.

Exhaustive search

The brute-force approach to structure learning is to enumerate all possible DAGs, and score each one. This provides a "gold standard" with which to compare other algorithms. We can do this as follows.

dags = mk_all_dags(N);
score = score_dags(data, ns, dags);
where data(i,m) is the value of node i in case m, and ns(i) is the size of node i. If the DAGs have a lot of families in common, we can cache the sufficient statistics, making this potentially more efficient than scoring the DAGs one at a time. (Caching is not currently implemented, however.)

By default, we use the Bayesian scoring metric, and assume CPDs are represented by tables with BDeu(1) priors. We can override these defaults as follows. If we want to use uniform priors, we can say

params = cell(1,N);
for i=1:N
  params{i} = {'prior', 'unif'};
end
score = score_dags(data, ns, dags, 'params', params);
params{i} is a cell-array, containing optional arguments that are passed to the constructor for CPD i.

Now suppose we want to use different node types: for example, suppose nodes 1 and 2 are Gaussian, and nodes 3 and 4 softmax (both these CPDs can support discrete and continuous parents, which is necessary since all other nodes will be considered as parents). The Bayesian scoring metric currently only works for tabular CPDs, so we will use BIC:

score = score_dags(data, ns, dags, 'discrete', [3 4], 'params', [], ...
    'type', {'gaussian', 'gaussian', 'softmax', 'softmax'}, 'scoring_fn', 'bic')
In practice, one can't enumerate all possible DAGs for N > 5, but one can evaluate any reasonably-sized set of hypotheses in this way (e.g., nearest neighbors of your current best guess). Think of this as "computer assisted model refinement" as opposed to de novo learning.

K2

The K2 algorithm (Cooper and Herskovits, 1992) is a greedy search algorithm that works as follows. Initially each node has no parents. It then adds incrementally that parent whose addition most increases the score of the resulting structure. When the addition of no single parent can increase the score, it stops adding parents to the node. Since we are using a fixed ordering, we do not need to check for cycles, and can choose the parents for each node independently.

The original paper used the Bayesian scoring metric with tabular CPDs and Dirichlet priors. BNT generalizes this to allow any kind of CPD, and either the Bayesian scoring metric or BIC, as in the example above. In addition, you can specify an optional upper bound on the number of parents for each node. The file BNT/examples/static/k2demo1.m gives an example of how to use K2. We use the water sprinkler network and sample 100 cases from it as before. Then we see how much data it takes to recover the generating structure:

order = [C S R W];
max_fan_in = 2;
sz = 5:5:100;
for i=1:length(sz)
  dag2 = learn_struct_K2(data(:,1:sz(i)), node_sizes, order, 'max_fan_in', max_fan_in);
  correct(i) = isequal(dag, dag2);
end
wolffd@0: Here are the results. wolffd@0:
wolffd@0: correct =
wolffd@0:   Columns 1 through 12 
wolffd@0:      0     0     0     0     0     0     0     1     0     1     1     1
wolffd@0:   Columns 13 through 20 
wolffd@0:      1     1     1     1     1     1     1     1
wolffd@0: 
wolffd@0: So we see it takes about sz(10)=50 cases. (BIC behaves similarly, wolffd@0: showing that the prior doesn't matter too much.) wolffd@0: In general, we cannot hope to recover the "true" generating structure, wolffd@0: only one that is in its Markov equivalence wolffd@0: class. wolffd@0: wolffd@0: wolffd@0:
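wolffd@0: To see this learning curve graphically, one can simply plot the
wolffd@0: correct vector computed above against the sample sizes in sz
wolffd@0: (an optional illustration, not part of k2demo1.m):
wolffd@0: plot(sz, correct, 'o-');
wolffd@0: xlabel('number of training cases');
wolffd@0: ylabel('recovered the generating DAG?');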

Hill-climbing

wolffd@0: wolffd@0: Hill-climbing starts at a specific point in space, wolffd@0: considers all nearest neighbors, and moves to the neighbor wolffd@0: that has the highest score; if no neighbors have higher wolffd@0: score than the current point (i.e., we have reached a local maximum), wolffd@0: the algorithm stops. One can then restart in another part of the space. wolffd@0:

wolffd@0: A common definition of "neighbor" is all graphs that can be wolffd@0: generated from the current graph by adding, deleting or reversing a wolffd@0: single arc, subject to the acyclicity constraint. wolffd@0: Other neighborhoods are possible: see wolffd@0: wolffd@0: Optimal Structure Identification with Greedy Search, Max wolffd@0: Chickering, JMLR 2002. wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0:
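wolffd@0: No ready-made hill-climbing routine is named in this section
wolffd@0: (but see Philippe Leray's structure learning package below).
wolffd@0: As a rough, unoptimized sketch, greedy hill-climbing can be pieced together
wolffd@0: from existing BNT primitives; this assumes that mk_nbrs_of_dag (the
wolffd@0: neighborhood function used internally by learn_struct_mcmc) returns a cell
wolffd@0: array of acyclic neighbors, and that data and ns are as in the examples above:
wolffd@0: dag = zeros(N,N);                        % start from the empty graph
wolffd@0: best_score = score_dags(data, ns, {dag});
wolffd@0: improved = 1;
wolffd@0: while improved
wolffd@0:   improved = 0;
wolffd@0:   nbrs = mk_nbrs_of_dag(dag);            % add, delete or reverse one arc
wolffd@0:   scores = score_dags(data, ns, nbrs);
wolffd@0:   [s, j] = max(scores);
wolffd@0:   if s > best_score
wolffd@0:     dag = nbrs{j}; best_score = s; improved = 1;
wolffd@0:   end
wolffd@0: end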

MCMC

wolffd@0: wolffd@0: We can use a Markov Chain Monte Carlo (MCMC) algorithm called wolffd@0: Metropolis-Hastings (MH) to search the space of all wolffd@0: DAGs. wolffd@0: The standard proposal distribution is to consider moving to all wolffd@0: nearest neighbors in the sense defined above. wolffd@0:

wolffd@0: The function can be called wolffd@0: as in the following example. wolffd@0:

wolffd@0: [sampled_graphs, accept_ratio] = learn_struct_mcmc(data, ns, 'nsamples', 100, 'burnin', 10);
wolffd@0: 
wolffd@0: We can convert our set of sampled graphs to a histogram wolffd@0: (empirical posterior over all the DAGs) thus wolffd@0:
wolffd@0: all_dags = mk_all_dags(N);
wolffd@0: mcmc_post = mcmc_sample_to_hist(sampled_graphs, all_dags);
wolffd@0: 
wolffd@0: To see how well this performs, let us compute the exact posterior exhaustively. wolffd@0:

wolffd@0:

wolffd@0: score = score_dags(data, ns, all_dags);
wolffd@0: post = normalise(exp(score)); % assuming uniform structural prior
wolffd@0: 
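wolffd@0: Note that score_dags returns log scores, so exponentiating them directly can
wolffd@0: underflow when the scores are very negative. A numerically safer variant
wolffd@0: (an added suggestion, not part of the original example) subtracts the largest
wolffd@0: log score before exponentiating; the normalisation cancels the constant:
wolffd@0: post = normalise(exp(score - max(score)));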
wolffd@0: We plot the results below. wolffd@0: (The data set was 100 samples drawn from a random 4 node bnet; see the wolffd@0: file BNT/examples/static/mcmc1.) wolffd@0:
wolffd@0: subplot(2,1,1)
wolffd@0: bar(post)
wolffd@0: subplot(2,1,2)
wolffd@0: bar(mcmc_post)
wolffd@0: 
wolffd@0: wolffd@0:

wolffd@0: We can also plot the acceptance ratio versus number of MCMC steps, wolffd@0: as a crude convergence diagnostic. wolffd@0:

wolffd@0: clf
wolffd@0: plot(accept_ratio)
wolffd@0: 
wolffd@0: wolffd@0:

wolffd@0: Better convergence diagnostics can be found
wolffd@0: here.

wolffd@0: Even though the number of samples needed by MCMC is theoretically wolffd@0: polynomial (not exponential) in the dimensionality of the search space, in practice it has been wolffd@0: found that MCMC does not converge in reasonable time for graphs with wolffd@0: more than about 10 nodes. wolffd@0:

wolffd@0: wolffd@0: Dirk Husmeier has extended MCMC model selection to DBNs. wolffd@0: wolffd@0: wolffd@0:

Active structure learning

wolffd@0: wolffd@0: As was mentioned above, wolffd@0: one can only learn a DAG up to Markov equivalence, even given infinite data. wolffd@0: If one is interested in learning the structure of a causal network, wolffd@0: one needs interventional data. wolffd@0: (By "intervention" we mean forcing a node to take on a specific value, wolffd@0: thereby effectively severing its incoming arcs.) wolffd@0:

wolffd@0: Most of the scoring functions accept an optional argument
wolffd@0: that specifies whether a node was observed to have a certain value, or
wolffd@0: was forced to have that value: we set clamped(i,m)=1 if node i was
wolffd@0: forced in training case m. See, e.g., the file
wolffd@0: BNT/examples/static/cooper_yoo.
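wolffd@0: For example, here is a minimal sketch; the 'clamped' option name and its use
wolffd@0: with score_dags follow the cooper_yoo example, so treat the exact interface
wolffd@0: as an assumption if your version differs:
wolffd@0: ncases = size(data, 2);
wolffd@0: clamped = zeros(N, ncases);   % clamped(i,m)=1 iff node i was set by intervention in case m
wolffd@0: clamped(1, 1:10) = 1;         % hypothetical: node 1 was forced in the first 10 cases
wolffd@0: score = score_dags(data, ns, mk_all_dags(N), 'clamped', clamped);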

wolffd@0: An interesting question is to decide which interventions to perform
wolffd@0: (cf. design of experiments). For details, see the following tech
wolffd@0: report

wolffd@0: wolffd@0: wolffd@0:

Structural EM

wolffd@0: wolffd@0: Computing the Bayesian score when there is partial observability is wolffd@0: computationally challenging, because the parameter posterior becomes wolffd@0: multimodal (the hidden nodes induce a mixture distribution). wolffd@0: One therefore needs to use approximations such as BIC. wolffd@0: Unfortunately, search algorithms are still expensive, because we need wolffd@0: to run EM at each step to compute the MLE, which is needed to compute wolffd@0: the score of each model. An alternative approach is wolffd@0: to do the local search steps inside of the M step of EM, which is more wolffd@0: efficient since the data has been "filled in" - this is wolffd@0: called the structural EM algorithm (Friedman 1997), and provably wolffd@0: converges to a local maximum of the BIC score. wolffd@0:

wolffd@0: Wei Hu has implemented SEM for discrete nodes.
wolffd@0: You can download his package from
wolffd@0: here.
wolffd@0: Please address all questions about this code to
wolffd@0: wei.hu@intel.com.
wolffd@0: See also Philippe Leray's implementation of SEM.

Visualizing the graph

wolffd@0: wolffd@0: You can visualize an arbitrary graph (such as one learned using the wolffd@0: structure learning routines) with Matlab code contributed by wolffd@0: Ali wolffd@0: Taylan Cemgil wolffd@0: from the University of Nijmegen. wolffd@0: For static BNs, call it as follows: wolffd@0:
wolffd@0: draw_graph(bnet.dag);
wolffd@0: 
wolffd@0: For example, this is the output produced on a wolffd@0: random QMR-like model: wolffd@0:

wolffd@0: wolffd@0:

wolffd@0: If you install the excellent graphviz, an
wolffd@0: open-source graph visualization package from AT&T,
wolffd@0: you can create a much better visualization as follows

wolffd@0: graph_to_dot(bnet.dag)
wolffd@0: 
wolffd@0: This works by converting the adjacency matrix to a file suitable
wolffd@0: for input to graphviz (using the dot format),
wolffd@0: then converting the output of graphviz to postscript, and displaying the results using
wolffd@0: ghostview.
wolffd@0: You can do each of these steps separately for more control, as shown
wolffd@0: below (the first command is run from Matlab; dot and ghostview are run from the shell).
wolffd@0: graph_to_dot(bnet.dag, 'filename', 'foo.dot');
wolffd@0: dot -Tps foo.dot -o foo.ps
wolffd@0: ghostview foo.ps &
wolffd@0: 
wolffd@0: wolffd@0:

Constraint-based methods

wolffd@0: The IC algorithm (Pearl and Verma, 1991),
wolffd@0: and the faster, but otherwise equivalent, PC algorithm (Spirtes, Glymour, and Scheines 1993),
wolffd@0: perform many conditional independence tests,
wolffd@0: and combine these constraints into a
wolffd@0: PDAG to represent the whole
wolffd@0: Markov equivalence class.

wolffd@0: IC*/FCI extend IC/PC to handle latent variables: see below. wolffd@0: (IC stands for inductive causation; PC stands for Peter and Clark, wolffd@0: the first names of Spirtes and Glymour; FCI stands for fast causal wolffd@0: inference. wolffd@0: What we, following Pearl (2000), call IC* was called wolffd@0: IC in the original Pearl and Verma paper.) wolffd@0: For details, see wolffd@0:

wolffd@0: wolffd@0:

wolffd@0: The PC algorithm takes as arguments a function f, the number of nodes N,
wolffd@0: the maximum fan-in K, and additional arguments A which are passed to f.
wolffd@0: The function f(X,Y,S,A) returns 1 if X is conditionally independent of Y given S, and 0
wolffd@0: otherwise.
wolffd@0: For example, suppose we cheat by
wolffd@0: passing in a CI "oracle" which has access to the true DAG; the oracle
wolffd@0: tests for d-separation in this DAG, i.e.,
wolffd@0: f(X,Y,S) calls dsep(X,Y,S,dag). We can do this as follows.

wolffd@0: pdag = learn_struct_pdag_pc('dsep', N, max_fan_in, dag);
wolffd@0: 
wolffd@0: pdag(i,j) = -1 if there is definitely an i->j arc,
wolffd@0: and pdag(i,j) = 1 if there is either an i->j or an i<-j arc.

wolffd@0: Applied to the sprinkler network, this returns wolffd@0:

wolffd@0: pdag =
wolffd@0:      0     1     1     0
wolffd@0:      1     0     0    -1
wolffd@0:      1     0     0    -1
wolffd@0:      0     0     0     0
wolffd@0: 
wolffd@0: So as expected, we see that the V-structure at the W node is uniquely identified, wolffd@0: but the other arcs have ambiguous orientation. wolffd@0:

wolffd@0: We now give an example from p141 (1st edn) / p103 (2nd edn) of the SGS
wolffd@0: book.
wolffd@0: This example concerns the female orgasm.
wolffd@0: We are given a correlation matrix C between 7 measured factors (such
wolffd@0: as subjective experiences of coital and masturbatory experiences),
wolffd@0: derived from 281 samples, and want to learn a causal model of the
wolffd@0: data. We will not discuss the merits of this type of work here, but
wolffd@0: merely show how to reproduce the results in the SGS book.
wolffd@0: Their program,
wolffd@0: Tetrad,
wolffd@0: makes use of the Fisher Z-test for conditional
wolffd@0: independence, so we do the same:

wolffd@0: max_fan_in = 4;
wolffd@0: nsamples = 281;
wolffd@0: alpha = 0.05;
wolffd@0: pdag = learn_struct_pdag_pc('cond_indep_fisher_z', n, max_fan_in, C, nsamples, alpha);
wolffd@0: 
wolffd@0: In this case, the CI test is wolffd@0:
wolffd@0: f(X,Y,S) = cond_indep_fisher_z(X,Y,S,  C,nsamples,alpha)
wolffd@0: 
wolffd@0: The results match those of Fig 12a of SGS apart from two edge wolffd@0: differences; presumably this is due to rounding error (although it wolffd@0: could be a bug, either in BNT or in Tetrad). wolffd@0: This example can be found in the file BNT/examples/static/pc2.m. wolffd@0: wolffd@0:

wolffd@0: wolffd@0: The IC* algorithm (Pearl and Verma, 1991), wolffd@0: and the faster FCI algorithm (Spirtes, Glymour, and Scheines 1993), wolffd@0: are like the IC/PC algorithm, except that they can detect the presence wolffd@0: of latent variables. wolffd@0: See the file learn_struct_pdag_ic_star written by Tamar wolffd@0: Kushnir. The output is a matrix P, defined as follows wolffd@0: (see Pearl (2000), p52 for details): wolffd@0:

wolffd@0: % P(i,j) = -1 if there is either a latent variable L such that i <-L->j OR there is a directed edge from i->j.
wolffd@0: % P(i,j) = -2 if there is a marked directed i-*>j edge.
wolffd@0: % P(i,j) = P(j,i) = 1 if there is an undirected edge i--j.
wolffd@0: % P(i,j) = P(j,i) = 2 if there is a latent variable L such that i<-L->j.
wolffd@0: 
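wolffd@0: As a usage sketch (assuming learn_struct_pdag_ic_star takes the same
wolffd@0: arguments as learn_struct_pdag_pc, i.e., a CI test, the number of nodes,
wolffd@0: the maximum fan-in, and any extra arguments for the test):
wolffd@0: P = learn_struct_pdag_ic_star('cond_indep_fisher_z', n, max_fan_in, C, nsamples, alpha);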
wolffd@0: wolffd@0: wolffd@0:

Philippe Leray's structure learning package

wolffd@0: Philippe Leray has written a
wolffd@0: structure learning package that uses BNT.
wolffd@0: It currently (June 2003) has the following features:

Inference engines

wolffd@0: wolffd@0: Up until now, we have used the junction tree algorithm for inference. wolffd@0: However, sometimes this is too slow, or not even applicable. wolffd@0: In general, there are many inference algorithms each of which make wolffd@0: different tradeoffs between speed, accuracy, complexity and wolffd@0: generality. Furthermore, there might be many implementations of the wolffd@0: same algorithm; for instance, a general purpose, readable version, wolffd@0: and a highly-optimized, specialized one. wolffd@0: To cope with this variety, we treat each inference algorithm as an wolffd@0: object, which we call an inference engine. wolffd@0: wolffd@0:

wolffd@0: An inference engine is an object that contains a bnet and supports the wolffd@0: 'enter_evidence' and 'marginal_nodes' methods. The engine constructor wolffd@0: takes the bnet as argument and may do some model-specific processing. wolffd@0: When 'enter_evidence' is called, the engine may do some wolffd@0: evidence-specific processing. Finally, when 'marginal_nodes' is wolffd@0: called, the engine may do some query-specific processing. wolffd@0: wolffd@0:
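wolffd@0: For example, every engine is driven through the same three-step pattern
wolffd@0: (shown here with the junction tree engine; evidence is a cell array with
wolffd@0: observed values filled in and empty entries for hidden nodes):
wolffd@0: engine = jtree_inf_engine(bnet);                      % model-specific processing
wolffd@0: [engine, loglik] = enter_evidence(engine, evidence);  % evidence-specific processing
wolffd@0: marg = marginal_nodes(engine, i);                     % query-specific processing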

wolffd@0: The amount of work done when each stage is specified -- structure,
wolffd@0: parameters, evidence, and query -- depends on the engine. The cost of
wolffd@0: work done early in this sequence can be amortized. On the other hand,
wolffd@0: one can make better optimizations if one waits until later in the
wolffd@0: sequence.
wolffd@0: For example, the parameters might imply
wolffd@0: conditional independencies that are not evident in the graph structure,
wolffd@0: but can nevertheless be exploited; the evidence indicates which nodes
wolffd@0: are observed and hence can effectively be disconnected from the
wolffd@0: graph; and the query might indicate that large parts of the network
wolffd@0: are d-separated from the query nodes. (Since it is not the actual
wolffd@0: values of the evidence that matter, just which nodes are observed,
wolffd@0: many engines allow you to specify which nodes will be observed when they are constructed,
wolffd@0: i.e., before calling 'enter_evidence'. Some engines can still cope if
wolffd@0: the actual pattern of evidence is different, e.g., if there is missing
wolffd@0: data.)

wolffd@0: Although being maximally lazy (i.e., only doing work when a query is
wolffd@0: issued) may seem desirable,
wolffd@0: this is not always the most efficient approach.
wolffd@0: For example,
wolffd@0: when learning with EM, we need to call marginal_nodes N times, where N is the
wolffd@0: number of nodes. Variable elimination would end
wolffd@0: up repeating a lot of work
wolffd@0: each time marginal_nodes is called, making it inefficient for
wolffd@0: learning. The junction tree algorithm, by contrast, uses dynamic
wolffd@0: programming to avoid this redundant computation --- it calculates all
wolffd@0: marginals in two passes during 'enter_evidence', so calling
wolffd@0: 'marginal_nodes' takes constant time.

wolffd@0: We will discuss some of the inference algorithms implemented in BNT wolffd@0: below, and finish with a summary of all wolffd@0: of them. wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0:

Variable elimination

wolffd@0: wolffd@0: The variable elimination algorithm, also known as bucket elimination wolffd@0: or peeling, is one of the simplest inference algorithms. wolffd@0: The basic idea is to "push sums inside of products"; this is explained wolffd@0: in more detail wolffd@0: here. wolffd@0:

wolffd@0: The principle of distributing sums over products can be generalized wolffd@0: greatly to apply to any commutative semiring. wolffd@0: This forms the basis of many common algorithms, such as Viterbi wolffd@0: decoding and the Fast Fourier Transform. For details, see wolffd@0: wolffd@0:

wolffd@0: wolffd@0:

wolffd@0: Choosing an order in which to sum out the variables so as to minimize wolffd@0: computational cost is known to be NP-hard. wolffd@0: The implementation of this algorithm in wolffd@0: var_elim_inf_engine makes no attempt to optimize this wolffd@0: ordering (in contrast, say, to jtree_inf_engine, which uses a wolffd@0: greedy search procedure to find a good ordering). wolffd@0:

wolffd@0: Note: unlike most algorithms, var_elim does all its computational work wolffd@0: inside of marginal_nodes, not inside of wolffd@0: enter_evidence. wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0:
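wolffd@0: For example (a minimal sketch; the constructor is assumed to take just the bnet):
wolffd@0: engine = var_elim_inf_engine(bnet);
wolffd@0: engine = enter_evidence(engine, evidence);   % cheap: the real work is deferred
wolffd@0: m = marginal_nodes(engine, i);               % the variables are summed out here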

Global inference methods

wolffd@0: The simplest inference algorithm of all is to explicitly construct
wolffd@0: the joint distribution over all the nodes, and then to marginalize it.
wolffd@0: This is implemented in global_joint_inf_engine.
wolffd@0: Since the size of the joint is exponential in the
wolffd@0: number of discrete (hidden) nodes, this is not a very practical algorithm.
wolffd@0: It is included merely for pedagogical and debugging purposes.

wolffd@0: Three specialized versions of this algorithm have also been implemented, wolffd@0: corresponding to the cases where all the nodes are discrete (D), all wolffd@0: are Gaussian (G), and some are discrete and some Gaussian (CG). wolffd@0: They are called enumerative_inf_engine, wolffd@0: gaussian_inf_engine, wolffd@0: and cond_gauss_inf_engine respectively. wolffd@0:

wolffd@0: Note: unlike most algorithms, these global inference algorithms do all their computational work wolffd@0: inside of marginal_nodes, not inside of wolffd@0: enter_evidence. wolffd@0: wolffd@0: wolffd@0:
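wolffd@0: Because they are exact, these engines are handy for checking other engines on
wolffd@0: small networks. A sketch for an all-discrete network (approxeq is a small
wolffd@0: utility shipped with BNT, and .T is the table field of a discrete marginal;
wolffd@0: treat both as assumptions if your version differs):
wolffd@0: e1 = jtree_inf_engine(bnet);
wolffd@0: e2 = enumerative_inf_engine(bnet);
wolffd@0: e1 = enter_evidence(e1, evidence);
wolffd@0: e2 = enter_evidence(e2, evidence);
wolffd@0: m1 = marginal_nodes(e1, i);
wolffd@0: m2 = marginal_nodes(e2, i);
wolffd@0: approxeq(m1.T, m2.T)   % should return 1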

Quickscore

wolffd@0: wolffd@0: The junction tree algorithm is quite slow on the QMR network, wolffd@0: since the cliques are so big. wolffd@0: One simple trick we can use is to notice that hidden leaves do not wolffd@0: affect the posteriors on the roots, and hence do not need to be wolffd@0: included in the network. wolffd@0: A second trick is to notice that the negative findings can be wolffd@0: "absorbed" into the prior: wolffd@0: see the file wolffd@0: BNT/examples/static/mk_minimal_qmr_bnet for details. wolffd@0:

wolffd@0: wolffd@0: A much more significant speedup is obtained by exploiting special wolffd@0: properties of the noisy-or node, as done by the quickscore wolffd@0: algorithm. For details, see wolffd@0:

wolffd@0: wolffd@0: This has been implemented in BNT as a special-purpose inference wolffd@0: engine, which can be created and used as follows: wolffd@0:
wolffd@0: engine = quickscore_inf_engine(inhibit, leak, prior);
wolffd@0: engine = enter_evidence(engine, pos, neg);
wolffd@0: m = marginal_nodes(engine, i);
wolffd@0: 
wolffd@0: wolffd@0: wolffd@0:

Belief propagation

wolffd@0: wolffd@0: Even using quickscore, exact inference takes time that is exponential wolffd@0: in the number of positive findings. wolffd@0: Hence for large networks we need to resort to approximate inference techniques. wolffd@0: See for example wolffd@0: wolffd@0: The latter approximation wolffd@0: entails applying Pearl's belief propagation algorithm to a model even wolffd@0: if it has loops (hence the name loopy belief propagation). wolffd@0: Pearl's algorithm, implemented as pearl_inf_engine, gives wolffd@0: exact results when applied to singly-connected graphs wolffd@0: (a.k.a. polytrees, since wolffd@0: the underlying undirected topology is a tree, but a node may have wolffd@0: multiple parents). wolffd@0: To apply this algorithm to a graph with loops, wolffd@0: use pearl_inf_engine. wolffd@0: This can use a centralized or distributed message passing protocol. wolffd@0: You can use it as in the following example. wolffd@0:
wolffd@0: engine = pearl_inf_engine(bnet, 'max_iter', 30);
wolffd@0: engine = enter_evidence(engine, evidence);
wolffd@0: m = marginal_nodes(engine, i);
wolffd@0: 
wolffd@0: We found that this algorithm often converges, and when it does, it is often
wolffd@0: very accurate, but this depends on the precise setting of the
wolffd@0: parameter values of the network.
wolffd@0: (See the file BNT/examples/static/qmr1 to repeat the experiment for yourself.)
wolffd@0: Understanding when and why belief propagation converges/works
wolffd@0: is a topic of ongoing research.

wolffd@0: pearl_inf_engine can exploit special structure in noisy-or wolffd@0: and gmux nodes to compute messages efficiently. wolffd@0:

wolffd@0: belprop_inf_engine is like pearl, but uses potentials to wolffd@0: represent messages. Hence this is slower. wolffd@0:

wolffd@0: belprop_fg_inf_engine is like belprop, wolffd@0: but is designed for factor graphs. wolffd@0: wolffd@0: wolffd@0: wolffd@0:

Sampling

wolffd@0: BNT now (Mar '02) has two sampling (Monte Carlo) inference algorithms:
wolffd@0: likelihood weighting (likelihood_weighting_inf_engine) and Gibbs sampling
wolffd@0: (gibbs_sampling_inf_engine); see the summary table below.
wolffd@0: Note: To generate samples from a network (which is not the same as inference!),
wolffd@0: use sample_bnet.
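wolffd@0: For example, to draw 100 complete cases from the network (a sketch which
wolffd@0: assumes sample_bnet returns one case per call as a cell array with one entry
wolffd@0: per node, and that BNT's cell2num utility is on the path):
wolffd@0: nsamples = 100;
wolffd@0: samples = cell(N, nsamples);
wolffd@0: for m=1:nsamples
wolffd@0:   samples(:,m) = sample_bnet(bnet);   % one complete case from the joint
wolffd@0: end
wolffd@0: data = cell2num(samples);   % numeric N x nsamples array (all-discrete case)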

Summary of inference engines

wolffd@0: wolffd@0: wolffd@0: The inference engines differ in many ways. Here are wolffd@0: some of the major "axes": wolffd@0: wolffd@0: wolffd@0:

wolffd@0: In terms of topology, most engines handle any kind of DAG. wolffd@0: belprop_fg does approximate inference on factor graphs (FG), which wolffd@0: can be used to represent directed, undirected, and mixed (chain) wolffd@0: graphs. wolffd@0: (In the future, we plan to support exact inference on chain graphs.) wolffd@0: quickscore only works on QMR-like models. wolffd@0:

wolffd@0: In terms of node types: algorithms that use potentials can handle wolffd@0: discrete (D), Gaussian (G) or conditional Gaussian (CG) models. wolffd@0: Sampling algorithms can essentially handle any kind of node (distribution). wolffd@0: Other algorithms make more restrictive assumptions in exchange for wolffd@0: speed. wolffd@0:

wolffd@0: Finally, most algorithms are designed to give the exact answer. wolffd@0: The belief propagation algorithms are exact if applied to trees, and wolffd@0: in some other cases. wolffd@0: Sampling is considered approximate, even though, in the limit of an wolffd@0: infinite number of samples, it gives the exact answer. wolffd@0: wolffd@0:

wolffd@0: wolffd@0: Here is a summary of the properties wolffd@0: of all the engines in BNT which work on static networks. wolffd@0:

wolffd@0: wolffd@0:
Name                    Exact?   Node type?   Topology
belprop                 approx   D            DAG
belprop_fg              approx   D            factor graph
cond_gauss              exact    CG           DAG
enumerative             exact    D            DAG
gaussian                exact    G            DAG
gibbs                   approx   D            DAG
global_joint            exact    D,G,CG       DAG
jtree                   exact    D,G,CG       DAG
likelihood_weighting    approx   any          DAG
pearl                   approx   D,G          DAG
pearl                   exact    D,G          polytree
quickscore              exact    noisy-or     QMR
stab_cond_gauss         exact    CG           DAG
var_elim                exact    D,G,CG       DAG

Influence diagrams / decision making

wolffd@0: BNT implements an exact algorithm for solving LIMIDs (limited memory
wolffd@0: influence diagrams), described in
wolffd@0: LIMIDs explicitly show all information arcs, rather than implicitly
wolffd@0: assuming no forgetting. This allows them to model forgetful
wolffd@0: controllers.

wolffd@0: See the examples in BNT/examples/limids for details. wolffd@0: wolffd@0: wolffd@0: wolffd@0: wolffd@0:

DBNs, HMMs, Kalman filters and all that

wolffd@0: wolffd@0: Click here for documentation about how to wolffd@0: use BNT for dynamical systems and sequence data. wolffd@0: wolffd@0: wolffd@0: