function [bnet, LL, engine] = learn_params_em(engine, evidence, max_iter, thresh)
% LEARN_PARAMS_EM Set the parameters of each adjustable node to their ML/MAP values using batch EM.
% [bnet, LL, engine] = learn_params_em(engine, data, max_iter, thresh)
%
% data{i,l} is the value of node i in case l, or [] if hidden.
% Suppose you have L training cases in an O x L array, D, where O is the number of observed
% scalar nodes and N is the total number of nodes.
% Then you can create 'data' as follows, where onodes contains the indices of the observed nodes:
%   data = cell(N, L);
%   data(onodes,:) = num2cell(D);
% It is also possible for different sets of nodes to be observed in each case.
%
% We return the modified bnet and engine.
% To see the learned parameters for node i, use the construct
%   s = struct(bnet.CPD{i}); % violates object privacy
% LL is the learning curve: the vector of log-likelihood scores, one per iteration.
%
% max_iter specifies the maximum number of EM iterations. Default: 10.
%
% thresh specifies the threshold for stopping EM. Default: 1e-3.
% We stop when |f(t) - f(t-1)| / avg < thresh,
% where avg = (|f(t)| + |f(t-1)|)/2 and f is the log-likelihood.

if nargin < 3, max_iter = 10; end
if nargin < 4, thresh = 1e-3; end

verbose = 1;

loglik = 0;
previous_loglik = -inf;
converged = 0;
num_iter = 1;
LL = [];

while ~converged && (num_iter <= max_iter)
  [engine, loglik] = EM_step(engine, evidence);
  if verbose, fprintf('EM iteration %d, ll = %8.4f\n', num_iter, loglik); end
  num_iter = num_iter + 1;
  converged = em_converged(loglik, previous_loglik, thresh);
  previous_loglik = loglik;
  LL = [LL loglik];
end
if verbose, fprintf('\n'); end

bnet = bnet_from_engine(engine);

%%%%%%%%%

function [engine, loglik] = EM_step(engine, cases)

bnet = bnet_from_engine(engine); % the engine contains the old params, which are used for the E step
CPDs = bnet.CPD; % these are the new params that get maximized
num_CPDs = length(CPDs);
adjustable = zeros(1, num_CPDs);
for e=1:num_CPDs
  adjustable(e) = adjustable_CPD(CPDs{e});
end
adj = find(adjustable);
n = length(bnet.dag);

% Zero the expected sufficient statistics (ESS) of every adjustable CPD.
for e=adj(:)'
  CPDs{e} = reset_ess(CPDs{e});
end

% E step: accumulate the ESS (and the log-likelihood) over all training cases.
loglik = 0;
ncases = size(cases, 2);
for l=1:ncases
  evidence = cases(:,l);
  [engine, ll] = enter_evidence(engine, evidence);
  loglik = loglik + ll;
  hidden_bitv = zeros(1,n);
  hidden_bitv(isemptycell(evidence)) = 1;
  for i=1:n
    e = bnet.equiv_class(i);
    if adjustable(e)
      fmarg = marginal_family(engine, i);
      CPDs{e} = update_ess(CPDs{e}, fmarg, evidence, bnet.node_sizes, bnet.cnodes, hidden_bitv);
    end
  end
end

% M step: set each adjustable CPD to its ML/MAP estimate given the accumulated ESS.
for e=adj(:)'
  CPDs{e} = maximize_params(CPDs{e});
end

engine = update_engine(engine, CPDs);
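
%%%%%%%%%

% Example usage (a minimal sketch, not part of the original file): build a
% 2-node network, hide one node in every case, and fit the parameters with EM.
% mk_bnet, tabular_CPD (which gives a random CPT by default), sample_bnet and
% jtree_inf_engine are standard BNT functions; the network, case count, and
% variable names below are illustrative assumptions.
%
%   N = 2; dag = zeros(N,N); dag(1,2) = 1;            % node 1 -> node 2
%   bnet = mk_bnet(dag, [2 2]);                       % both nodes binary
%   for i=1:N, bnet.CPD{i} = tabular_CPD(bnet, i); end
%   ncases = 100;
%   data = cell(N, ncases);
%   for l=1:ncases, data(:,l) = sample_bnet(bnet); end
%   data(1,:) = {[]};                                 % node 1 is hidden in every case
%
%   bnet2 = mk_bnet(dag, [2 2]);                      % random restart for learning
%   for i=1:N, bnet2.CPD{i} = tabular_CPD(bnet2, i); end
%   engine = jtree_inf_engine(bnet2);
%   [bnet2, LL] = learn_params_em(engine, data, 10, 1e-3);
%   s = struct(bnet2.CPD{2});                         % s.CPT holds the learned table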