O = 3; % number of observation symbols
Q = 2; % number of hidden states

% "true" parameters
prior0 = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
obsmat0 = mk_stochastic(rand(Q,O));

% training data
T = 5;    % length of each sequence
nex = 10; % number of sequences
data = dhmm_sample(prior0, transmat0, obsmat0, T, nex);

% initial guess of parameters
prior1 = normalise(rand(Q,1));
transmat1 = mk_stochastic(rand(Q,Q));
obsmat1 = mk_stochastic(rand(Q,O));

% improve guess of parameters using EM
[LL, prior2, transmat2, obsmat2] = dhmm_em(data, prior1, transmat1, obsmat1, 'max_iter', 5);
LL

% use model to compute log likelihood
loglik = dhmm_logprob(data, prior2, transmat2, obsmat2)
% loglik is slightly different from LL(end), since it is computed after the final M step