annotate toolboxes/FullBNT-1.0.7/HMM/mdp_sample.m @ 0:e9a9cd732c1e tip

first hg version after svn
author wolffd
date Tue, 10 Feb 2015 15:05:51 +0000
function state = sample_mdp(prior, trans, act)
% SAMPLE_MDP Sample a sequence of states from a Markov Decision Process.
% state = sample_mdp(prior, trans, act)
%
% Inputs:
% prior(i) = Pr(Q(1)=i)
% trans{a}(i,j) = Pr(Q(t)=j | Q(t-1)=i, A(t)=a)
% act(t) = A(t), the action taken at step t (act(1) is ignored)
%
% Output:
% state is a vector of length T=length(act)

len = length(act);
state = zeros(1,len);
state(1) = sample_discrete(prior); % draw the initial state from the prior
for t=2:len
  % draw the next state from the transition row selected by the
  % current state and the action taken at time t
  state(t) = sample_discrete(trans{act(t)}(state(t-1),:));
end
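For readers without MATLAB, the same sampling procedure can be sketched in Python. This is a hypothetical translation, not part of BNT: the `sample_discrete` helper below stands in for BNT's function of the same name (from KPMtools), and 1-based state/action labels are kept to match the MATLAB conventions in the docstring above.

```python
import numpy as np


def sample_discrete(p, rng):
    """Draw a 1-based index from the discrete distribution p.

    Stand-in for BNT's sample_discrete; returns i with probability p[i-1].
    """
    p = np.asarray(p, dtype=float)
    return int(rng.choice(len(p), p=p / p.sum())) + 1


def sample_mdp(prior, trans, act, rng=None):
    """Sample a state sequence from an MDP under a fixed action sequence.

    prior[i-1]         = Pr(Q(1)=i)
    trans[a][i-1, j-1] = Pr(Q(t)=j | Q(t-1)=i, A(t)=a)
    act[t-1]           = A(t); act[0] is ignored, as in the MATLAB code.
    """
    rng = rng or np.random.default_rng()
    T = len(act)
    state = np.zeros(T, dtype=int)
    state[0] = sample_discrete(prior, rng)  # initial state from the prior
    for t in range(1, T):
        # transition row for the current state under action act[t]
        row = trans[act[t]][state[t - 1] - 1, :]
        state[t] = sample_discrete(row, rng)
    return state
```

Here `trans` is a dict keyed by action label (playing the role of the MATLAB cell array `trans{a}`), so `trans[a]` is the transition matrix for action `a`.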