camir-ismir2012: toolboxes/FullBNT-1.0.7/HMM/mdp_sample.m
changeset 0:cc4b1211e677 (tip): initial commit to HG from
author: Daniel Wolff
date: Fri, 19 Aug 2016 13:07:06 +0200
function state = sample_mdp(prior, trans, act)
% SAMPLE_MDP Sample a sequence of states from a Markov Decision Process.
% state = sample_mdp(prior, trans, act)
%
% Inputs:
% prior(i) = Pr(Q(1)=i)
% trans{a}(i,j) = Pr(Q(t)=j | Q(t-1)=i, A(t)=a)
% act(t) = A(t), so act(1) is ignored
%
% Output:
% state is a vector of length T=length(act)

len = length(act);
state = zeros(1,len);
state(1) = sample_discrete(prior);
for t=2:len
  state(t) = sample_discrete(trans{act(t)}(state(t-1),:));
end
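For readers without MATLAB, the same sampling logic can be sketched in Python. This is not part of BNT; `sample_discrete` below is a hypothetical stand-in for BNT's helper of the same name (inverse-CDF sampling), and 1-based state indices are kept to mirror the MATLAB convention.

```python
import random

def sample_discrete(probs):
    # Hypothetical stand-in for BNT's sample_discrete: draw a 1-based
    # index from the discrete distribution given by probs.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs, start=1):
        cum += p
        if r < cum:
            return i
    return len(probs)  # guard against floating-point round-off

def sample_mdp(prior, trans, act):
    # prior[i-1]         = Pr(Q(1)=i)
    # trans[a][i-1][j-1] = Pr(Q(t)=j | Q(t-1)=i, A(t)=a)
    # act[t-1]           = A(t); act[0] is ignored, as in the MATLAB code
    state = [sample_discrete(prior)]
    for t in range(1, len(act)):
        a = act[t]
        row = trans[a][state[-1] - 1]  # transition row for the current state
        state.append(sample_discrete(row))
    return state

# Degenerate (deterministic) distributions make the output predictable:
# start in state 1, action 1 always flips between states 1 and 2.
prior = [1.0, 0.0]
trans = {1: [[0.0, 1.0], [1.0, 0.0]]}
print(sample_mdp(prior, trans, [1, 1, 1, 1]))  # [1, 2, 1, 2]
```

With non-degenerate distributions the output is random, so in practice one would seed the generator (`random.seed(...)`) for reproducibility.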