function CPD = softmax_CPD(bnet, self, varargin)
% SOFTMAX_CPD Make a softmax (multinomial logit) CPD
%
% To define this CPD precisely, let W be an (m x n) weight matrix;
% then we can define the following vector-valued function:
%
%   softmax: R^n |--> R^m
%   softmax(z,i-th) = exp(W(i,:)*z) / sum_k(exp(W(k,:)*z))
%
% (this constructor augments z with a one at the beginning to introduce an offset term (=bias, intercept))
% Now call the continuous (cts) and always observed (obs) parents X,
% the discrete parents (if any) Q, and this node Y. The discrete parent(s) are used just to index
% the parameter matrices (cf. conditional Gaussian nodes); that is:
%   prob(Y=i | X=x, Q=j) = softmax(x,i-th|j)
% where '|j' means that we are using the j-th (m x n) parameter matrix W(:,:,j).
% If there are no discrete parents, this is a regular softmax node.
% If Y is binary, this is a logistic (sigmoid) function.
%
% CPD = softmax_CPD(bnet, node_num, ...) will create a softmax CPD with random parameters,
% where node_num is the number of a node in this equivalence class.
%
% The following optional arguments can be specified in the form of name/value pairs:
% [default value in brackets]
% (Let ns(i) be the size of node i, X = ns(X), Y = ns(Y), Q1=ns(dps(1)), Q2=ns(dps(2)), ...
%  where dps are the discrete parents; if there are no discrete parents, we set Q1=1.)
%
% discrete - the discrete parents that we want to treat like the cts ones [ [] ].
%            This can be used to define a sigmoid belief network - see the reference below.
%            For example, suppose that Y has one cts parent X and two discrete ones, Q and C1, where:
%            -> Q is binary (1/2) and used just to index the parameters of 'self'
%            -> C1 is ternary (1/2/3) and treated as a cts node <=> its values enter the linear
%               part of the softmax function
%            then:
%              prob(Y|X=x, Q=q, C1=c1) = softmax(W(:,:,q)' * y)
%            where y = [1 | delta(C1,1) delta(C1,2) delta(C1,3) | x(:)']' and delta(Y,a)=indicator(Y=a).
% weights  - (w(:,j,a,b,...) - w(:,j',a,b,...)) is proportional to the decision boundary
%            between j,j' given Q1=a,Q2=b,... [ randn(X,Y,Q1,Q2,...) ]
% offset   - (b(j,a,b,...) - b(j',a,b,...)) is the offset to the decision boundary
%            between j,j' given Q1=a,Q2=b,... [ randn(Y,Q1,Q2,...) ]
%
% e.g., CPD = softmax_CPD(bnet, i, 'offset', zeros(ns(i),1));
%
% The following fields control the behavior of the M step, which uses
% a weighted version of Iteratively Reweighted Least Squares (WIRLS) if dps_as_cps=[], or
% a weighted SCG otherwise, as implemented in Netlab and modified by Pierpaolo Brutti.
%
% clamped     - 'yes' means don't adjust params during learning ['no']
% max_iter    - the maximum number of steps to take [10]
% verbose     - 'yes' means print the LL at each step of IRLS ['no']
% wthresh     - convergence threshold for weights [1e-2]
% llthresh    - convergence threshold for log likelihood [1e-2]
% approx_hess - 'yes' means approximate the Hessian for speed ['no']
%
% For backwards compatibility with BNT2, you can also specify the parameters in the following order:
%   softmax_CPD(bnet, self, w, b, clamped, max_iter, verbose, wthresh, llthresh, approx_hess)
%
% REFERENCES
% For details on sigmoid belief nets, see:
% - Neal (1992).
%   Connectionist learning of belief networks. Artificial Intelligence 56, pp. 71-113.
% - Saul, Jaakkola, Jordan (1996). Mean field theory for sigmoid belief networks.
%   Journal of Artificial Intelligence Research 4, pp. 61-76.
%
% For details on the M step, see:
% - K. Chen, L. Xu, H. Chi (1999). Improved learning algorithms for mixtures of experts in multiclass
%   classification. Neural Networks 12, pp. 1229-1252.
% - M.I. Jordan, R.A. Jacobs (1994). Hierarchical Mixtures of Experts and the EM algorithm.
%   Neural Computation 6, pp. 181-214.
% - S.R. Waterhouse, A.J. Robinson (1994). Classification Using Hierarchical Mixtures of Experts.
%   In Proc. IEEE Workshop on Neural Networks for Signal Processing IV, pp. 177-186.

if nargin==0
  % This occurs if we are trying to load an object from a file.
  CPD = init_fields;
  CPD = class(CPD, 'softmax_CPD', discrete_CPD(0, []));
  return;
elseif isa(bnet, 'softmax_CPD')
  % This might occur if we are copying an object.
  CPD = bnet;
  return;
end
CPD = init_fields;

assert(myismember(self, bnet.dnodes));
ns = bnet.node_sizes;
ps = parents(bnet.dag, self);
dps = myintersect(ps, bnet.dnodes);
cps = myintersect(ps, bnet.cnodes);

clamped = 0;
CPD = class(CPD, 'softmax_CPD', discrete_CPD(clamped, ns([ps self])));

dps_as_cpssz = 0;
dps_as_cps = [];
% determine if any discrete parents are to be treated as cts
if nargin >= 3 & isstr(varargin{1}) % might have passed in 'discrete'
  for i=1:2:length(varargin)
    if strcmp(varargin{i}, 'discrete')
      dps_as_cps = varargin{i+1};
      assert(myismember(dps_as_cps, dps));
      dps = mysetdiff(dps, dps_as_cps); % remove the dps treated as cts
      CPD.dps_as_cps.ndx = find_equiv_posns(dps_as_cps, ps);
      CPD.dps_as_cps.separator = [0 cumsum(ns(dps_as_cps(1:end-1)))]; % separators of the concatenated dps_as_cps dims
      dps_as_cpssz = sum(ns(dps_as_cps));
      break;
    end
  end
end
assert(~isempty(union(cps, dps_as_cps))); % there must be at least one cts or dps_as_cps parent
self_size = ns(self);
cpsz = sum(ns(cps));
glimsz = prod(ns(dps));
CPD.dpndx = find_equiv_posns(dps, ps); % contains only the indices of the 'pure' dps
CPD.cpndx = find_equiv_posns(cps, ps);

CPD.self = self;
CPD.solo = (length(ns)<=2);
CPD.sizes = bnet.node_sizes([ps self]);

% set default params
CPD.max_iter = 10;
CPD.verbose = 0;
CPD.wthresh = 1e-2;
CPD.llthresh = 1e-2;
CPD.approx_hess = 0;
CPD.glim = cell(1,glimsz);
for i=1:glimsz
  CPD.glim{i} = glm(dps_as_cpssz + cpsz, self_size, 'softmax');
end

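% NOTE (illustrative sketch, not part of the original interface): each
% CPD.glim{j} created above is a Netlab 'glm' structure with a softmax
% output, one per joint configuration j of the pure discrete parents.
% Assuming the standard Netlab glm fields, the parameters live in:
%   CPD.glim{j}.w1   % (dps_as_cpssz + cpsz) x self_size weight matrix
%   CPD.glim{j}.b1   % 1 x self_size bias (offset) row vector
% These correspond to the W(:,:,j) and the offset b described in the help text above.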
if nargin >= 3
  args = varargin;
  nargs = length(args);
  if ~isstr(args{1})
    % softmax_CPD(bnet, self, w, b, clamped, max_iter, verbose, wthresh, llthresh, approx_hess)
    if nargs >= 1 & ~isempty(args{1}), CPD = set_fields(CPD, 'weights', args{1}); end
    if nargs >= 2 & ~isempty(args{2}), CPD = set_fields(CPD, 'offset', args{2}); end
    if nargs >= 3 & ~isempty(args{3}), CPD = set_clamped(CPD, args{3}); end
    if nargs >= 4 & ~isempty(args{4}), CPD.max_iter = args{4}; end
    if nargs >= 5 & ~isempty(args{5}), CPD.verbose = args{5}; end
    if nargs >= 6 & ~isempty(args{6}), CPD.wthresh = args{6}; end
    if nargs >= 7 & ~isempty(args{7}), CPD.llthresh = args{7}; end
    if nargs >= 8 & ~isempty(args{8}), CPD.approx_hess = args{8}; end
  else
    CPD = set_fields(CPD, args{:});
  end
end

% sufficient statistics
% Since the softmax distribution is not in the exponential family, we must store all the raw data.
CPD.parent_vals = []; % X(l,:) = value of cts parents in l'th example
CPD.self_vals = [];   % Y(l,:) = value of self in l'th example

CPD.eso_weights = []; % weights used by the WIRLS algorithm

% For BIC
CPD.nsamples = 0;
if ~adjustable_CPD(CPD),
  CPD.nparams = 0;
else
  [W, b] = extract_params(CPD);
  CPD.nparams = prod(size(W)) + prod(size(b));
end

%%%%%%%%%%%

function CPD = init_fields()
% This ensures we define the fields in the same order
% no matter whether we load an object from a file,
% or create it from scratch. (Matlab requires this.)

CPD.glim = {};
CPD.self = [];
CPD.solo = [];
CPD.max_iter = [];
CPD.verbose = [];
CPD.wthresh = [];
CPD.llthresh = [];
CPD.approx_hess = [];
CPD.sizes = [];
CPD.parent_vals = [];
CPD.eso_weights = [];
CPD.self_vals = [];
CPD.nsamples = [];
CPD.nparams = [];
CPD.dpndx = [];
CPD.cpndx = [];
CPD.dps_as_cps.ndx = [];
CPD.dps_as_cps.separator = [];
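
%%%%%%%%%%%
% Worked example (a hedged sketch; the numbers and variable names below are
% illustrative, not part of the class definition). For a node Y with
% self_size = 3, one cts parent X with ns(X) = 2, and no discrete parents,
% prob(Y=i | X=x) can be checked by hand from the softmax definition above:
%
%   W = [1 0 -1; 0 1 1];          % hypothetical (2 x 3) weight matrix
%   b = [0 0 0];                  % hypothetical (1 x 3) offset
%   x = [0.5; -0.5];              % a value of the cts parent
%   a = x'*W + b;                 % one activation per value of Y
%   p = exp(a) / sum(exp(a));     % softmax; p(i) = prob(Y=i | X=x), sums to 1
%
% With discrete parents Q, the same computation is repeated with the q-th
% parameter set, i.e. the parameters indexed by Q=q.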