function [beta,post,lli] = logistK(x,y,w,beta)
% [beta,post,lli] = logistK(x,y,w,beta)
%
% k-class logistic regression with optional sample weights
%
% k = number of classes
% n = number of samples
% d = dimensionality of samples
%
% INPUT
%   x       dxn matrix of n input column vectors
%   y       kxn vector of class assignments
%   [w]     1xn vector of sample weights
%   [beta]  dxk matrix of model coefficients
%
% OUTPUT
%   beta    dxk matrix of fitted model coefficients
%           (beta(:,k) are fixed at 0)
%   post    kxn matrix of fitted class posteriors
%   lli     log likelihood
%
% Let p(i,j) = exp(beta(:,j)'*x(:,i)).
% The class j posterior for observation i is:
%   post(j,i) = p(i,j) / (p(i,1) + ... + p(i,k))
%
% See also logistK_eval.
%
% David Martin
% May 3, 2002

% Copyright (C) 2002 David R. Martin
%
% This program is free software; you can redistribute it and/or
% modify it under the terms of the GNU General Public License as
% published by the Free Software Foundation; either version 2 of the
% License, or (at your option) any later version.
%
% This program is distributed in the hope that it will be useful, but
% WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
% General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program; if not, write to the Free Software
% Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
% 02111-1307, USA, or see http://www.gnu.org/copyleft/gpl.html.
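% EXAMPLE (illustrative sketch only, not part of the original file;
% the data below are made up for demonstration):
%
%   % two linearly separable 2-D classes; append a row of ones to x
%   % so the last coefficient of each beta(:,j) acts as a bias term
%   x = [[randn(2,50)-1, randn(2,50)+1]; ones(1,100)];      % 3x100
%   y = [ones(1,50), zeros(1,50); zeros(1,50), ones(1,50)]; % 2x100
%   [beta,post,lli] = logistK(x,y);
%   [mval,pred] = max(post,[],1);   % hard class predictions, 1xn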

% TODO - this code would be faster if x were transposed

error(nargchk(2,4,nargin));

debug = 0;
if debug > 0,
  h = figure(1);
  set(h,'DoubleBuffer','on');
end

% get sizes
[d,nx] = size(x);
[k,ny] = size(y);

% check sizes
if k < 2,
  error('Input y must encode at least 2 classes.');
end
if nx ~= ny,
  error('Inputs x,y not the same length.');
end

n = nx;

% make sure class assignments have unit L1-norm
sumy = sum(y,1);
if any(abs(1-sumy) > eps),
  for i = 1:k, y(i,:) = y(i,:) ./ sumy; end
end
clear sumy;

% if sample weights weren't specified, set them to 1
if nargin < 3,
  w = ones(1,n);
end

% normalize sample weights so max is 1
w = w / max(w);

% if starting beta wasn't specified, initialize randomly
if nargin < 4,
  beta = 1e-3*rand(d,k);
  beta(:,k) = 0; % fix beta for class k at zero
else
  if sum(beta(:,k)) ~= 0,
    error('beta(:,k) ~= 0');
  end
end

stepsize = 1;
minstepsize = 1e-2;

post = computePost(beta,x);
lli = computeLogLik(post,y,w);

for iter = 1:100,
  %disp(sprintf(' logist iter=%d lli=%g',iter,lli));
  vis(x,y,beta,lli,d,k,iter,debug);

  % gradient and hessian
  [g,h] = derivs(post,x,y,w);

  % make sure Hessian is well conditioned
  if rcond(h) < eps,
    % condition with Levenberg-Marquardt method
    for i = -16:16,
      h2 = h .* ((1 + 10^i)*eye(size(h)) + (1-eye(size(h))));
      if rcond(h2) > eps, break, end
    end
    if rcond(h2) < eps,
      warning(['Stopped at iteration ' num2str(iter) ...
               ' because Hessian can''t be conditioned']);
      break
    end
    h = h2;
  end

  % save lli before update
  lli_prev = lli;

  % Newton-Raphson with step-size halving
  while stepsize >= minstepsize,
    % Newton-Raphson update step
    step = stepsize * (h \ g);
    beta2 = beta;
    beta2(:,1:k-1) = beta2(:,1:k-1) - reshape(step,d,k-1);

    % get the new log likelihood
    post2 = computePost(beta2,x);
    lli2 = computeLogLik(post2,y,w);

    % if the log likelihood increased, then stop
    if lli2 > lli,
      post = post2; lli = lli2; beta = beta2;
      break
    end

    % otherwise, reduce step size by half
    stepsize = 0.5 * stepsize;
  end

  % stop if the average log likelihood has gotten small enough
  if 1-exp(lli/n) < 1e-2, break, end

  % stop if the log likelihood changed by a small enough fraction
  dlli = (lli_prev-lli) / lli;
  if abs(dlli) < 1e-3, break, end

  % stop if the step size has gotten too small
  if stepsize < minstepsize, break, end

  % stop if the log likelihood has decreased; this shouldn't happen
  if lli < lli_prev,
    warning(['Stopped at iteration ' num2str(iter) ...
             ' because the log likelihood decreased from ' ...
             num2str(lli_prev) ' to ' num2str(lli) '.' ...
             ' This may be a bug.']);
    break
  end
end

if debug > 0,
  vis(x,y,beta,lli,d,k,iter,2);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% class posteriors
function post = computePost(beta,x)
[d,n] = size(x);
[d,k] = size(beta);
post = zeros(k,n);
bx = zeros(k,n);
for j = 1:k,
  bx(j,:) = beta(:,j)'*x;
end
% compute each posterior from differences of the linear responses,
% which avoids overflow in exp()
for j = 1:k,
  post(j,:) = 1 ./ sum(exp(bx - repmat(bx(j,:),k,1)),1);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% log likelihood
function lli = computeLogLik(post,y,w)
[k,n] = size(post);
lli = 0;
for j = 1:k,
  lli = lli + sum(w.*y(j,:).*log(post(j,:)+eps));
end
if isnan(lli),
  error('lli is nan');
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% gradient and hessian
%% These are computed in what seems a verbose manner, but it is
%% done this way to use minimal memory. x should be transposed
%% to make it faster.
function [g,h] = derivs(post,x,y,w)

[k,n] = size(post);
[d,n] = size(x);

% first derivative of likelihood w.r.t. beta
g = zeros(d,k-1);
for j = 1:k-1,
  wyp = w .* (y(j,:) - post(j,:));
  for ii = 1:d,
    g(ii,j) = x(ii,:) * wyp';
  end
end
g = reshape(g,d*(k-1),1);

% hessian of likelihood w.r.t. beta
h = zeros(d*(k-1),d*(k-1));
for i = 1:k-1, % diagonal blocks
  wt = w .* post(i,:) .* (1 - post(i,:));
  hii = zeros(d,d);
  for a = 1:d,
    wxa = wt .* x(a,:);
    for b = a:d,
      hii_ab = wxa * x(b,:)';
      hii(a,b) = hii_ab;
      hii(b,a) = hii_ab;
    end
  end
  h( (i-1)*d+1 : i*d , (i-1)*d+1 : i*d ) = -hii;
end
for i = 1:k-1, % off-diagonal blocks
  for j = i+1:k-1,
    wt = w .* post(j,:) .* post(i,:);
    hij = zeros(d,d);
    for a = 1:d,
      wxa = wt .* x(a,:);
      for b = a:d,
        hij_ab = wxa * x(b,:)';
        hij(a,b) = hij_ab;
        hij(b,a) = hij_ab;
      end
    end
    h( (i-1)*d+1 : i*d , (j-1)*d+1 : j*d ) = hij;
    h( (j-1)*d+1 : j*d , (i-1)*d+1 : i*d ) = hij;
  end
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% debug/visualization
function vis (x,y,beta,lli,d,k,iter,debug)

if debug<=0, return, end

disp(['iter=' num2str(iter) ' lli=' num2str(lli)]);
if debug<=1, return, end

if d~=3 | k>10, return, end

figure(1);
res = 100;
r = abs(max(max(x)));
dom = linspace(-r,r,res);
[px,py] = meshgrid(dom,dom);
xx = px(:); yy = py(:);
points = [xx' ; yy' ; ones(1,res*res)];
func = zeros(k,res*res);
for j = 1:k,
  func(j,:) = exp(beta(:,j)'*points);
end
[mval,ind] = max(func,[],1);
hold off;
im = reshape(ind,res,res);
imagesc(xx,yy,im);
hold on;
syms = {'w.' 'wx' 'w+' 'wo' 'w*' 'ws' 'wd' 'wv' 'w^' 'w<'};
for j = 1:k,
  [mval,ind] = max(y,[],1);
  ind = find(ind==j);
  plot(x(1,ind),x(2,ind),syms{j});
end
pause(0.1);

% eof