DL/Majorization Minimization DL/mm1.m @ 155:b14209313ba4 (branch ivand_dev)
Integration of Majorization Minimisation Dictionary Learning
author: Ivan Damnjanovic lnx <ivan.damnjanovic@eecs.qmul.ac.uk>
date:   Mon, 22 Aug 2011 11:46:35 +0100
comparison with parent: 154:0de08f68256b -> 155:b14209313ba4
function [unhat, er] = mm1(Phi, x, u0, to, lambda, maxIT, eps, map)
%% Iterative Soft Thresholding (with optional debiasing)
%
%   Phi    = Normalized dictionary
%   x      = Signal; can be a vector or a matrix
%   u0     = Initial guess for the coefficients
%   to     = 1/(step size); must be larger than the spectral norm of Phi
%   lambda = Lagrange multiplier (regulates shrinkage)
%   maxIT  = Maximum number of iterations
%   eps    = Stopping criterion for iterative soft thresholding and MM dictionary update
%   map    = Debiasing: 0 = no, 1 = yes
%   unhat  = Updated coefficients
%   er     = Objective cost
%%
cont = 1;
in = 1;
un = u0;
c1 = (1/to^2)*Phi'*x;
c2 = (1/to^2)*(Phi'*Phi);

while (cont && (in <= maxIT))
    unold = un;
    % Soft thresholding: gradient step on the quadratic term, then shrinkage
    alphap = un + c1 - c2*un;
    un = (alphap - (lambda/(2*to^2))*sign(alphap)).*(abs(alphap) >= (lambda/(2*to^2)));
    in = in + 1;
    cont = sum(sum((unold-un).^2)) > eps;
end

if map == 1
    % Debiasing: re-fit the nonzero coefficients of each column by
    % regularized least squares on the selected support
    [uN, uM] = size(un);
    unhat = zeros(uN, uM);
    for l = 1:uM
        unz = (abs(un(:,l)) > 0);   % support of the l-th column
        PhiS = Phi(:,unz);          % dictionary restricted to the support
        unt = (PhiS'*PhiS + 0.0001*eye(sum(unz))) \ (PhiS'*x(:,l));
        unhat(unz,l) = unt;
    end
else
    unhat = un;
end

% Cost function calculation
if map == 1
    er = sum(sum((Phi*unhat-x).^2)) + lambda*sum(sum(abs(unhat)>0));   % l_0 cost
else
    er = sum(sum((Phi*unhat-x).^2)) + lambda*sum(sum(abs(unhat)));     % l_1 cost
end