function beta = RecommendedTST(X,Y, nsweep,tol,xinitial,ro)

% function beta = RecommendedTST(X,Y, nsweep,tol,xinitial,ro)
% This function takes the measurement matrix, the measurements, and the
% number of iterations, and applies the TST algorithm with optimally
% tuned parameters to the problem. For more information you may refer
% to the paper "Optimally tuned iterative reconstruction algorithms for
% compressed sensing," by Arian Maleki and David L. Donoho.
%
% Inputs:
%  X        : measurement matrix; we assume that all the columns have
%             almost equal $\ell_2$ norms. The tuning has been done on
%             matrices with unit column norm.
%  Y        : measurement vector.
%  nsweep   : number of iterations. The default value is 300.
%  tol      : the algorithm stops once the relative prediction error,
%             i.e. ||Y-X*x||_2 / ||Y||_2, falls below tol. The default
%             value is 0.00001; if tol is set to zero, the algorithm
%             runs for the full nsweep iterations.
%  xinitial : optional initialization of the algorithm. All the results
%             reported in the paper use initialization at the zero
%             vector, which is the default. Enter 0 here to use the
%             default.
%  ro       : optional parameter specifying the sparsity level (as a
%             fraction of n). If not given, or if ro = 0, the algorithm
%             uses the default optimally tuned value.
%
% Outputs:
%  beta     : the estimated coefficients.
%
% References:
% For more information about this algorithm, and for the papers on
% related algorithms such as CoSaMP and SP, please refer to the paper
% mentioned above and the references therein.

% normalize so that the columns of X have (approximately) unit norm
colnorm=mean((sum(X.^2)).^(.5));
X=X./colnorm;
Y=Y./colnorm;
[n,p]=size(X);
delta=n/p;
if nargin<3
    nsweep=300;
end
if nargin<4
    tol=0.00001;
end
if nargin<5 || (isscalar(xinitial) && xinitial==0)
    xinitial = zeros(p,1);
end
if nargin<6 || ro==0
    % optimally tuned sparsity fraction as a function of delta = n/p
    ro=0.044417*delta^2 + 0.34142*delta + 0.14844;
end

k1=floor(ro*n);
k2=floor(ro*n);

% initialization
x1=xinitial;
I=[];

for sweep=1:nsweep
    % residual and correlation of each column with the residual
    r=Y-X*x1;
    c=X'*r;
    % first threshold: enlarge the support with the k2 columns most
    % correlated with the residual
    [csort, i_csort]=sort(abs(c));
    I=union(I,i_csort(end-k2+1:end));
    % least-squares fit on the enlarged support
    xt = X(:,I) \ Y;
    % second threshold: keep only the k1 largest coefficients
    [xtsort, i_xtsort]=sort(abs(xt));
    J=I(i_xtsort(end-k1+1:end));
    x1=zeros(p,1);
    x1(J)=xt(i_xtsort(end-k1+1:end));
    I=J;
    % stop once the relative prediction error is below tol
    if norm(Y-X*x1)/norm(Y) < tol
        break
    end
end

beta=x1;
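
% ---------------------------------------------------------------------
% Usage sketch (not part of the original file): a minimal example of
% calling RecommendedTST on synthetic data. The dimensions, sparsity
% level, and variable names below are illustrative assumptions, not
% values from the paper.
%
%   n = 200; p = 500; k = 20;
%   X  = randn(n, p) / sqrt(n);          % columns have ~unit l2 norm
%   x0 = zeros(p, 1);
%   x0(randperm(p, k)) = randn(k, 1);    % k-sparse ground truth
%   Y  = X * x0;                         % noiseless measurements
%   beta = RecommendedTST(X, Y);         % defaults: nsweep=300, tol=1e-5
%   norm(beta - x0) / norm(x0)           % relative recovery error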