Evaluate the convergence of our algorithms
We have four codes that should compute identical quantities in the limit as some computational resource parameter goes to infinity. In this experiment, we verify that all four algorithms converge as we increase the computational resources supplied. Convergence is measured as the change in the solution when a little more computational power is added.
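To make the convergence measure concrete, here is a small Python sketch (the drivers themselves are MATLAB; the 3-node matrix and the choice of power iterations as the "resource" are illustrative assumptions, not the paper's code). It records the 1-norm change in a PageRank-style solution each time one more unit of work is added:

```python
import numpy as np

# Toy illustration of the convergence measure: "convergence" is the
# 1-norm change in the solution when one more unit of computational
# resource (here, one more power iteration) is supplied.
P = np.array([[0.,  1., 0.],
              [0.5, 0., 1.],
              [0.5, 0., 0.]])   # small column-stochastic transition matrix (assumed)
v = np.full(3, 1/3)             # uniform teleportation vector
alpha = 0.85                    # fixed damping parameter for this sketch

x = v.copy()
changes = []
for k in range(50):
    x_next = alpha * (P @ x) + (1 - alpha) * v
    changes.append(np.linalg.norm(x_next - x, 1))
    x = x_next

# The change per added iteration decays geometrically (rate ~ alpha);
# the experiment below plots exactly this kind of per-step change.
print(changes[0], changes[-1])
```

Each algorithm in this experiment has its own resource knob (samples, terms, quadrature nodes), but the measurement is the same: solve with N and with N plus a bit, and take the norm of the difference.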
This driver is a specific call for the har500 matrix.
Contents
Experimental setup
This experiment should be run from the rapr/experiments/correctness directory. To reduce the computational load, we'll load the harvard500 graph and just take the largest strong component.
cwd = pwd; dirtail = 'experiments/correctness';
if ~strcmp(cwd(end-length(dirtail)+1:end), dirtail)
    warning('%s should be executed from rapr/%s\n', mfilename, dirtail);
end
load('../../data/harvard500.mat');
P = Pcc;
Add the necessary path
addpath('../../matlab');
Show convergence of each algorithm
We'll use a series of distributions and show how each algorithm converges. For each algorithm and each distribution, we look at how much the solution changes from one step to the next. Again, we use the same distributions we've been using throughout the paper.
Nmc = 1e5; Npd = Nmc;
gqNs = 0:10:100; pceNs = gqNs;
mcalg = 'direct'; pcealg = 'direct'; gqalg = 'direct';
du1 = alphadist('unif', 0.6, 0.9);             % uniform [0.6,0.9]
db1 = alphadist('beta', 2, 16, 0, 1);          % skewed right
db2 = alphadist('beta', 1, 1, 0.1, 0.9);       % equicentric
db3 = alphadist('beta', -0.5, -0.5, 0.2, 0.7); % bi-modal
ds = {du1, db1, db2, db3}; nds = length(ds);
results = [];
[results(1:nds).d] = deal(ds{:}); % initialize results
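For intuition about what these distributions look like, here is a Python sketch of sampling them (a hypothetical stand-in for alphadist; in particular, the assumption that alphadist's 'beta' parameters are Jacobi-style, i.e. Beta(a+1, b+1) rescaled to [l, r], is ours — it matches the bi-modal comment on the (-0.5, -0.5) case but is not confirmed by this driver):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for alphadist('beta', a, b, l, r): a Beta(a+1, b+1)
# variate rescaled from [0,1] to [l, r]. (Parameter convention is assumed.)
def sample_alpha_beta(a, b, l, r, n, rng):
    return l + (r - l) * rng.beta(a + 1, b + 1, size=n)

# Stand-in for alphadist('unif', l, r).
def sample_alpha_unif(l, r, n, rng):
    return rng.uniform(l, r, size=n)

du1 = sample_alpha_unif(0.6, 0.9, 10000, rng)              # uniform [0.6,0.9]
db3 = sample_alpha_beta(-0.5, -0.5, 0.2, 0.7, 10000, rng)  # bi-modal on [0.2,0.7]

print(du1.min(), du1.max(), db3.min(), db3.max())
```

Under this convention, (-0.5, -0.5) gives a Beta(0.5, 0.5) arcsine-type density, which piles mass at both endpoints — hence "bi-modal".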
Compute a Monte Carlo approximation and store the convergence results
tic;
for di=1:nds
    [ex stdx alphas conv] = mcrapr(P, Nmc, results(di).d, mcalg);
    results(di).ex.mc = ex; results(di).stdx.mc = stdx; results(di).conv.mc = conv;
end
fprintf('... %7i monte carlo samples took %f seconds\n', Nmc, toc);
save 'converge-results-har500.mat' results P % save intermediate results
... 100000 monte carlo samples took 738.411131 seconds
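The Monte Carlo idea can be sketched in a few lines of Python (a simplified analogue of mcrapr on a toy 3-node matrix — the matrix, orientation, and direct linear solve are our assumptions): draw each α from the damping distribution, solve the PageRank system once per draw, and average.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo estimate of E[x(alpha)] and Std[x(alpha)]: one linear
# solve per sampled alpha, then sample statistics over the solves.
P = np.array([[0.,  1., 0.],
              [0.5, 0., 1.],
              [0.5, 0., 0.]])     # column-stochastic toy matrix (assumed)
v = np.full(3, 1/3)
I = np.eye(3)

Nmc = 2000
alphas = rng.uniform(0.6, 0.9, Nmc)   # the du1 distribution
xs = np.array([np.linalg.solve(I - a * P, (1 - a) * v) for a in alphas])

ex   = xs.mean(axis=0)   # estimate of E[x]
stdx = xs.std(axis=0)    # estimate of Std[x]
print(ex.sum())          # each PageRank vector sums to 1, so ex does too
```

One solve per sample is exactly why 1e5 Monte Carlo samples took hundreds of seconds above, and why the error decays only like 1/sqrt(N).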
Compute a path damping approximation and store the convergence results
tic;
for di=1:nds
    [ex stdx pdcoeffs conv] = pdrapr(P, results(di).d, 0, Npd);
    results(di).ex.pd = ex; results(di).stdx.pd = stdx; results(di).conv.pd = conv;
end
fprintf('... %7i path damping iterations took %f seconds\n', Npd, toc);
save 'converge-results-har500.mat' results P % save intermediate results
Warning: not computing stdx
Warning: not computing stdx
Warning: not computing stdx
Warning: not computing stdx
... 100000 path damping iterations took 664.959450 seconds
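The path-damping approach can be sketched as follows (a simplified Python analogue of pdrapr, again on an assumed toy matrix): expand x(α) = (1-α) Σ_k α^k P^k v, so that E[x] = Σ_k (E[α^k] - E[α^(k+1)]) P^k v, and accumulate terms using the moments of the damping distribution. For α ~ Uniform[l, r] the moments have the closed form E[α^k] = (r^(k+1) - l^(k+1)) / ((k+1)(r - l)).

```python
import numpy as np

# Path-damping sketch: E[x] = sum_k (E[alpha^k] - E[alpha^(k+1)]) * P^k v.
# Each added term costs one matrix-vector product, and the tail decays
# like E[alpha^k], i.e. roughly r^k for alpha supported on [l, r].
P = np.array([[0.,  1., 0.],
              [0.5, 0., 1.],
              [0.5, 0., 0.]])   # column-stochastic toy matrix (assumed)
v = np.full(3, 1/3)
l, r = 0.6, 0.9                 # the du1 distribution

def moment(k):
    # k-th moment of Uniform[l, r]
    return (r**(k+1) - l**(k+1)) / ((k+1) * (r - l))

Npd = 200
ex = np.zeros(3)
pk_v = v.copy()                 # holds P^k v
for k in range(Npd):
    ex += (moment(k) - moment(k+1)) * pk_v
    pk_v = P @ pk_v

print(ex.sum())  # approaches 1 as Npd grows
```

Truncating at Npd terms leaves a tail of size about E[α^Npd], which explains why each extra iteration shrinks the solution change geometrically in the convergence plots.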
Compute a series of GQ approximations and explicitly compute the changes that result.
tic
for di=1:nds
    conv = zeros(length(gqNs), 3);
    d = results(di).d;
    for Ni=1:length(gqNs)
        [ex stdx] = gqrapr(P, gqNs(Ni)+1, d, gqalg);
        [ex1 stdx1] = gqrapr(P, gqNs(Ni)+2, d, gqalg);
        conv(Ni,:) = [gqNs(Ni), norm(ex-ex1,1), norm(stdx-stdx1,1)];
    end
    results(di).ex.gq = ex; results(di).stdx.gq = stdx; results(di).conv.gq = conv;
end
fprintf('... %i GQPR solves took %f seconds\n', sum(gqNs)+sum(gqNs+1), toc);
save 'converge-results-har500.mat' results P % save intermediate results
... 1111 GQPR solves took 8.610218 seconds
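A Gauss-quadrature analogue in Python (a sketch, not gqrapr itself; the toy matrix and the uniform du1 distribution are assumed): map Gauss-Legendre nodes from [-1, 1] onto [l, r], solve one PageRank system per node, and combine with the normalized weights, so E[x] ≈ Σ_i w_i x(α_i).

```python
import numpy as np

# Gauss-quadrature sketch for alpha ~ Uniform[l, r]: a handful of
# quadrature nodes replaces the 1e5 Monte Carlo samples, one linear
# solve per node.
P = np.array([[0.,  1., 0.],
              [0.5, 0., 1.],
              [0.5, 0., 0.]])   # column-stochastic toy matrix (assumed)
v = np.full(3, 1/3)
I = np.eye(3)
l, r = 0.6, 0.9

N = 11                                   # number of quadrature nodes
t, w = np.polynomial.legendre.leggauss(N)
alphas = l + (r - l) * (t + 1) / 2       # nodes mapped from [-1,1] to [l,r]
weights = w / 2                          # normalized against the uniform density

ex = sum(wi * np.linalg.solve(I - a * P, (1 - a) * v)
         for wi, a in zip(weights, alphas))
print(ex.sum())
```

Quadrature accuracy improves exponentially in the node count for smooth integrands, which is why the GQ runs above finish in seconds where Monte Carlo needed minutes.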
Compute a series of PCE approximations and explicitly compute the changes that result. (Note that this will be identical to the GQ results.)
tic
for di=1:nds
    conv = zeros(length(pceNs), 3);
    d = results(di).d;
    for Ni=1:length(pceNs)
        [ex stdx] = pcerapr(P, pceNs(Ni), d, pcealg);
        [ex1 stdx1] = pcerapr(P, pceNs(Ni)+1, d, pcealg);
        conv(Ni,:) = [pceNs(Ni), norm(ex-ex1,1), norm(stdx-stdx1,1)];
    end
    results(di).ex.pce = ex; results(di).stdx.pce = stdx; results(di).conv.pce = conv;
end
fprintf('... %i PCEPR solves took %f seconds\n', sum(pceNs)+sum(pceNs+1), toc);
Warning: stdx not computed when N=0
Warning: stdx not computed when N=0
Warning: stdx not computed when N=0
Warning: stdx not computed when N=0
... 1111 PCEPR solves took 79.656474 seconds
Save the results
save 'converge-results-har500.mat' results P
Plot the convergence results by algorithm.
Each distribution is drawn with its native colors from the previous plots, with points indicated by '.' for an expectation and '+' for a standard deviation. We do not es
algs = {'mc','pd','gq','pce'}; nas = length(algs);
ls = {'LineWidth',0.8}; ms = {'MarkerSize',15};
s = {'-.','-','--',':'};
c = {[1 0.5 0.5],[0.5 0.5 1],[0 0.75 0.5],[1 0 0.25]};
for ai=1:nas
    figure(ai); set(gcf,'Color','w','defaultaxesfontsize',12); clf;
    for di=1:nds
        cd = results(di).conv.(algs{ai});
        x = cd(:,1); y1 = cd(:,2);
        if log10(max(x)) > 2.5, pfun = 'loglog'; else pfun = 'semilogy'; end
        if size(cd,2) > 2
            h = feval(pfun, x, y1, ['.',s{di}], x, cd(:,3), ['+',s{di}]);
        else
            h = feval(pfun, x, y1, ['.',s{di}]);
        end
        set(h(1), ms{:}); hold on;
        set(h, 'Color', c{di}, ls{:});
        pbaspect([2.5 1 1]); ylim([1e-16 1]); box off;
        print(gcf, sprintf('converge-har500-%s.eps', algs{ai}), '-depsc2');
    end
end



