Building PAMR/AMRD and running the wave example on orcinus

NOTE: This page illustrates the full process of building the PAMR/AMRD libraries from source and then running the wave example via batch submission. It is provided primarily as a template should you want to install the software on a system where Matt has not already installed the libraries. In particular, on orcinus (and other Westgrid / Compute Canada systems) you should NOT build and use your own copy of the libraries; rather, use Matt's installation per the example HERE.

Commands that you will need to type are in bold. It is also assumed that DV is running on your local workstation, and that in your orcinus shell you have set DVHOST to the name of said workstation, e.g.
% setenv DVHOST bh0.phas.ubc.ca 
First ensure that the CVSROOT and CVS_RSH environment variables have been set as follows:
% setenv CVSROOT ':ext:cvs@bh1.phas.ubc.ca:/home/cvs'
% setenv CVS_RSH '/usr/bin/ssh'
(it is recommended that you insert the above in ~/.cshrc or ~/.cshrc.user).
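For example, the corresponding lines in ~/.cshrc (or ~/.cshrc.user) would simply be the same setenv commands, optionally together with the DVHOST setting mentioned above (substitute the name of your own workstation):

setenv CVSROOT ':ext:cvs@bh1.phas.ubc.ca:/home/cvs'
setenv CVS_RSH '/usr/bin/ssh'
setenv DVHOST bh0.phas.ubc.ca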

Now cd to some convenient working directory and check out the pamr distribution

% cvs co pamr
cvs server: Updating pamr
U pamr/.laliases
U pamr/KNOWN_ISSUES
       .
       .
       .
U pamr/test/Makefile.in
U pamr/test/test1.c
cvs server: Updating pamr/wave

Once the check-out is complete, change to the pamr directory and build the distribution: note that the source command sets some environment variables needed for proper compilation of the software.
% cd pamr
% source ~matt/scripts/soINTEL-mpi-orc
MPI_HOME=/global/software/openmpi-1.4.3/intel/
.
.
.
CC = /global/software/openmpi-1.4.3/intel//bin/mpicc
LDFLAGS = -O3 ...
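
As an optional sanity check, and assuming the script exports the variables it prints (e.g. MPI_HOME and CC) into your shell rather than merely echoing them, you can verify that they are set before configuring:
% echo $MPI_HOME
% echo $CC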

% ./configure --prefix=`pwd`
checking for gcc... /global/software/openmpi-1.4.3/intel//bin/mpicc
checking for C compiler default output file name... a.out
.
.
.
config.status: creating w/examples/wave/Makefile
config.status: creating w/examples/nbs/Makefile
config.status: creating w/doc/Makefile

% make
for f in src amrd doc test examples/wave examples/nbs w; do \
(cd $f; make install) \
done
.
.
.
Made all ... no installation
make[2]: Leaving directory `/global/home/matt/tmp/pamr/w/examples/nbs'
make[1]: Leaving directory `/global/home/matt/tmp/pamr/w'

Now change to the examples/wave directory---it should contain an executable, wave

% cd examples/wave
% ls -l wave
-rwxr-xr-x 1 matt matt 858605 Jun 13 10:40 wave*
Change to the run_2d subdirectory
% cd run_2d
Copy an appropriate PBS batch submission file for the example, and submit the job---which uses 8 processors---to the batch queue using qsub
% cp ~matt/templates/pamr-wave.pbs .
% qsub pamr-wave.pbs
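For reference, should you ever need to write such a batch script yourself on another system, a minimal PBS script for this 8-process run might look roughly like the following. This is only a sketch, NOT the contents of ~matt/templates/pamr-wave.pbs: the resource request, walltime and, in particular, the parameter-file name (wave_2d.param) are illustrative assumptions that will need to be adapted.

#!/bin/tcsh
#PBS -N pamr-wave
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:30:00
# run from the directory the job was submitted from (i.e. run_2d)
cd $PBS_O_WORKDIR
# launch the wave executable (one directory up) on 8 processes;
# the parameter-file name below is hypothetical
mpiexec -np 8 ../wave wave_2d.param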
Wait for the batch job to finish: you can use the qstat -u $LOGNAME command to monitor the status of your job, but you can also tell that the job has completed by the appearance of the standard output and standard error files associated with the run. These will have names of the form pamr-wave.o[Job id] and pamr-wave.e[Job id], where [Job id] is the numeric part of the job ID returned by the qsub command.
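For example, while the job is queued or running:
% qstat -u $LOGNAME
and, once it has finished, the standard output and error files should appear in the working directory:
% ls pamr-wave.o* pamr-wave.e*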

Once the run has completed, the directory should contain 8 .sdf  files, one for each of the 8 processors that were used in the sample computation.
% cd run_2d
% ls *sdf
wave_2d_L0_phi_tl2_0.sdf wave_2d_L0_phi_tl2_3.sdf wave_2d_L0_phi_tl2_6.sdf
wave_2d_L0_phi_tl2_1.sdf wave_2d_L0_phi_tl2_4.sdf wave_2d_L0_phi_tl2_7.sdf
wave_2d_L0_phi_tl2_2.sdf wave_2d_L0_phi_tl2_5.sdf
Send all of these to DV using sdftodv:
% sdftodv *sdf
Then select Merge -> All Registers in DV, enable AMR in the Options panel, and visualize the results.