Compiling and running the PAMR wave example on vnfe[123]

NOTE: This build uses the PGI compilers and the mpich MPI library, which is the currently recommended practice. It is also assumed that you have DV running on your local workstation, and that, in your vnfe[123] shell, you have set DVHOST to the name of that workstation.
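For example, you would set DVHOST as follows (this is a csh/tcsh sketch; the hostname shown is a placeholder for your actual workstation's name):

```shell
# In your vnfe[123] shell (csh/tcsh syntax).
# 'mybox.physics.ubc.ca' is a placeholder -- substitute your workstation.
setenv DVHOST mybox.physics.ubc.ca
```

As with CVSROOT and CVS_RSH below, you may want to put this in your ~/.cshrc so it is set in every shell.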

First ensure that the CVSROOT and CVS_RSH environment variables have been set:

setenv CVSROOT ':ext:cvs@vnfe1.physics.ubc.ca:/home2/cvs'
setenv CVS_RSH '/usr/bin/ssh'
(It is recommended that you add the above lines to your ~/.cshrc.)

Also ensure that you have a ~/.rhosts file, which is needed to run MPI programs. You can create such a file via

% ~matt/scripts/mkvnrhosts > ~/.rhosts
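The resulting ~/.rhosts file is simply a list of trusted "hostname username" pairs, one per line, covering the cluster nodes. A hypothetical excerpt (the hostnames and the login name 'myuser' are placeholders, not the actual output of mkvnrhosts) might look like:

```shell
# Hypothetical ~/.rhosts contents; one "hostname username" pair per line.
# 'myuser' and the node names are placeholders for illustration only.
vn001.physics.ubc.ca myuser
vn002.physics.ubc.ca myuser
vn003.physics.ubc.ca myuser
```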

Now cd to some convenient working directory within the NFS-mounted filesystems, so that the directory is visible from all of the cluster nodes, and check out the pamr distribution:

% cvs co pamr
cvs server: Updating pamr
U pamr/.laliases
U pamr/KNOWN_ISSUES
       .
       .
       .
U pamr/test/Makefile.in
U pamr/test/test1.c
cvs server: Updating pamr/wave
% cd pamr
% source ~matt/scripts/soPGI-mpich
% configure --prefix=`pwd`
creating cache ./config.cache
checking for Unix flavour... LINUX_PG
checking for gcc... /usr/local/PGI/bin/mpicc
       .
       .
       .
creating amrd/Makefile
creating examples/wave/Makefile
creating doc/Makefile
% make
echo; echo "Making in src test amrd examples/wave doc"

Making in src test amrd examples/wave doc
for f in src test amrd examples/wave doc; do \
       .
       .
       .
ls -lt PAMR_ref.ps
make[1]: Leaving directory ...
% cd examples/wave
% cp ~matt/templates/pamr-wave.sh .
Run the job interactively on the desired number of processors (9 in this instance):
% pamr-wave.sh 9
Note that the pamr-wave.sh script uses another script, Mpirun, which automatically selects idle nodes and writes their names to the MPI machinefile. Once the program has been launched, it also starts remote xterms running top on the first and last machines listed in that file, so that you can monitor the progress of the parallel job. Type q in these windows to exit both top and the remote terminals themselves.
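The Mpirun wrapper itself is site-specific, but its core job can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Mpirun script: the host list, the program name ./pamr-wave, and the selection of the first few hosts as "idle" are all stand-ins. The real Mpirun probes nodes for idleness and also spawns the monitoring xterms.

```shell
# Hypothetical sketch of what an Mpirun-style wrapper does (NOT the real
# script): pick hosts, write them to a machinefile, and launch mpirun.

# Candidate hosts; in reality these would come from probing the cluster
# for idle nodes rather than a static list.
printf 'vn001\nvn002\nvn003\nvn004\n' > hosts.pool

np=2                      # number of processors requested
machinefile="machines.$$" # per-job machinefile, named by process id

# Select the first $np hosts from the pool as our "idle" nodes.
head -n "$np" hosts.pool > "$machinefile"

# Build (and, in this sketch, only display) the standard MPICH launch
# command; the real wrapper would execute it.
echo "mpirun -np $np -machinefile $machinefile ./pamr-wave"
```

Displaying rather than executing the final command keeps the sketch self-contained; on the cluster the wrapper would run it directly.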

Once the job has completed, you can process the .sdf output with DV:

% cd run_2d
% ls *sdf
wave_2d_L0_phi_tl2_0.sdf  wave_2d_L0_phi_tl2_3.sdf  wave_2d_L0_phi_tl2_6.sdf
wave_2d_L0_phi_tl2_1.sdf  wave_2d_L0_phi_tl2_4.sdf  wave_2d_L0_phi_tl2_7.sdf
wave_2d_L0_phi_tl2_2.sdf  wave_2d_L0_phi_tl2_5.sdf  wave_2d_L0_phi_tl2_8.sdf
% sdftodv *sdf
Then select Merge All Registers in DV, enable AMR in the Options panel, and visualize the results.
Maintained by choptuik@physics.ubc.ca. Supported by CIAR, CFI, and NSERC.