On OeRC (Oxford e-Research Centre) supercomputing clusters such as HAL and SAL
Getting onto the machine
For access to HAL, SAL, or any of the other OeRC machines, you will need to register with Advanced Research Computing (ARC, formerly the Oxford Supercomputing Centre). The easiest way to join is to register for a user account with an existing project - take a look at the list of projects on the registration page and talk to the person responsible for the one you would like to join.
ARC runs a few training courses each year for new users. The notes for these courses are available online, and might be worth a look.
Setting the environment
You will need SCons and the Intel compiler to compile code, and RNV for converting CellML files into Chaste-compatible cell models.
As of February 2014, the Intel compiler is set up automatically, but SCons still needs to be added. Another required dependency, Amara, is available once you load the Python 2.6 module.
To set all of these up, add the following to your $HOME/.bashrc file:
module add scons
module add python/2.6
# Path for the PyCML Python helper
export PATH=${PATH}:/system/software/hal/lib/rnv-1.7.8/
# Libraries for running Chaste
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/system/software/redqueen/libs/boost-1_45_0/lib:/system/software/hal/lib/xerces-c/lib:/home/system/software/redqueen/libs/szip-2.1/lib:${DATA}/Chaste/lib:/system/software/linux-x86_64/lib/vtk/5.10.1/lib/vtk-5.10
The .bashrc file isn't sourced automatically when you log in, so remember to run source ~/.bashrc yourself before compiling.
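A quick sanity check, after sourcing the file, is to ask the shell where each tool now lives; the exact output will vary, and these checks are only illustrative:

source ~/.bashrc
which scons                 # should report the module-provided SCons
which rnv                   # should point into the rnv-1.7.8 directory added above
python -c "import amara"    # prints nothing if Amara is importable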
Getting Chaste
It makes sense to download Chaste into your $DATA area, where there is ample space to store code, meshes and output.
cd $DATA
# Check out code base (takes a few minutes)
svn co https://chaste.cs.ox.ac.uk/svn/chaste/trunk Chaste --username jmpf@comlab.ox.ac.uk
# (the last bit will, of course, be your Chaste login)
# Check out a user project
svn co https://chaste.cs.ox.ac.uk/svn/chaste/projects/jmpf Chaste/projects/jmpf
# (the last parts will, of course, be your Chaste project name)
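To bring an existing checkout up to date later (for example before recompiling), a plain Subversion update is enough; note that a project checked out inside Chaste/projects is its own working copy and needs updating separately (jmpf again stands in for your project name):

cd $DATA/Chaste
svn update
svn update projects/jmpf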
Compiling a test
It's important only to compile on the head node, and not to attempt to run programs there. As of r15416, the SCons build system should automatically pick up a suitable configuration file, based on previous configurations written by Nejib and Joe.
cd $DATA/Chaste
# Compiling a simple parallel test
scons build=Intel compile_only=1 test_suite=global/test/TestPetscTools.hpp
# Compiling a PyCml test
scons b=Intel co=1 ts=heart/test/ionicmodels/TestPyCml.hpp
# Compiling a user project test
scons b=Intel co=1 ts=projects/jmpf/test/TestVtk.hpp
# Compiling the main Chaste executable
scons b=Intel co=1 exe=1 chaste_libs=1 apps
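If you have several suites to build in one go, a small shell loop saves retyping the command; the suite list below just repeats the three examples above:

cd $DATA/Chaste
for suite in global/test/TestPetscTools.hpp \
             heart/test/ionicmodels/TestPyCml.hpp \
             projects/jmpf/test/TestVtk.hpp
do
    scons b=Intel co=1 ts=$suite
done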
Running code
Here is an example script which runs the tests compiled above and the Chaste executable. Save it as, for example, run_Chaste.sh.
#!/bin/bash --login
# Name of the job
#PBS -N TestChaste
# Use 1 node with 8 cores = 8 MPI processes
#PBS -l nodes=1:ppn=8
# Kill after one hour
#PBS -l walltime=01:00:00
# Send me email at the beginning and the end of the run
#PBS -m be
#PBS -M jmpf@cs.ox.ac.uk
# Join output and error files
#PBS -j oe
# Copy all environment variables
#PBS -V

# Set up MPI
cd $PBS_O_WORKDIR
. enable_hal_mpi.sh

# Switch to the Chaste directory
cd ${DATA}/Chaste

# A parallel test
mpirun $MPI_HOSTS ./global/build/intel/TestPetscToolsRunner
# A PyCML test
mpirun $MPI_HOSTS ./heart/build/intel/ionicmodels/TestPyCmlRunner
# A user project test
mpirun $MPI_HOSTS ./projects/jmpf/build/intel/TestVtkRunner
# A test of the executable
mpirun $MPI_HOSTS apps/src/Chaste apps/texttest/weekly/Propagation1d/ChasteParameters.xml
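By default Chaste writes test output to a temporary directory; if you would rather keep it in your $DATA area, one option (the path here is only a suggestion) is to set CHASTE_TEST_OUTPUT in the job script before the mpirun lines:

# Keep Chaste test output in the roomy $DATA area (path is just an example)
export CHASTE_TEST_OUTPUT=${DATA}/testoutput
mkdir -p $CHASTE_TEST_OUTPUT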
Submitting the script and checking the state of the queue
qsub run_Chaste.sh
qstat
More information is available in the documentation for the Torque job scheduler.
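A few standard Torque commands cover most day-to-day monitoring; the job ID below is a placeholder for the one reported by qsub:

qstat -u $USER     # list only your own jobs
qstat -f 12345     # full details of a single job
qdel 12345         # remove a queued or running job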