On ARC (Advanced Research Computing) machines such as HAL and SAL

Getting onto the machine

For access to HAL, SAL, or any of the other OeRC machines, you will need to register with Advanced Research Computing (ARC, formerly the Oxford Supercomputing Centre). The easiest way to join is to register for a user account with an existing project - take a look at the list of projects on the registration page and talk to the person responsible for the one you would like to join.

ARC runs a few training courses each year for new users. The notes for these courses are available online, and might be worth a look.

These instructions were last checked by Louise and Beth in Spring 2014 (using cardiac Chaste on HAL). Changes may be required to get Chaste running on ARCUS.

Setting the environment

You will need SCons and the Intel compiler to compile the code, and RNV, which is needed when converting CellML files into Chaste-compatible cell models.

As of February 2014, the Intel compiler was set up automatically, but SCons and VTK still needed to be added manually. Another required dependency, Amara, is available once the Python 2.7 module is loaded (as below).

To set these up, add the following to your $HOME/.bashrc file:

module add scons
module add python/2.7
module load vtk/5.10.1
module load intel-compilers/2013
module load intel-mkl/2013

# Path for RNV, used by the PyCml CellML converter
export PATH=${PATH}:/system/software/hal/lib/rnv-1.7.8/

# Libraries for running Chaste
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/system/software/redqueen/libs/boost-1_45_0/lib:/system/software/hal/lib/xerces-c/lib:/home/system/software/redqueen/libs/szip-2.1/lib:${DATA}/Chaste/lib

You should check that the modules have been loaded correctly by using the module list command.
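For example, after reloading your .bashrc (or logging in again) something like the following should confirm the environment; the exact module names and versions listed may differ slightly from those above:

source ~/.bashrc
module list    # should include scons, python/2.7, vtk/5.10.1 and the Intel compiler/MKL modules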

Getting Chaste

It makes sense to check Chaste out in your $DATA area, where there is ample space to store code, meshes and output.

cd $DATA
# Check out code base (takes a few minutes)
svn co https://chaste.cs.ox.ac.uk/svn/chaste/trunk Chaste --username jmpf@comlab.ox.ac.uk
#                                                         (the last bit will, of course, be your Chaste login)
# Check out a user project
svn co https://chaste.cs.ox.ac.uk/svn/chaste/projects/jmpf Chaste/projects/jmpf
#                                                    (the last parts will, of course, be your Chaste project name)
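To bring an existing checkout up to date later on, standard Subversion commands apply; note that the user project is a separate working copy and needs updating on its own:

cd $DATA/Chaste
svn update
svn update projects/jmpf    # (again, substitute your own project name)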

Compiling a test

It is important only to compile on the head node and not to attempt to run programs there. As of r15416 the SCons build system should automatically pick up a machine configuration file based on previous configurations written by Nejib and Joe. This configuration does not support CVODE; the changes to the hostconfig file needed to enable CVODE are described in the 'Using CVODE' section below.

cd $DATA/Chaste

# Compiling a simple parallel test
scons build=Intel compile_only=1 test_suite=global/test/TestPetscTools.hpp
# Compiling PyCml test
scons b=Intel co=1 ts=heart/test/ionicmodels/TestPyCml.hpp
# Compiling a user project test
scons b=Intel co=1 ts=projects/jmpf/test/TestVtk.hpp

# Compiling the main Chaste executable
scons b=Intel co=1 exe=1 chaste_libs=1 apps
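Before submitting a job it is worth checking that the runners were actually produced; the paths below are the same ones used in the submission script in the next section:

cd $DATA/Chaste
ls global/build/intel/TestPetscToolsRunner
ls heart/build/intel/ionicmodels/TestPyCmlRunner
ls projects/jmpf/build/intel/TestVtkRunner
ls apps/src/Chaste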

Running code

Here is an example script which runs the tests compiled above and the Chaste executable. Save it as, for example, run_Chaste.sh.

#!/bin/bash --login

# Name of the job 
#PBS -N TestChaste

# Use 1 node with 8 cores = 8 MPI processes
#PBS -l nodes=1:ppn=8

# Kill after one hour 
#PBS -l walltime=01:00:00

# Send me email at the beginning and the end of the run
#PBS -m be
#PBS -M jmpf@cs.ox.ac.uk 

# Join output and error files
#PBS -j oe

# Copy all environmental variables
#PBS -V 

# Set up MPI
cd $PBS_O_WORKDIR
. enable_hal_mpi.sh

# Switch to the Chaste directory
cd ${DATA}/Chaste

# A parallel test
mpirun $MPI_HOSTS ./global/build/intel/TestPetscToolsRunner
# A PyCML test
mpirun $MPI_HOSTS ./heart/build/intel/ionicmodels/TestPyCmlRunner
# A user project test
mpirun $MPI_HOSTS ./projects/jmpf/build/intel/TestVtkRunner

# A test of the executable
mpirun $MPI_HOSTS apps/src/Chaste apps/texttest/weekly/Propagation1d/ChasteParameters.xml

Submitting the job and checking the state of the queue

qsub run_Chaste.sh
qstat

More information on the Torque job scheduler is available in the ARC documentation.
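Two other standard Torque commands are often useful (these are generic Torque commands, not specific to HAL or SAL):

# Show only your own jobs
qstat -u $USER
# Delete a queued or running job, using the job ID reported by qsub or qstat
qdel <job_id>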

Using CVODE

The default oerc.py found in python/hostconfig/machines causes some problems if you want to use CVODE. To get around these, copy oerc.py to python/hostconfig/local.py and replace the CVODE section at the end with the following.

    # Chaste may also optionally link against CVODE.
    # CVODE is provided with the PETSc installation, so 'use-cvode' now defaults to True
    use_cvode = int(prefs.get('use-cvode', True))
    if use_cvode:
        # Look for the version of CVODE in the folder where it is located (part of PETSc)
        DetermineCvodeVersion('/system/software/hal/lib/PETSc/petsc-3.0.0-p12/icc-2011/include')
        # Now add the CVODE libraries to the list
        other_libraries.extend(['sundials_cvode', 'sundials_nvecserial'])
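For reference, the copy and a subsequent CVODE-enabled build might look like the following; the test file named here is only an illustration, so substitute whichever CVODE-dependent test you actually want to compile:

cd $DATA/Chaste
cp python/hostconfig/machines/oerc.py python/hostconfig/local.py
# edit the CVODE section of local.py as shown above, then compile, e.g.
scons b=Intel co=1 ts=heart/test/ionicmodels/TestCvodeCells.hpp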

Troubleshooting

If you receive a Python error of the kind

python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

go into your Chaste directory and copy libpython2.7.so.1.0 into its ./lib directory:

cp /system/software/linux-x86_64/python/2.7.8/lib/libpython2.7.so.1.0 ./lib

This is not the cleanest way of fixing the issue, but since HAL is due to be taken offline soon it is not worth pursuing further.
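If you would rather not copy libraries into the Chaste tree, a sketch of an alternative (assuming Python is installed at the same path as in the cp command above) is to extend LD_LIBRARY_PATH in your $HOME/.bashrc instead:

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/system/software/linux-x86_64/python/2.7.8/lib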