On ARC (Advanced Research Computing) machines ARCUS and ARCUS-B

These instructions are for the new CMake build system. They have only been checked on ARCUS-B (May 2017). If you are having problems on ARCUS-A, the legacy SCons instructions at SconsArchive/InstallGuides/Arc may help. In particular, you may need to load different modules depending on what is available on ARCUS-A.

If you wish to get newer versions of Chaste dependencies than are available on ARCUS-B, or are having compatibility issues with any of the modules used in this guide, you can compile all Chaste dependencies from source: see InstallGuides/ArcCompileDependencies.

Getting Access

To access any of the ARC machines you will need to register with Advanced Research Computing. The easiest way to join is to register for a user account with an existing project - take a look at the list of projects on the registration page and talk to the person responsible for the one you would like to join. ARC runs a few training courses each year for new users. The notes for these courses are available online, and might be worth a look.

Getting Chaste

It makes sense to download Chaste in your $DATA area where there is ample space to store code, meshes and output.

cd $DATA
# Check out code base (takes a few minutes)
git clone -b develop --depth 5 https://chaste.cs.ox.ac.uk/git/chaste.git Chaste
# Check out a user project (replace [RepoName] and [UserProjectName] with your own repository and project names)
git clone https://github.com/[RepoName]/[UserProjectName].git Chaste/projects/[UserProjectName]
# Make a build directory
mkdir Chaste-Build

The --depth 5 flag only pulls in the last 5 commits, rather than the whole git history. This reduces the download time, but prevents advanced git operations that require the full history; these are assumed not to be needed on an HPC system.
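
If you later find that you do need the full history (for example, to inspect old releases), the shallow clone can be converted into a full one:

cd $DATA/Chaste
# Fetch the rest of the history from the server (may take a while)
git fetch --unshallow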

Setting The Environment

You will need CMake and the Intel compiler to compile code, and RNV to convert CellML files into Chaste-compatible cell models.

In order to compile and run Chaste tests and executables you will need to set up the Chaste dependencies in your user profile. This is done by adding the following lines to your $HOME/.bash_profile file:

# This section contains all the commands to be run on ARCUS-B
module load cmake/2.8.12
module load python/2.7
module load vtk/5.10.1
module unload intel-compilers/2013 intel-mkl/2013
module load PETSc/mvapich2-2.0.1/3.5_icc-2015
module load intel-mkl/2015
module load hdf5-parallel/1.8.14_mvapich2_intel

# Some convenient variables for use with CMake
export BOOST_ROOT=/system/software/linux-x86_64/lib/boost/1_56_0/
export XERCES_DIR=/system/software/linux-x86_64/xerces-c/3.3.1/
export XSD_DIR=/system/software/linux-x86_64/lib/xsd/3.3.0-1/
export VTK_DIR=/system/software/linux-x86_64/lib/vtk/5.10.1/
export SUNDIALS_DIR=/system/software/linux-x86_64/lib/cvode/2.7.0

# These library paths needed to be added manually with SCons; they may not be needed with CMake, but are left here for now.
export LD_LIBRARY_PATH=$XERCES_DIR/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$BOOST_ROOT/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$XSD_DIR/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/system/software/linux-x86_64/lib/szip/2.1/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$VTK_DIR/lib:$LD_LIBRARY_PATH

# Add chaste libraries - you may need to change this depending on where you installed (or plan to install) Chaste
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${DATA}/Chaste-Build/lib
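
After editing $HOME/.bash_profile, log out and back in, or reload it in your current shell, so that the modules and variables take effect:

# Reload the profile in the current shell
source $HOME/.bash_profile

# Check that the expected modules are now loaded
module list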

Note that the configuration above does not set up RNV, which is required to run PyCml.
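
If you need PyCml and no suitable rnv module is available, one option (only a sketch; the install location below is hypothetical) is to build rnv yourself, for example under $DATA, and add it to your PATH in the same profile:

# Check whether rnv is already on your PATH
command -v rnv || echo "rnv not found"

# If you have built rnv yourself, point PATH at it (the directory below is only an example)
export PATH=$DATA/rnv/bin:$PATH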

Building With CMake

It's important only to build on the head node, and not to attempt to run programs there.

cd $DATA/Chaste-Build
cmake ../Chaste \
    -DCMAKE_BUILD_TYPE=RELEASE \
    -DBOOST_LIBRARYDIR=$BOOST_ROOT/lib \
    -DBOOST_INCLUDEDIR=$BOOST_ROOT/include \
    -DBoost_NO_SYSTEM_PATHS=BOOL:ON \
    -DBoost_NO_BOOST_CMAKE=BOOL:ON \
    -DXERCESC_LIBRARY=$XERCES_DIR/lib/libxerces-c.so \
    -DXERCESC_INCLUDE=$XERCES_DIR/include/ \
    -DXSD_EXECUTABLE=$XSD_DIR/bin/xsd \
    -DXSD_INCLUDE_DIR=$XSD_DIR/include/ \
    -DSUNDIALS_INCLUDE_DIR=$SUNDIALS_DIR/include/sundials \
    -DSUNDIALS_sundials_cvode_LIBRARY=$SUNDIALS_DIR/lib/libsundials_cvode.so \
    -DSUNDIALS_sundials_nvecserial_LIBRARY=$SUNDIALS_DIR/lib/libsundials_nvecserial.so \
    -DChaste_ERROR_ON_WARNING=OFF \
    -DChaste_UPDATE_PROVENANCE=OFF

# Compiling a Chaste component, like cell based
make chaste_cell_based
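
CMake also generates targets for the other components and for user projects, and make can build in parallel. A minimal sketch (the project_[UserProjectName] target name is an assumption based on the usual Chaste CMake conventions; check the output of make help for the names actually generated):

# List the targets that CMake has generated
make help

# Build a component in parallel with 8 jobs (adjust to the number of cores you may use)
make -j8 chaste_cell_based

# Build a user project (target name assumed to follow the project_[UserProjectName] convention)
make -j8 project_[UserProjectName]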

Notes:

  • The flag -DChaste_ERROR_ON_WARNING=OFF is used because the compilers and compiler options on ARCUS may not be regularly tested with Chaste, which can produce warnings that are not seen elsewhere but are often harmless.
  • The flag -DChaste_UPDATE_PROVENANCE=OFF is used to prevent re-linking of all libraries with each code change. Change it to ON if you want up-to-date build information available in the code.
  • There seems to be a problem with the CMake configuration files shipped with the ARCUS-B Boost installation. The flags -DBoost_NO_SYSTEM_PATHS and -DBoost_NO_BOOST_CMAKE tell CMake to ignore them and use the paths given above instead; you can check which Boost was actually picked up as shown below.
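
A quick way to confirm which Boost installation CMake found is to inspect the cache in the build directory (variable names in CMakeCache.txt may vary slightly between CMake versions):

cd $DATA/Chaste-Build
grep -i boost CMakeCache.txt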

Running code on ARCUS

Here is an example script which runs some Chaste tests and the Chaste executable on ARCUS. Save it as, for example, run_Chaste.sh.

#!/bin/bash --login

# Name of the job 
#PBS -N TestChaste

# Use 1 node with 32 cores = 32 MPI processes
#PBS -l nodes=1:ppn=32

# Kill after one hour 
#PBS -l walltime=01:00:00

# Send me email at the beginning and the end of the run
#PBS -m be
#PBS -M your_address_not_jmpf@cs.ox.ac.uk 

# Join output and error files
#PBS -j oe

# Copy all environmental variables
#PBS -V 

# Set up MPI
cd $PBS_O_WORKDIR
##### The appropriate include for the machine:
# . enable_hal_mpi.sh
. enable_arcus_mpi.sh
#Switch to Chaste directory
cd ${DATA}/Chaste

# A parallel test
mpirun $MPI_HOSTS ./global/build/intel/TestPetscToolsRunner
# A PyCML test
mpirun $MPI_HOSTS ./heart/build/intel/ionicmodels/TestPyCmlRunner
# A user project test
mpirun $MPI_HOSTS ./projects/jmpf/build/intel/TestVtkRunner

# A test of the executable
mpirun $MPI_HOSTS apps/src/Chaste apps/texttest/weekly/Propagation1d/ChasteParameters.xml

Submit the script and check the state of the queue:

qsub run_Chaste.sh
qstat
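
A few other standard Torque/PBS commands that may be useful (not specific to ARCUS):

# Show only your own jobs
qstat -u $USER

# Show detailed information about a particular job
qstat -f <job_id>

# Delete a queued or running job
qdel <job_id>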

More information on the Torque job scheduler is available here.

Running code on ARCUS-B

ARCUS-B uses a different job scheduler, SLURM. Information about using SLURM can be found at http://www.arc.ox.ac.uk/content/arcus-phase-b and http://www.arc.ox.ac.uk/content/slurm-job-scheduler.

A sample SLURM script would be

#!/bin/bash --login

# Name of the job 
#SBATCH --job-name=TestChaste

# Use 1 node with 32 cores = 32 MPI processes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32

# Kill after one hour 
#SBATCH --time=01:00:00

# Send me email at the beginning and end of the run, and if it is aborted
# (I prefer the FAIL option, which only sends an email when the job gets aborted)
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_address_not_jmpf@cs.ox.ac.uk 

# SLURM automatically joins the output and error files, copies the environment variables,
# and changes to the submission working directory

# Set up MPI using the appropriate include for the machine:
. enable_arcus_b_mpi.sh

#Switch to Chaste directory
cd ${DATA}/Chaste

# A parallel test
mpirun $MPI_HOSTS ./global/build/intel/TestPetscToolsRunner
# A PyCML test
mpirun $MPI_HOSTS ./heart/build/intel/ionicmodels/TestPyCmlRunner
# A user project test
mpirun $MPI_HOSTS ./projects/jmpf/build/intel/TestVtkRunner

# A test of the executable
mpirun $MPI_HOSTS apps/src/Chaste apps/texttest/weekly/Propagation1d/ChasteParameters.xml

You can submit the script and see the state of the queue using

sbatch SCRIPT_NAME.sh
squeue
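
Similarly, a few standard SLURM commands that may come in handy (not specific to ARCUS-B):

# Show only your own jobs
squeue -u $USER

# Show detailed information about a particular job
scontrol show job <job_id>

# Cancel a queued or running job
scancel <job_id>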

Troubleshooting

If you receive a Python error of the kind

python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

just go into your Chaste build directory (Chaste-Build in the setup above) and copy libpython2.7.so.1.0 into its ./lib directory:

cp /system/software/linux-x86_64/python/2.7.8/lib/libpython2.7.so.1.0 ./lib

This may not be the cleanest way of fixing this issue; other suggestions are welcome!
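
An alternative (untested here, but it avoids copying files around) is to add the Python library directory to LD_LIBRARY_PATH in your $HOME/.bash_profile instead, using the same path as above:

# Make the Python shared library visible to dynamically linked executables
export LD_LIBRARY_PATH=/system/software/linux-x86_64/python/2.7.8/lib:$LD_LIBRARY_PATH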