Testing Strategy
We use Test-Driven Development (TDD).
Automated testing
Chaste runs a large number of automated tests to verify the correctness of code before merging. These include:
Continuous
: A suite of tests covering all functionality.

Nightly
: A small number of long tests which verify that the output of certain simulations, and more computationally expensive functionality, remains unchanged.

Weekly
: Several long simulations.

Parallel
: Tests run in parallel.

Other tests
: Coverage testing, profiling, memory testing, portability, and documentation.
The automated testing uses GitHub Actions, and the output of these tests can be viewed in the GitHub Actions interface.
Tests with specific output
Profiling (GProf)
: Verifies that Chaste performance is not degraded over time. It logs profiling information, including compilation time and the time taken to run each test.

Memory testing
: Checks for memory leaks in the code.

Coverage
: Checks for portions of the code that are not covered by tests. We aim for 100% test coverage.

Portability
: Checks for compatibility with various supported versions of Chaste dependencies.

Doxygen
: Checks how much of the code is documented.
Unit testing
We use the CxxTest testing framework.
For each class, write a suite of tests called `TestClassName.hpp`, where `ClassName` is the name of the class. Further tests can be called `TestClassNameSomethingElse`, or just `TestSomethingElse` if lots of classes are being tested.
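For illustration, a minimal CxxTest suite might look like the following (the class name `Widget` is hypothetical, and this fragment is not standalone: it needs the CxxTest headers and test-runner generation to build; a real suite would include and exercise the class under test):

```cpp
// TestWidget.hpp - tests a hypothetical class Widget
#include <cxxtest/TestSuite.h>

class TestWidget : public CxxTest::TestSuite
{
public:
    void TestSomething()
    {
        // TS_ASSERT_* macros are provided by CxxTest
        TS_ASSERT_EQUALS(2 + 2, 4);
        TS_ASSERT_DELTA(1.0 / 3.0, 0.3333, 1e-3);
    }
};
```

Each public method whose name starts with `Test` is picked up and run as an individual test.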
In order for a test to be run during a build, it must be included in a `TypeTestPack.txt` file, where `Type` can be `Continuous` for continuous tests, `Nightly` for longer tests, or `Weekly` for very long tests. There is also a `Parallel` type for tests run in parallel.
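As a sketch of this convention, a `ContinuousTestPack.txt` file lists the test suites in that pack. The suite names below are made-up examples; check an existing pack file in the repository for the exact conventions:

```text
TestClassName.hpp
TestAnotherClass.hpp
TestSomethingElse.hpp
```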
The results of manually run builds are stored on your local computer; the results of automatic builds can be viewed in the GitHub Actions interface.
A script run as part of the build process checks for any test `.hpp` files which are not listed in a `TypeTestPack.txt` file. This check appears in test summaries as a test called `OrphanedTests`, and is deemed to have failed if any such files are found. The output lists any orphaned tests found, and also all types of test pack found.
If two test suites had the same name, this would confuse the system, so another script run as part of the build process checks for duplicate file names and flags them as a failed `DuplicateFileNames` test.
Test file locations
In the following, filenames are given relative to the trunk. The trunk will be the working directory when tests are run, so the paths below should be used when opening files in tests. `component` refers to the component in which the class resides (or the most significant class tested, if there is more than one).
| Type | Location |
| --- | --- |
| Test suite file | `component/test/` |
| Input files | `component/test/data/` |
Output files generated by tests
When opening output files, an instance of `OutputFileHandler` must be used. This takes a relative directory name and places output files in a suitable location, with the relative name as a subdirectory. Code should not assume anything about where this suitable location is, as it will change depending on the system, user, etc. Thus, when reading back written data, use the `OutputFileHandler` to find out where the files are.

Tests should choose output file names so as to minimise the chance of a conflict with another test. Best practice is to place all output in a subdirectory named after the test suite and individual test method, e.g. `TestPetscTools_TestRoundRobin`.
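A sketch of this pattern (the directory and file names are hypothetical, and the fragment requires Chaste's `OutputFileHandler` class, so it is not standalone):

```cpp
// Hypothetical names: "TestMySuite_TestMyMethod" and "results.dat" are made up.
OutputFileHandler handler("TestMySuite_TestMyMethod"); // relative directory for this test's output
out_stream p_file = handler.OpenOutputFile("results.dat");
(*p_file) << "1.0 2.0 3.0\n";
p_file->close();

// Don't hard-code the output location; ask the handler where the files went:
std::string results_dir = handler.GetOutputDirectoryFullPath();
```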
Miscellaneous notes
A test suite (`.hpp` file) should not rely on any files written by another test suite; in other words, there should be no requisite ordering of the test suites.
Include the following file in each test suite file that uses PETSc, in order to set up PETSc correctly:
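The code block appears to be missing from this page. By standard Chaste convention this is presumably `PetscSetupAndFinalize.hpp` (an assumption; verify against the current source tree):

```cpp
#include "PetscSetupAndFinalize.hpp" // assumed: initialises and finalises PETSc for the whole suite
```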
Acceptance tests
We use TextTest for these, in order to test the standalone executables.
See Also
CMake Build Guide: How to build and run tests.