This section gives an overview of the carputils testing framework and describes how to create new tests. carputils provides the carptests executable in the bin subfolder, which enables the automatic running of sets of tests and the generation of reports comparing the results against reference solutions.
Note
Consider adding the bin subfolder of carputils to your $PATH environment variable, to make carptests runnable from anywhere on the command line.
Running the executable without arguments will by default run all tests in the module(s) specified in REGRESSION_REF in settings.yaml:
carptests
To override this behaviour, specify one or more modules on the command line. For example, to run all mechanics examples in devtests and all tests in benchmarks, run:
carptests devtests.mechanics benchmarks
Tests can also have case-insensitive tags assigned. Tags are defined in the carputils.testing.tag module, and summarised in the automatic documentation. To run, for example, only fast tests taking less than 10 seconds:
carptests --tags fast
On completion of all tests, a report is generated summarising the results. For more details on the command line interface, run:
carptests --help
To generate and store any files needed for testing in the relevant repository, run carptests with the --refgen optional argument. You will probably want to specify a single example to generate the reference solutions for, to avoid overwriting the entire suite's reference solutions. For example, to generate the reference solutions for the mechanics ring example in devtests (devtests/mechanics/ring), run:
carptests devtests.mechanics.ring --refgen
This will run the test(s) defined in the mechanics ring example and copy the files needed for comparison in the test to the correct location in the reference solution repository. A summary of the tests run is generated, including a list of the generated reference solution files:
===================================================================
SUMMARY
===================================================================
Tests run .................. 1
Reference files generated .. 1
Runtime .................... 0:00:17.049971
Files generated:
/data/carp-test-reference/devtests/mechanics.ring.run/neumann/x.dynpt.gz
====================================================================
If you are satisfied that this generated solution is the ‘correct’ one that others should compare against when running this test, you need to commit and push the new or modified file. In the above example:
cd /data/carp-test-reference/devtests/mechanics.ring.run/neumann/
git add x.dynpt.gz
git commit -m "Updating the mechanics ring test ref. solution"
git push # To push your commit to bitbucket and share with others
Note
The ‘simple bidomain’ example in the devtests repository (devtests/bidomain/simple) defines some simple tests and can be used as a starting point for new examples with tests.
The testing framework views the tests’ directory structure as a Python package and searches the package hierarchy to find and run tests which have been defined.
Tests are defined by assigning the variable __tests__ in the top level namespace of a run script, which must be a list of carputils.testing.Test objects. This can be placed anywhere in the source file outside a function or __name__ == '__main__' block, but it is suggested to put it immediately before the __name__ == '__main__' block, and after the definition of the run function described in Writing Examples:
from carputils import tools
from carputils.testing import Test

@tools.carpexample()
def run(args, job):
    pass # Placeholder

test1 = Test(...)
test2 = Test(...)

__tests__ = [test1, test2]

if __name__ == '__main__':
    run()
The carputils.testing.Test object takes at least two arguments: a name for the test and the function to run. Additionally, tests will usually take at least a third, optional argument: a list of command line arguments to run the script with.
So, if you were to normally run a test with the command line:
./run.py --experiment neumann --np 8
You might configure the test:
test1 = Test('neumann_np8', run, ['--experiment', 'neumann', '--np', 8])
The function to run (here just called run) should be as explained in Writing Examples:
@tools.carpexample()
def run(args, job):
    # Run the simulation
    pass # Placeholder
You must also tell the framework how to validate the simulation output against a reference solution. Generally, you will do this by comparing an output file against a reference, and computing some kind of error norm.
The carputils.testing.checks module defines some functions for comparing igb files, carputils.testing.max_error() and carputils.testing.error_norm(). These functions calculate a scalar error value, which the testing framework compares against the test developer's preassigned tolerance. A check is added to a test by:
from carputils.testing import max_error

test1.add_filecmp_test('x.dynpt.gz', # Output file to compare
                       max_error,    # Error calc method
                       0.001)        # Error tolerance
These simple error methods should cover most use cases, though you can easily create your own by passing a function that takes the reference filename as its first argument and the temporary test output generated at runtime as its second argument, and returns an error value.
The error tolerance value is only used to determine test success. For more complex control of test success, you may alternatively pass as the tolerance argument a function which takes the return value of the error function and returns a boolean.
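As a rough sketch of how both options might be combined (the error metric my_error, the success function my_success and the output file x.dat below are hypothetical illustrations, not part of carputils):
import numpy as np

def my_error(ref_file, test_file):
    # Hypothetical metric: mean absolute difference between two plain-text
    # data files; replace with whatever suits your output format.
    ref = np.loadtxt(ref_file)
    test = np.loadtxt(test_file)
    return np.abs(ref - test).mean()

def my_success(error):
    # Hypothetical success criterion passed in place of a numeric tolerance:
    # the test passes whenever the computed error is below 1e-3.
    return error < 1e-3

test1.add_filecmp_test('x.dat',    # Output file to compare
                       my_error,   # Custom error calc method
                       my_success) # Custom success criterion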
Since the testing framework searches the test directories as a Python package, make sure there is an __init__.py in the same directory as your test's run script, and in any intermediate directories from the top level package down.
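As an illustration (a sketch only, mirroring the devtests/mechanics/ring example referenced later in this section), a layout like the following lets the framework discover the tests defined in run.py:
devtests/
    __init__.py
    mechanics/
        __init__.py
        ring/
            __init__.py
            run.py    # defines __tests__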
Once your tests are defined, you will need to share any reference simulation results required for comparison in the tests. To do so, run the carptests executable with the --refgen parameter (probably with the module of your example specified, to avoid running all examples), as described in Generating Reference Solutions.
Tests can optionally set one or more tags, for the purposes of categorisation of simulations. Standard tags are as described in carputils.testing.tag, but test developers can also add their own limited-use tags with the add method:
from carputils.testing import tag
tag.add('my_special_tag')
Tags are assigned to a test by supplying a list of tags as the tags keyword argument:
test1 = Test('mytest', run, ['--argument'],
             tags=[tag.SERIAL, tag.FAST,
                   tag.MY_SPECIAL_TAG]) # tags are case insensitive
Tests can then be filtered at runtime with the --tags command line argument, as described in Running Regression Tests.
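For example, assuming the custom tag added above and that module selection can be combined with tag filtering on the command line, an invocation might look like:
carptests devtests.mechanics --tags my_special_tag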
If you want to validate multiple tests against the same reference solution, consider use of the refdir keyword argument.
By default, reference solutions are stored in the directory:
REGRESSION_REF/package/module/testname
where REGRESSION_REF is replaced by the value set in the carputils settings.yaml file. For example, for the test neumann in devtests/mechanics/ring/run.py, this will be:
REGRESSION_REF/devtests/mechanics.ring.run/neumann
To force a test of a different name to use the same reference solution, use the refdir argument to override the last part of the directory path:
test1 = Test('neumann', run, ['--argument'], refdir='other')
# Giving the ref dir REGRESSION_REF/devtests/mechanics.ring.run/other
You will probably want to disable automatic reference generation for all but one of the tests sharing a directory, to be sure which test the calculated reference is from:
test1.disable_reference_generation()
A key application of this is for running a number of similar tests, for example the same simulation with different numbers of CPUs:
__tests__ = []

for np in [1, 2, 4, 8, 16]:
    argv = ['--experiment', 'neumann', '--np', np]
    test = Test('neumann_np{0}'.format(np), run, argv, refdir='neumann')
    if np != 1:
        # Only generate reference for --np 1
        test.disable_reference_generation()
    __tests__.append(test)