Until the 1.15 release, NumPy used the nose testing framework; it now uses the pytest framework. The older framework is still maintained in order to support downstream projects that use the old framework, but all tests for NumPy should use pytest.
Our goal is that every module and package in SciPy and NumPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments. Long experience has shown that by far the best time to write the tests is before you write or change the code - this is test-driven development. The arguments for this can sound rather abstract, but we can assure you that you will find that writing the tests first leads to more robust and better designed code. Well-designed tests with good coverage make an enormous difference to the ease of refactoring. Whenever a new bug is found in a routine, you should write a new test for that specific case and add it to the test suite to prevent that bug from creeping back in unnoticed.
To run SciPy’s full test suite, use the following:
>>> import scipy
>>> scipy.test()
or from the command line:
$ python runtests.py
SciPy uses the testing framework from NumPy (specifically numpy.testing), so all of the SciPy examples shown here are also applicable to NumPy. NumPy's full test suite can be run as follows:
>>> import numpy
>>> numpy.test()
The test method may take two or more arguments; the first, label, is a string specifying what should be tested, and the second, verbose, is an integer giving the level of output verbosity. See the docstring for numpy.test for details. The default value for label is 'fast', which will run the standard tests. The string 'full' will run the full battery of tests, including those identified as being slow to run. If verbose is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, then the tests will also provide warnings on missing tests. So if you want to run every test and get messages about which modules don't have tests:
>>> scipy.test(label='full', verbose=2) # or scipy.test('full', 2)
Finally, if you are only interested in testing a subset of SciPy, for example the integrate module, use the following:
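>>> scipy.integrate.test()  # runs only the tests for the integrate subpackage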
or from the command line:
$ python runtests.py -t scipy/integrate/tests
The rest of this page will give you a basic idea of how to add unit tests to modules in SciPy. It is extremely important for us to have extensive unit testing since this code is going to be used by scientists and researchers and is being developed by a large number of people spread across the world. So, if you are writing a package that you’d like to become part of SciPy, please write the tests as you develop the package. Also since much of SciPy is legacy code that was originally written without unit tests, there are still several modules that don’t have tests yet. Please feel free to choose one of these modules and develop tests for it as you read through this introduction.
Writing your own tests
Every Python module, extension module, or subpackage in the SciPy package directory should have a corresponding test_<name>.py file. Pytest examines these files for test methods (named test*) and test classes (named Test*).
Suppose you have a SciPy module scipy/xxx/yyy.py containing a function zzz(). To test this function you would create a test module test_yyy.py. If you only need to test one aspect of zzz, you can simply add a test function:
def test_zzz():
    assert_(zzz() == 'Hello from zzz')
More often, we need to group a number of tests together, so we create a test class:
from numpy.testing import assert_, assert_raises

# import xxx symbols
from scipy.xxx.yyy import zzz

class TestZzz:
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')

    def test_invalid_parameter(self):
        assert_raises(...)
Within these test methods, assert_() and related functions are used to test whether a certain assumption is valid. If the assertion fails, the test fails. Note that the Python builtin assert should not be used, because it is stripped during compilation with -O. test_ functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with verbose=2 (or similar verbosity setting). Use plain comments (#) if necessary.
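For instance, a minimal sketch of a test that follows these conventions, with a plain comment where a docstring might otherwise go (zzz is the example function from above):

from numpy.testing import assert_
from scipy.xxx.yyy import zzz

def test_zzz_return_value():
    # check the exact greeting; a comment (rather than a docstring) keeps
    # the test name visible in verbose output
    assert_(zzz() == 'Hello from zzz')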
Sometimes it is convenient to run test_yyy.py by itself, so we add

from numpy.testing import run_module_suite

if __name__ == "__main__":
    run_module_suite()

at the bottom.
As an alternative to pytest.mark.<label>, there are a number of labels you can use. Unlabeled tests like the ones above are run in the default scipy.test() run. If you want to label your test as slow - and therefore reserved for a full scipy.test(label='full') run - you can label it with a decorator:
# numpy.testing module includes 'import decorators as dec'
from numpy.testing import dec, assert_

@dec.slow
def test_big():
    print('Big, slow test')
Similarly for methods:
class TestZzz:
    @dec.slow
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')
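If your project runs its tests with pytest directly, a sketch of the equivalent using pytest's own marker (assuming a slow marker is registered in the project's pytest configuration, as mentioned above):

import pytest

@pytest.mark.slow
def test_big():
    print('Big, slow test')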
Available labels are:
slow: marks a test as taking a long time
setastest(tf): work-around for test discovery when the test name is non-conformant
skipif(condition, msg=None): skips the test when condition evaluates to True
knownfailureif(fail_cond, msg=None): will avoid running the test if fail_cond evaluates to True, useful for tests that conditionally segfault
deprecated(conditional=True): filters deprecation warnings emitted in the test
parametrize(var, input): an alternative to pytest.mark.parametrize
Easier setup and teardown functions / methods
Testing looks for module-level or class-level setup and teardown functions by name; thus:
def setup():
    """Module-level setup"""
    print('doing setup')

def teardown():
    """Module-level teardown"""
    print('doing teardown')

class TestMe(object):
    def setup(self):
        """Class-level setup"""
        print('doing setup')

    def teardown(self):
        """Class-level teardown"""
        print('doing teardown')
Setup and teardown functions attached to individual functions and methods are known as “fixtures”, and their use is not encouraged.
One very nice feature of testing is allowing easy testing across a range of parameters - a nasty problem for standard unit tests. Use the parametrize decorator listed above, or pytest.mark.parametrize, for this.
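For example, a minimal sketch using pytest.mark.parametrize (the square function here is hypothetical):

import pytest
from numpy.testing import assert_

def square(x):
    # hypothetical function under test
    return x * x

@pytest.mark.parametrize("n, expected", [(1, 1), (2, 4), (3, 9)])
def test_square(n, expected):
    # each (n, expected) pair becomes its own test case
    assert_(square(n) == expected)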
Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output.
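For example, a docstring containing a doctest might look like this (a hypothetical add_one function):

def add_one(x):
    """
    Add one to the input.

    Examples
    --------
    >>> add_one(2)
    3
    """
    return x + 1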
The doctests can be run by adding the doctests argument to the test() call; for example, to run all tests (including doctests) for numpy.lib:
>>> import numpy as np
>>> np.lib.test(doctests=True)
The doctests are run as if they are in a fresh Python instance which has executed import numpy as np. Tests that are part of a SciPy subpackage will have that subpackage already imported. E.g. for a test in scipy/linalg/tests/, the namespace will be created such that from scipy import linalg has already executed.
Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a tests/ subdirectory. For our example, if it doesn't already exist you will need to create a tests/ directory in scipy/xxx/, so that the full path of the test file is scipy/xxx/tests/test_yyy.py. Once scipy/xxx/tests/test_yyy.py is written, it's possible to run the tests by going to the tests/ directory and typing:
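$ python test_yyy.py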
Or if you add
scipy/xxx/tests/ to the Python path, you could run
the tests interactively in the interpreter like this:
>>> import test_yyy
>>> test_yyy.test()
Usually, however, adding the tests/ directory to the Python path isn't desirable. Instead it would be better to invoke the test straight from the module xxx. To this end, simply place the following lines at the end of your package's __init__.py file:
...
def test(level=1, verbosity=1):
    from numpy.testing import Tester
    return Tester().test(level, verbosity)
You will also need to add the tests directory in the configuration section of your setup.py:
...
def configuration(parent_package='', top_path=None):
    ...
    config.add_data_dir('tests')
    return config
...
Now you can do the following to test your module:
>>> import scipy
>>> scipy.xxx.test()
Also, when invoking the entire SciPy test suite, your tests will be found and run:
>>> import scipy
>>> scipy.test()  # your tests are included and run automatically!
Tips & Tricks
Creating many similar tests
If you have a collection of tests that must be run multiple times with minor variations, it can be helpful to create a base class containing all the common tests, and then create a subclass for each variation. Several examples of this technique exist in NumPy; below are excerpts from one in numpy/linalg/tests/test_linalg.py:
class LinalgTestCase:
    def test_single(self):
        a = array([[1., 2.], [3., 4.]], dtype=single)
        b = array([2., 1.], dtype=single)
        self.do(a, b)

    def test_double(self):
        a = array([[1., 2.], [3., 4.]], dtype=double)
        b = array([2., 1.], dtype=double)
        self.do(a, b)

    ...

class TestSolve(LinalgTestCase):
    def do(self, a, b):
        x = linalg.solve(a, b)
        assert_almost_equal(b, dot(a, x))
        assert_(imply(isinstance(b, matrix), isinstance(x, matrix)))

class TestInv(LinalgTestCase):
    def do(self, a, b):
        a_inv = linalg.inv(a)
        assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0]))
        assert_(imply(isinstance(a, matrix), isinstance(a_inv, matrix)))
In this case, we wanted to test solving a linear algebra problem using matrices of several data types, using linalg.solve and linalg.inv. The common test cases (for single-precision, double-precision, etc. matrices) are collected in LinalgTestCase.
Known failures & skipping tests
Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it’s meant to test, or if a test only fails on a particular architecture. The decorators from numpy.testing.dec can be used to do this.
To skip a test, simply use skipif:
from numpy.testing import dec

@dec.skipif(SkipMyTest, "Skipping this test because...")
def test_something(foo):
    ...
The test is marked as skipped if SkipMyTest evaluates to nonzero, and the message in verbose test output is the second argument given to skipif. Similarly, a test can be marked as a known failure by using knownfailureif:
from numpy.testing import dec

@dec.knownfailureif(MyTestFails, "This test is known to fail because...")
def test_something_else(foo):
    ...
Of course, a test can be unconditionally skipped or marked as a known failure by passing True as the first argument to skipif or knownfailureif, respectively.
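For instance, a minimal sketch of an unconditional skip:

from numpy.testing import dec

@dec.skipif(True, "Skipping unconditionally until this feature is implemented")
def test_not_ready():
    ...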
A total of the number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as 'S' in the test results (or 'SKIPPED' for verbose > 1), and known failing tests are marked as 'K' (or 'KNOWN' if verbose > 1).
Tests on random data
Tests on random data are good, but since test failures are meant to expose
new bugs or regressions, a test that passes most of the time but fails
occasionally with no code changes is not helpful. Make the random data
deterministic by setting the random number seed before generating it. Use
random.seed(some_number) or NumPy’s
numpy.random.seed(some_number), depending on the source of random numbers.
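For example, a sketch of a test on NumPy random data with a fixed seed (the tolerance here is illustrative):

import numpy as np
from numpy.testing import assert_allclose

def test_mean_of_random_data():
    np.random.seed(12345)  # fixed seed makes the data, and thus the test, deterministic
    data = np.random.rand(1000)
    # the sample mean of uniform data on [0, 1) should be close to 0.5
    assert_allclose(data.mean(), 0.5, atol=0.05)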