
Migrate tests to unittest framework #196

Open
wants to merge 34 commits into master

Conversation

jmichel-otb
Member

Summary

This PR migrates all our tests to a real unit test framework. I chose unittest as it is the standard Python framework and requires no additional dependencies.

Implementation details:

  • testdata has been moved to the tests folder
  • s2p_test.py has been moved to the tests folder, rewritten with unittest and split into separate test files (one per module)
  • the travis-ci configuration and the test makefile target have been updated to run unittest with test discovery (and coverage reporting on travis-ci)

Running tests

All these commands should be run from the s2p root folder.

All tests

$ make test

or

$ python -m unittest discover -s tests -p "*_test.py"

A specific test or group of tests

All tests in sift_test.py

$ python -m unittest tests.sift_test

All tests in the TestSifts class in sift_test.py

$ python -m unittest tests.sift_test.TestSifts

A specific test in the TestSifts class in sift_test.py

$ python -m unittest tests.sift_test.TestSifts.test_image_keypoints

Adding tests

Edit or create the corresponding test module for the module to be tested.
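
As a rough illustration, a new test module might look like this (the module, class and test names are made up for the example; s2plib.common stands for whatever module is being tested):

# tests/common_test.py (illustrative name)
import unittest

from s2plib import common  # the module under test

class TestCommon(unittest.TestCase):
    def test_something(self):
        # replace with a real check on the module under test
        self.assertEqual(common.__name__, 's2plib.common')

if __name__ == '__main__':
    unittest.main()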

Coverage

As icing on the cake, I added a coverage report to travis:

----------------------------------------------------------------------
Ran 10 tests in 334.351s
OK
Name                          Stmts   Miss  Cover   Missing
-----------------------------------------------------------
s2plib/__init__.py                0      0   100%
s2plib/block_matching.py        115    109     5%   13-16, 35-249
s2plib/common.py                168     96    43%   34-35, 89-92, 105, 113-116, 129-134, 156-159, 175-181, 202-205, 219-222, 237-252, 259-262, 279-295, 314-333, 348-358, 377-388, 392-402
s2plib/config.py                 45      0   100%
s2plib/estimation.py            105     78    26%   23-41, 48-51, 69-117, 136-160, 181-182, 227-233
s2plib/evaluation.py             10      7    30%   23-32
s2plib/fusion.py                 36     25    31%   21-24, 40-73
s2plib/geographiclib.py          10      7    30%   26-34
s2plib/initialization.py        173     33    81%   29-30, 48-49, 53-54, 56-59, 63-64, 67-82, 129, 230, 305-316
s2plib/masking.py                36     26    28%   34-69, 81-82
s2plib/parallel.py               91     34    63%   37-55, 93-94, 99-103, 135, 141-150
s2plib/pointing_accuracy.py      63     35    44%   40-62, 84-101, 126-136, 172, 174, 177
s2plib/rectification.py         162    144    11%   34-43, 62-79, 105-127, 148-162, 189-251, 274-290, 321-401
s2plib/rpc_model.py             338    212    37%   66-87, 94-98, 135, 138-141, 144-167, 177-182, 238-264, 281, 291, 312-379, 389-432, 442-472, 482-505, 516-521, 525, 571-578
s2plib/rpc_utils.py             215    154    28%   34-36, 54-92, 158-220, 252-255, 296-312, 319-348, 424-440, 485-498, 523-525, 549-557, 564-592
s2plib/sift.py                   51     24    53%   35, 76, 78-81, 87, 89, 92-94, 119-142
s2plib/test_estimation.py        43     43     0%   6-58
s2plib/test_evaluation.py        21     21     0%   6-28
s2plib/triangulation.py          54     44    19%   26-32, 56-63, 91-96, 114-126, 140-153, 178-194
s2plib/viewGL.py                116    116     0%   5-208
s2plib/visualisation.py          96     82    15%   35-54, 71-118, 141-203
-----------------------------------------------------------
TOTAL                          1948   1290    34%

@jmichel-otb
Member Author

jmichel-otb commented Mar 1, 2019

Incidentally, I think we should take a close look at the coverage report. Python code in s2plib which is never called during our end-to-end tests is likely to be dead.

A detailed HTML report can be generated after running the tests with

coverage html

@jmichel-otb
Member Author

Edits:

@carlodef
Contributor

@jmichel-otb Right now it seems that running a specific test or group of tests doesn't work:

python -m unittest tests.sift_test

raises ImportError: Failed to import test module: sift_test

@carlodef
Contributor

carlodef commented Mar 13, 2019

Actually I find the unittest framework heavy and hard to read because of all the boilerplate code. Would you mind if we use pytest instead?

@jmichel-otb
Member Author

I apologize in advance because it is going to be a long answer to a short question. I tried to summarize it at the end.

Disclaimer: I am not a Python test framework expert; this is just my personal experience speaking.

Rationale for choosing unittest over pytest

I know that pytest is currently the hip choice among Python developers, mostly because it is so concise (you do not even need to import anything). I also use it in another project. Nevertheless, I chose unittest on purpose, for the following reasons.

unittest is standard python

pytest is yet another dependency that you need to pip install, while unittest is standard Python. This may not be a big deal, except that unittest is likely to remain available in future Python versions until the end of time (or close to it), whereas pytest might be discontinued when another hipster test framework shows up (I heard there is already one starting to get attention, but I cannot remember its name).

pytest discovers tests ... or not

Since there are no imports or classes or anything, pytest relies on an internal mechanism to discover tests. It is great ... until for some reason it does not find some of your tests. Then you need to dig into pytest's internal logic to understand why (import error? test name? error in the code? problem with the path?). With unittest you can explicitly ask to run one test, in one class, in one module, in one package. If it cannot run it, the error will be much more explicit.

pytest relies on fixtures

In the unittest refactor, I had to define a base class with setUp() and tearDown() methods to reset the s2p configuration between tests (to be able to run two s2p configurations in the same Python session without side effects). Those methods are called before and after each test (you can also define methods to be called once at the beginning / end of the test session). This is classic OO programming: we are specializing a service offered by the unittest base class. How would you do that in pytest? You need to use fixtures, and this is how it looks:

import pytest

@pytest.fixture(scope='module')
def reset_s2p_conf():
    # reset the s2p configuration here
    pass

def test_end2end(reset_s2p_conf):
    # the actual test, which requests the fixture by naming it as an argument
    pass

While it is really concise, there are a few reasons why I really do not like this:

  • the @pytest.fixture decorator is pytest specific. If you wrote a complex test suite with this and need to change frameworks later on, you will have to rewrite everything.
  • It does not look like standard, easy-to-read Python code to me (passing a fixture's function name as an argument in the def of another function? really?).
  • You have to know pytest's logic to understand how it works, whereas for unittest you only have to understand OOP.
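
For comparison, here is a minimal sketch of the setUp()-based approach described above (the class name and the reset logic are illustrative, and it assumes config keeps its settings in the cfg dictionary; the actual code in this PR may differ):

import copy
import unittest

from s2plib import config

# snapshot of the default configuration, taken once at import time
_DEFAULT_CFG = copy.deepcopy(config.cfg)

class S2PTestCase(unittest.TestCase):
    def setUp(self):
        # restore the default s2p configuration before each test
        config.cfg.clear()
        config.cfg.update(copy.deepcopy(_DEFAULT_CFG))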

Of course, we only need those setup methods because of the poor design of config.py; I will get to that later.

Boilerplate code?

This depends on what counts as boilerplate for you. I will test a few hypotheses.

Boilerplate == class per se

If the problem is using classes no matter what we do with them, there is not much to say, except that I disagree and that we will have a hard time agreeing on any major redesign of s2p.

I know that s2p's use has grown beyond its initial scope, and we are all (including me) responsible for its design, but for me the main flaw of this design, and what makes it so difficult to debug and extend, is pretending to write functional code while delegating modeling and state to a mix of global variables, files and dictionaries (sometimes dictionaries of dictionaries).

For instance, the problem of running two s2p configurations in the same Python session could be easily solved by turning config.py from a bunch of unrelated global variables into a class, of which we could build and run several instances (even in parallel). Not to mention that this class could take care of verifying parameters at init time (types, bounds, whether files exist, missing values ...).
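
As a rough sketch of the idea (not code from this PR, and the parameter names are made up for illustration), such a class could look like:

class S2PConfig:
    def __init__(self, out_dir='s2p_output', subsampling_factor=1):
        # verify parameters at init time instead of failing later
        if subsampling_factor < 1:
            raise ValueError('subsampling_factor must be >= 1')
        self.out_dir = out_dir
        self.subsampling_factor = subsampling_factor

# two independent configurations can then live in the same session
cfg_a = S2PConfig(out_dir='run_a')
cfg_b = S2PConfig(out_dir='run_b', subsampling_factor=2)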

In the current implementation, unittest classes do not add much

I agree on this. But remember that this is only a port from our own test framework to unittest, not a refactoring of the tests. Using classes allows for a finer-grained design of tests. For instance in the sift case, you can call the expensive method (computing the sift points) once in the setup, store the result as a class member and then define a set of shorter, more focused test methods:

  • assert descriptor length,
  • assert descriptor bounds,
  • assert number of points ...

This way, when a specific test fails, it tells you at a glance what went wrong (instead of "something is wrong with sift, run the test and read the logs").
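
As a sketch of what I mean (the sift call, its arguments, the test image path and the expected descriptor size are hypothetical; the real s2plib.sift API may differ):

import unittest

from s2plib import sift

class TestSifts(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # run the expensive step once and share its result between tests
        # (function name, arguments and image path are hypothetical)
        cls.kpts = sift.image_keypoints('tests/data/img.tif', 100, 100, 200, 200)

    def test_number_of_points(self):
        self.assertGreater(len(self.kpts), 0)

    def test_descriptor_length(self):
        # assuming one row per keypoint: x, y, scale, orientation + 128 values
        self.assertEqual(len(self.kpts[0]), 132)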

Of course you can do that with pytest ... by adding more fixtures. I am not sure what counts as boilerplate, but spaghetti code made of tests and fixtures scares me.

It is cumbersome to run a specific test with unittest command-line tool

I can agree on this. Luckily, the pytest command-line tool can run unittest tests ... and it has a -k option to filter which tests to run by name. Feel free to use it instead of the unittest command-line tool.
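
For example, the following should run only the image keypoints test mentioned earlier (assuming pytest is installed):

$ pytest tests -k image_keypoints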

TLDR;

  • I chose unittest over pytest on purpose. If you need to know why, you will have to read the above!
  • You can use the pytest launcher and its filtering ability if you prefer. It makes it easier to launch individual tests.
  • After reading this, if you still want to move to pytest anyway, fine ... But I already ported the tests once, so show me the code! Also, we should rather spend our time writing new tests ...
  • The right move would be to turn config.py into a class, so that we could simplify the test logic, in which case there would be less boilerplate code (in pytest or unittest).

@carlodef
Contributor

carlodef commented Apr 10, 2019

Thank you @jmichel-otb for this detailed answer. I propose not to use any framework, neither unittest nor pytest. pytest can be used as a convenience tool to run the tests and get a detailed summary of what happened, but it's not a dependency. I've just pushed a few modifications of your tests to implement this. Please let us know if you agree with these changes.

As far as I understand, the main reasons for using a test framework are:

  • some tests need to make sure that the tmp directory exists before running,
  • some tests need to reset the config.cfg dictionary to its default values before running.

The first point will be solved once we have removed all intermediate disk i/o operations. A first step in this direction would be to merge PR #202 and PR #197, which together allow running the first step of the pipeline (pointing correction) without writing files.

The second point could be solved by turning the config.cfg dictionary into a class, as you propose.
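
A framework-free test in this style might look like the following sketch (the function name is illustrative; pytest will collect it automatically, and it can also be run directly):

import os

def test_example():
    # plain function with plain asserts: no test class, no fixture
    os.makedirs('tmp', exist_ok=True)  # make sure the tmp directory exists
    assert os.path.isdir('tmp')

if __name__ == '__main__':
    test_example()
    print('OK')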

@carlodef carlodef requested a review from dyoussef April 11, 2019 13:46