Refactor test for comparing output, do not show plots while testing, no default behavior modified #32
Conversation
Warning! No news item is found for this PR. If this is a user-facing change/feature/fix, …
```python
def test_nmf_mapping_code(args, output_dir, tmpdir):
    # Save the result in "nmf_result" at the top project level (default behavior)
    main(args=args)
```
tl;dr: this `main(args=args)` call creates 3 .json files under `nmf_result` at the top directory level, which is the default behavior. I then copy these outputs to `tmpdir` and compare the three .json files under `tmpdir` against the expected .json files under `tests/output/output1`, `tests/output/output2`, and `tests/output/output3` (a rough sketch of the comparison is shown after the list below).
The three .json files are:
```python
json_files_to_check = [
    "component_index_vs_pratio_col.json",
    "component_index_vs_RE_value.json",
    "x_index_vs_y_col_components.json",
]
```
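For reference, here is a minimal sketch of the comparison described above. The import path for `main`, the fixture semantics, and the assumption that the test runs from the project root are all my own and may not match the repository exactly:

```python
import json
import shutil
from pathlib import Path

from nmf_mapping.main import main  # hypothetical import path; adjust to the package layout

json_files_to_check = [
    "component_index_vs_pratio_col.json",
    "component_index_vs_RE_value.json",
    "x_index_vs_y_col_components.json",
]


def test_nmf_mapping_code(args, output_dir, tmpdir):
    # Run the code under test; by default it writes "nmf_result"
    # at the top project level (assumes the test runs from the project root).
    main(args=args)

    # Copy the produced files into the pytest-managed tmpdir so the
    # comparison step never touches the repository tree.
    produced_dir = Path("nmf_result")
    for name in json_files_to_check:
        shutil.copy(produced_dir / name, Path(tmpdir) / name)

    # Compare each copied file against the expected output shipped with
    # the tests (output_dir is assumed to point at tests/output/output<n>).
    for name in json_files_to_check:
        with open(Path(tmpdir) / name) as f:
            actual = json.load(f)
        with open(Path(output_dir) / name) as f:
            expected = json.load(f)
        assert actual == expected
```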
@sbillinge ready for your insight. Thank you.
In general, my impulse here is to junk this test. In the spirit of the discussion around the test for Qmin and Qmax in …, this seems to fall into the trap that the test tests the code, not the behavior. The right way to approach this is to ask what behavior the function under test is supposed to implement, and then write a test that captures the different behaviors we want it to have under different situations.
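As a purely hypothetical illustration of that point (not code from this PR), a behavior-oriented test states the outcome the user cares about, for example that asking for results in a given directory actually puts the three result files there. The `output_dir` option on `args` and the import path are assumptions:

```python
from pathlib import Path

from nmf_mapping.main import main  # hypothetical import path


def test_main_writes_results_where_the_user_asks(args, tmpdir):
    # Hypothetical behavior under test: the user requests an output
    # location and the three result files appear there.
    args.output_dir = str(tmpdir)  # assumes args exposes an output_dir option
    main(args=args)

    for name in [
        "component_index_vs_pratio_col.json",
        "component_index_vs_RE_value.json",
        "x_index_vs_y_col_components.json",
    ]:
        assert (Path(tmpdir) / name).exists()
```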
@sbillinge You are right. This was a refactoring task, so we are still just trying to satisfy test functions that simply run the code, instead of writing tests that capture the behavior that WE want.
Yes, I probably need to revisit this code after the Fall 2024 tasks. Or do we have a member who could take over this task?
Closing: we do not want to simply fix or refactor the existing test functions that run `main` and assert on the output content. Instead, we aim to test the expected behavior that we want our code to produce.
Context
No code was modified under `src`; the only change is to `tests/test_NMF_analysis_code.py`, which runs `main(args)` with args being passed.
Here is the discussion that requires your input: to make the tests pass locally (3 cases), I updated the expected .json outputs (since it is integrated with the cloud, I did not want to change anything under `src`).
Purpose of this PR
However, the test still fails via GitHub Actions, most likely due to a convergence problem (as shown in the warnings below) or the random seed. Even so, the merit of this PR could be refactoring the test function so that it is easier to debug down the road.
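If the CI failures really do come from small run-to-run numerical differences, one option (an assumption on my part, not something done in this PR) is to compare the JSON contents with a floating-point tolerance instead of exact equality. A minimal sketch:

```python
import json
import math


def load_json(path):
    with open(path) as f:
        return json.load(f)


def assert_json_close(actual, expected, rel_tol=1e-6, abs_tol=1e-9):
    """Recursively compare JSON-like data, allowing small float drift."""
    if isinstance(expected, dict):
        assert set(actual) == set(expected)
        for key in expected:
            assert_json_close(actual[key], expected[key], rel_tol, abs_tol)
    elif isinstance(expected, list):
        assert len(actual) == len(expected)
        for a, e in zip(actual, expected):
            assert_json_close(a, e, rel_tol, abs_tol)
    elif isinstance(expected, (int, float)) and not isinstance(expected, bool):
        assert math.isclose(actual, expected, rel_tol=rel_tol, abs_tol=abs_tol)
    else:
        assert actual == expected
```

The exact-equality assertion in the test sketch above would then become `assert_json_close(actual, expected)`. Pinning the random seed of the NMF solver, if the code exposes one, would be the more direct fix.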
Local test: