Report correct outcome in the replay file (#64) #65
Conversation
Thank you.
That really did go unnoticed =)
Just one thing: when the "main" test passes but there is an error in a fixture teardown, the recorded outcome will be "passed".
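A minimal reproducer of that scenario could look like this (a sketch with hypothetical file and fixture names, not taken from the plugin's test suite):

```python
import pytest


@pytest.fixture
def resource():
    yield
    raise RuntimeError("teardown blows up")  # error raised after the test body ran


def test_uses_resource(resource):
    # The call phase passes, but the teardown error means the run as a whole
    # should not be recorded as a plain "passed" in the replay file.
    assert True
```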
src/pytest_replay/__init__.py
Outdated
if result.when == "call": | ||
self.node_outcome[item.nodeid] = result.outcome | ||
elif result.when == "teardown": |
if result.when == "call": | |
self.node_outcome[item.nodeid] = result.outcome | |
elif result.when == "teardown": | |
self.node_outcome.setdefault(item.nodeid, result.outcome) | |
if result.failed: | |
self.node_outcome[item.nodeid] = result.outcome | |
if result.when == "teardown": |
This way all phases' outcomes are considered; otherwise a test could be flagged as "passed" even when a fixture teardown failed.
if result.when == "call": | |
self.node_outcome[item.nodeid] = result.outcome | |
elif result.when == "teardown": | |
self.node_outcome.setdefault(item.nodeid, result.outcome) | |
if not result.passed: | |
self.node_outcome[item.nodeid] = result.outcome | |
if result.when == "teardown": |
BTW, "if not passed" is probably better. Given there is 3 possible outcomes.
This would not account for this case though:

```python
import pytest


@pytest.fixture()
def foo():
    yield
    pytest.skip("skipping")


def test(foo):
    assert False
```
This would overwrite "failed" with "skipped", while I guess the test should be considered failed.
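One way to encode that priority, sketched against the `self.node_outcome` / `result` objects used in the suggestions above (an illustration of the idea, not the plugin's actual hook code):

```python
# Seed the node's outcome with the first phase seen, then let any later
# non-passing phase overwrite it unless a failure was already recorded,
# so a "skipped" teardown can never mask a "failed" call.
recorded = self.node_outcome.setdefault(item.nodeid, result.outcome)
if recorded != "failed" and not result.passed:
    self.node_outcome[item.nodeid] = result.outcome
```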
tests/test_replay.py
Outdated
```python
testdir.makepyfile(
    test_module="""
    def test_success():
        pass

    def test_failure():
        assert False
    """
)
dir = testdir.tmpdir / "replay"
result = testdir.runpytest_subprocess(f"--replay-record-dir={dir}")
assert result.ret != 0

contents = [json.loads(s) for s in (dir / ".pytest-replay.txt").read().splitlines()]
assert len(contents) == 4

assert "test_success" in contents[1]["nodeid"]
assert contents[1]["outcome"] == "passed"

assert "test_failure" in contents[3]["nodeid"]
assert contents[3]["outcome"] == "failed"
```
Suggested change:
```diff
-testdir.makepyfile(
-    test_module="""
-    def test_success():
-        pass
-    def test_failure():
-        assert False
-    """
-)
-dir = testdir.tmpdir / "replay"
-result = testdir.runpytest_subprocess(f"--replay-record-dir={dir}")
-assert result.ret != 0
-contents = [json.loads(s) for s in (dir / ".pytest-replay.txt").read().splitlines()]
-assert len(contents) == 4
-assert "test_success" in contents[1]["nodeid"]
-assert contents[1]["outcome"] == "passed"
-assert "test_failure" in contents[3]["nodeid"]
-assert contents[3]["outcome"] == "failed"
+testdir.makepyfile(
+    test_module="""
+    import pytest
+
+    def test_success():
+        pass
+
+    def test_failure():
+        assert False
+
+    @pytest.fixture
+    def some_fixture():
+        yield
+        assert False
+
+    def test_failure_fixture_teardown(some_fixture):
+        assert True
+    """
+)
+dir = testdir.tmpdir / "replay"
+result = testdir.runpytest_subprocess(f"--replay-record-dir={dir}")
+assert result.ret != 0
+contents = [json.loads(s) for s in (dir / ".pytest-replay.txt").read().splitlines()]
+outcomes = {r["nodeid"]: r["outcome"] for r in contents if "outcome" in r}
+assert outcomes == {
+    "test_module.py::test_success": "passed",
+    "test_module.py::test_failure": "failed",
+    "test_module.py::test_failure_fixture_teardown": "failed",
+}
```
Updating to test my previous suggestion.
This misses a test for a setup failure; adding it.
I kinda assumed the user is mainly interested in the actual test outcome, but your guess is probably better.
@prusse-martin I've updated both the code and the tests to also consider the setup phase and to give priority to failed outcomes.
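For readers skimming the thread, the behaviour described above can be sketched roughly as follows (a hypothetical condensed helper, not the merged implementation; it reuses the `self.node_outcome` dict and per-phase `result` reports from the snippets above):

```python
def _record_phase_outcome(self, item, result):
    # Treat setup, call and teardown alike: the first phase seeds the outcome,
    # any later non-passing phase upgrades it, and a recorded "failed" is never
    # downgraded to "skipped" by a later phase.
    previous = self.node_outcome.get(item.nodeid)
    if previous is None or (previous != "failed" and not result.passed):
        self.node_outcome[item.nodeid] = result.outcome
```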
Thank you for fixing this.
Fixes #64
Added tests for both sequential and parallel usage.