Pytest transition #1793
base: main
Conversation
…xunit parsing that was causing issues with generic setup() methods in classes for pytest
…hould have been setup_method
…od because it was getting parsed as a nose hook
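The setup()/setup_method() issue in these commits comes from pytest's nose-compatibility support: a method literally named setup() is collected as a nose-style hook and run automatically before each test, which collides with an ordinary helper method of the same name. A minimal sketch of the fix (the class, attribute, and test names here are hypothetical, not from the Cantera suite):

```python
class TestExample:
    # A method named ``setup`` would be picked up by pytest as a
    # nose-compatibility hook and run before every test, even if it was
    # only meant as an ordinary helper. Renaming it to ``setup_method``
    # opts into the documented xunit-style per-test hook instead.
    def setup_method(self):
        self.values = [1, 2, 3]

    def test_sum(self):
        assert sum(self.values) == 6
```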
Thanks for starting this, Chris. If we're considering this change, I think we should also use native pytest fixtures instead of class initializers for test setup.

I started down that path, but got mixed up a bit and overwhelmed by the amount of changes, so I reverted to the standard class approach while I'm updating all of the tests. After that I will see if I can make things more idiomatic to pytest.
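As a rough illustration of the fixture-based style being discussed — a hedged sketch, not code from this PR; make_gas_state and the dict contents are stand-ins for real Cantera objects:

```python
import pytest

def make_gas_state():
    # Stand-in for constructing e.g. a configured ct.Solution object.
    return {"T": 300.0, "P": 101325.0}

# Instead of building shared state in a class initializer or
# setup_method, expose it as a fixture that tests request by name.
@pytest.fixture
def gas_state():
    return make_gas_state()

def test_default_temperature(gas_state):
    assert gas_state["T"] == pytest.approx(300.0)
```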
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1793      +/-   ##
==========================================
- Coverage   73.24%   73.23%   -0.01%
==========================================
  Files         381      381
  Lines       54375    54375
  Branches     9253     9254       +1
==========================================
- Hits        39826    39823       -3
- Misses      11578    11581       +3
  Partials     2971     2971

View full report in Codecov by Sentry.
I may regret these words, but so far the fixtures haven't been too bad with regard to replacing the standard base class/derived class logic and the older setup and teardown class methods. I haven't pushed the changes yet; I want to finish converting one of the larger files first (I start with the smallest test files and incrementally move towards the larger ones, making sure the tests for each file execute the same).
Force-pushed from 61d1e22 to 9b06dad
In test_purefluid.py, I adjusted the tolerance factor from 50 to 70 in the test_pressure test, as there seems to be some difference between the equation used in assertNear and the pytest approx() function. The test was failing using pytest.approx, as it was comparing something like 36,000 ± 30,000 to 68,900, which passed before.
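For reference, pytest.approx defaults to a relative tolerance of 1e-6 (plus a tiny absolute tolerance), and both can be widened through the rel and abs arguments; the numbers below are illustrative, not taken from test_pressure:

```python
import pytest

# Default tolerance: relative 1e-6.
assert 1.0000001 == pytest.approx(1.0)

# Widened tolerance: rel=0.5 accepts values within 50% of the expected
# value, so 68,900 compares equal to 50,000 +/- 25,000.
assert 68900.0 == pytest.approx(50000.0, rel=0.5)

# At the default tolerance the same comparison fails.
assert 68900.0 != pytest.approx(50000.0)
```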
@speth I converted the test_composite.py file away from assertArrayNear() and used approx() in its place. I know @bryanwweber had suggested using assert_allclose(). I just wanted to double-check on what should be done before I go through all the tests. The following script generates the output for the two different approaches.

approx() output:

assert_allclose() output:
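The comparison script and its two outputs did not survive here; a minimal reconstruction of the kind of script being discussed (the arrays are illustrative, not from test_composite.py) might look like:

```python
import numpy as np
import pytest
from numpy.testing import assert_allclose

computed = np.array([1.0, 2.0, 3.0000001])
expected = np.array([1.0, 2.0, 3.0])

# pytest.approx compares element-wise against numpy arrays; on failure,
# pytest's assertion rewriting prints the mismatching elements.
assert computed == pytest.approx(expected, rel=1e-5)

# assert_allclose raises AssertionError with its own report, including
# the max absolute and relative differences.
assert_allclose(computed, expected, rtol=1e-5)
```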
Interesting, that's not at all what I get for the approx() output.

While the output is fairly similar, I like what the approx() output shows. What version of pytest are you using?
I must have an old version on my Ubuntu VM, because on my other machine, using pytest version 8.2.2, I see the output you've shown in your example. So the consensus is that approx() should replace assertArrayNear() then?
That seems good to me!
I think I've got most of the suggestions implemented. I'm still perplexed about the failing tests on GitHub, though.
The tests are failing because the test data directories haven't been added to Cantera's search path.
…data_path fixtures
…ll test classes to pytest-style tests
@pytest.fixture(scope="session", autouse=True)
def cantera_setup():
    """
    Fixture to set up the Cantera environment for the entire test session.
    """
    # Add data directories
    cantera.add_directory(TEST_DATA_PATH)
    cantera.add_directory(CANTERA_DATA_PATH)
    cantera.print_stack_trace_on_segfault()
    cantera.CanteraError.set_stack_trace_depth(20)
    cantera.make_deprecation_warnings_fatal()

    # Yield control to tests
    yield
I may be wrong about what's idiomatic for pytest, but it seems like the simplest thing to do would be to execute these functions directly at the module level, as done in the old utilities.py. The use case for a session-level fixture seems like it would be if it created a singleton object of some sort to actually be used in some other tests, but here you just want the side effects that affect the global module state.
This should fix the CI failures. I think the reason you don't see them locally is that if you're running the tests through SCons, it's already adding test/data to the Cantera data path. The CI runs pytest directly, so it has to rely on this for the path update.
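The module-level alternative described above could be sketched roughly as follows — assumptions: this lives in a conftest.py, the path constants are hypothetical stand-ins, and the import is guarded only so the sketch runs in environments without Cantera installed:

```python
from pathlib import Path

# Hypothetical locations; the real suite derives these from its layout.
TEST_DATA_PATH = Path("test") / "data"
CANTERA_DATA_PATH = Path("build") / "data"

try:
    import cantera
except ImportError:
    cantera = None  # guard for this sketch only

if cantera is not None:
    # Run once at import time, purely for the side effects on Cantera's
    # global state, instead of wrapping them in a session-scoped fixture.
    cantera.add_directory(str(TEST_DATA_PATH))
    cantera.add_directory(str(CANTERA_DATA_PATH))
    cantera.print_stack_trace_on_segfault()
    cantera.CanteraError.set_stack_trace_depth(20)
    cantera.make_deprecation_warnings_fatal()
```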
Updating the Python unit tests to use pytest uniformly in all the tests. Ray had mentioned at one point that the pytest transition was going a bit slowly, so I thought I'd give it a shot. I still have about 3 test files left, and they are quite large.
Checklist
- Tests pass (scons build & scons test) and unit tests address code coverage