Pytest transition #1793

Open · wants to merge 82 commits into main

Conversation

@wandadars (Contributor) commented Sep 22, 2024

Updating the Python unit tests to use pytest uniformly across all the tests. Ray had mentioned at one point that the pytest transition was going a bit slow, so I thought I'd give it a shot. I still have about three test files left, and they are quite large.

Checklist

  • The pull request includes a clear description of this code change
  • Commit messages have short titles and reference relevant issues
  • Build passes (scons build & scons test) and unit tests address code coverage
  • Style & formatting of contributed code follows contributing guidelines
  • The pull request is ready for review

@bryanwweber (Member)

Thanks for starting this, Chris. If we're considering this change, I think we should also use native pytest fixtures instead of class initializers for test setup.
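
A minimal sketch of what that could look like, using a placeholder mechanism and test rather than code from this PR:

import pytest
import cantera as ct

@pytest.fixture
def gas():
    # Each test that requests `gas` gets a fresh Solution object,
    # replacing an attribute created in setUp() or a class initializer.
    return ct.Solution("h2o2.yaml")

def test_default_state(gas):
    # The fixture value is injected as an argument.
    assert gas.n_species > 0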

@ischoegl (Member)

> Thanks for starting this, Chris. If we're considering this change, I think we should also use native pytest fixtures instead of class initializers for test setup.

pytest fixtures involve a shift in paradigms that would require a complete rewrite of much of our test suite (at least based on my understanding). While there are advantages, I don’t think that this is the best investment of developer time, but YMMV. So I am personally 👍 on this PR as it takes a step in the right direction.

@wandadars (Contributor, Author)

> Thanks for starting this, Chris. If we're considering this change, I think we should also use native pytest fixtures instead of class initializers for test setup.

I started down that path, but got a bit mixed up and overwhelmed by the amount of changes, so I reverted to the standard class approach while I'm updating all of the tests. After that, I will see if I can make things more idiomatic to pytest.

codecov bot commented Sep 23, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 73.23%. Comparing base (3f7d0ff) to head (f905236).
Report is 7 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1793      +/-   ##
==========================================
- Coverage   73.24%   73.23%   -0.01%     
==========================================
  Files         381      381              
  Lines       54375    54375              
  Branches     9253     9254       +1     
==========================================
- Hits        39826    39823       -3     
- Misses      11578    11581       +3     
  Partials     2971     2971              


@wandadars (Contributor, Author) commented Sep 24, 2024

> Thanks for starting this, Chris. If we're considering this change, I think we should also use native pytest fixtures instead of class initializers for test setup.

> pytest fixtures involve a shift in paradigms that would require a complete rewrite of much of our test suite (at least based on my understanding). While there are advantages, I don’t think that this is the best investment of developer time, but YMMV. So I am personally 👍 on this PR as it takes a step in the right direction.

I may regret these words, but so far the fixtures haven't been too bad with regard to replacing the standard base class/derived class logic and the older setup and teardown class methods. I haven't pushed the changes yet; I want to finish converting one of the larger files first (I started with the smallest test files and am incrementally moving toward the larger ones, making sure the tests for each file execute the same).
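
For reference, the kind of mapping described here, sketched with placeholder names rather than code from the PR: a setUpClass/tearDownClass pair becomes a class-scoped, autouse fixture.

import pytest
import cantera as ct

class TestExamplePhase:
    # Class-scoped, autouse fixture: runs once before the first test in the
    # class (like setUpClass) and resumes after the last test (like tearDownClass).
    @pytest.fixture(scope="class", autouse=True)
    def setup_phase(self, request):
        request.cls.gas = ct.Solution("h2o2.yaml")
        yield
        # teardown code (the old tearDownClass body) would go here

    def test_species_present(self):
        assert self.gas.n_species > 0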

@wandadars (Contributor, Author)

In test_purefluid.py, I adjusted the tolerance factor from 50 to 70 in the test_pressure test, as there seems to be some difference between the equation used in assertNear and the pytest approx() function. The test was failing with pytest.approx because it was comparing something like 36,000 ± 30,000 to 68,900, which had passed before.
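
For context on why the two checks can disagree, here is a sketch of the pytest side only; the exact formula in the old assertNear helper is not reproduced here, and the numbers are made up to mirror the magnitudes above.

import pytest

# pytest.approx(expected, rel=r, abs=a) accepts `actual` when
#     |actual - expected| <= max(r * |expected|, a)
# i.e. the tolerance band scales with the expected value alone. A helper that
# scales the tolerance by both values (or their mean) can accept pairs that
# approx rejects, which is one reason a tolerance factor may need loosening
# during the conversion.
expected = 68_900.0
actual = 36_000.0

assert actual != pytest.approx(expected, rel=0.4)  # band of ±27,560 excludes 36,000
assert actual == pytest.approx(expected, rel=0.5)  # band of ±34,450 includes it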

@wandadars (Contributor, Author)

@speth I converted the test_composite.py file away from assertArrayNear() and used approx() in its place. I know @bryanwweber had suggested using assert_allclose(). I just wanted to double-check what should be done before I go through all the tests.

The following script generates the output for the two different approaches.

import numpy as np
import pytest
import numpy.testing as npt

# Define two arrays for comparison
A = np.array([1.0, 2.0, 3.0])
B = np.array([1.0, 2.0001, 3.0002])

def test_compare_with_pytest_approx():
    # Use pytest.approx() directly on the NumPy arrays
    assert A == pytest.approx(B, rel=1e-6)

def test_compare_with_assert_allclose():
    # Use numpy.testing.assert_allclose to compare the arrays
    npt.assert_allclose(A, B, rtol=1e-6)

approx() output:

E   assert array([1., 2., 3.]) == approx([1.0 ± 1.0e-06, 2.0001 ± 2.0e-06, 3.0002 ± 3.0e-06])
E    +  where approx([1.0 ± 1.0e-06, 2.0001 ± 2.0e-06, 3.0002 ± 3.0e-06]) = <function approx at 0x70b3e1810820>(array([1.    , 2.0001, 3.0002]), rel=1e-06)
E    +    where <function approx at 0x70b3e1810820> = pytest.approx

assert_allclose() output:

E   AssertionError: 
E   Not equal to tolerance rtol=1e-06, atol=0
E   
E   Mismatched elements: 2 / 3 (66.7%)
E   Max absolute difference: 0.0002
E   Max relative difference: 6.66622225e-05
E    x: array([1., 2., 3.])
E    y: array([1.    , 2.0001, 3.0002])

@speth (Member) commented Oct 2, 2024

Interesting, that's not at all what I get for the approx case. Using pytest 8.0.1, I get:

    def test_compare_with_pytest_approx():
        # Use pytest.approx() directly on the NumPy arrays
>       assert A == pytest.approx(B, rel=1e-6)
E       assert array([1., 2., 3.]) == approx([1.0 ±...02 ± 3.0e-06])
E         
E         comparison failed. Mismatched elements: 2 / 3:
E         Max absolute difference: 0.00019999999999997797
E         Max relative difference: 6.666666666665932e-05
E         Index | Obtained | Expected        
E         (1,)  | 2.0      | 2.0001 ± 2.0e-06
E         (2,)  | 3.0      | 3.0002 ± 3.0e-06

test_approx.py:11: AssertionError

While the output is fairly similar, what I like about the approx output is that it shows just the specific failing elements, which is helpful when the arrays are larger and can't be displayed in full (or where displaying them in full isn't particularly helpful).

What version of pytest are you using?

@wandadars (Contributor, Author)

I must have an old version on my Ubuntu VM, because on my other machine, which uses pytest 8.2.2, I see the output you've shown in your example. So the consensus is that approx() should be what replaces assertArrayNear() then?

@bryanwweber (Member)

> So the consensus is that approx() should be what replaces assertArrayNear() then?

That seems good to me!
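
For anyone converting files, the agreed-on replacement pattern then looks roughly like this; the old assertArrayNear signature is paraphrased from memory and the tolerances are illustrative.

import numpy as np
import pytest

computed = np.array([1.0, 2.0, 3.0])
reference = np.array([1.0, 2.0, 3.0])

# Before (unittest-style helper; signature paraphrased):
#     self.assertArrayNear(computed, reference, rtol=1e-8, atol=1e-12)
# After (plain pytest):
assert computed == pytest.approx(reference, rel=1e-8, abs=1e-12)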

@wandadars (Contributor, Author)

I think I've got most of the suggestions implemented. I'm still perplexed about the failing tests on GitHub, though.

@bryanwweber (Member)

The tests are failing because the test/data path is not on the search path for input files. I'm guessing it's something to do with your path changes for those tests.

Review comment on lines +50 to +63:
@pytest.fixture(scope="session", autouse=True)
def cantera_setup():
    """
    Fixture to set up Cantera environment for the entire test session.
    """
    # Add data directories
    cantera.add_directory(TEST_DATA_PATH)
    cantera.add_directory(CANTERA_DATA_PATH)
    cantera.print_stack_trace_on_segfault()
    cantera.CanteraError.set_stack_trace_depth(20)
    cantera.make_deprecation_warnings_fatal()

    # Yield control to tests
    yield

A Member replied:

I may be wrong about what's idiomatic for pytest, but it seems like the simplest thing to do would be to execute these functions directly at the module level, as done in the old utilities.py. The use case for a session-level fixture seems to be creating a singleton object of some sort that is actually used in some other tests, but here you just want the side effects that affect the global module state.

This should fix the CI failures. I think the reason you don't see them locally is that when you run the tests through SCons, it already adds test/data to the Cantera data path. The CI runs pytest directly, so it has to rely on this code for the path update.
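
A minimal sketch of that module-level alternative, reusing the calls from the fixture above; the module name and the path constants are assumptions about the surrounding test helpers.

# utilities.py (or conftest.py): executed at import/collection time, so the
# global Cantera state is configured before any test runs, whether the suite
# is launched through SCons or by invoking pytest directly.
import cantera

cantera.add_directory(TEST_DATA_PATH)      # path constants assumed to be
cantera.add_directory(CANTERA_DATA_PATH)   # defined in this module
cantera.print_stack_trace_on_segfault()
cantera.CanteraError.set_stack_trace_depth(20)
cantera.make_deprecation_warnings_fatal()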
