
Added code coverage tracking to github action #378

Open · wants to merge 9 commits into base: devel

Conversation

@caleb-sitton-inl (Collaborator):

Pull Request Description

What issue does this change request address?

#376

What are the significant changes in functionality due to this change request?

This request replaces the call to run_tests in the part of the github action running on linux with a call to check_py_coverage.sh. This does not impact the functionality of the tests being run in any way. It provides a brief message with an easily visible percentage for code coverage when the action is complete. This message also directs users to a thorough html report on coverage available under the 'Artifacts' section.

The disadvantage of this addition is the added run time whenever the github action is activated. Running locally, checking coverage adds about 19% overhead relative to plain run_tests. Since coverage is only run for one of the two calls to run_tests in this github action, the impact of the overhead is halved to roughly 9-10% (considering only time spent executing tests, not setup and other activities the github action performs). While checking coverage on only one OS does save time, it would not capture deviations in coverage caused by OS-dependent code paths; an example is the selection of a pyomo solver, which can depend on the OS platform running the script.
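The overhead arithmetic above can be sketched with illustrative numbers (the 100-unit baseline is an assumption, not a measured time):

```shell
# Illustrative only: the 100-unit baseline is hypothetical, not a measurement.
plain=100                          # wall time of plain run_tests on one OS
with_cov=$((plain * 119 / 100))    # ~19% slower with coverage enabled
total_plain=$((plain * 2))         # two OS jobs, neither with coverage
total_with=$((plain + with_cov))   # coverage on only one of the two jobs
echo "overhead: $(( (total_with - total_plain) * 100 / total_plain ))%"  # prints "overhead: 9%"
```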


For Change Control Board: Change Request Review

The following review must be completed by an authorized member of the Change Control Board.

  1. Review all computer code.
  2. If any changes occur to the input syntax, there must be an accompanying change to the user manual and xsd schema. If the input syntax change deprecates existing input files, a conversion script needs to be added (see Conversion Scripts).
  3. Make sure the Python code and commenting standards are respected (camelBack, etc.) - see the wiki for details.
  4. Automated tests should pass.
  5. If significant functionality is added, there must be tests added to check this. Tests should cover all possible options. Multiple short tests are preferred over one large test.
  6. If the change modifies or adds a requirement or a requirement-based test case, the Change Control Board's Chair or designee also needs to approve the change. The requirements and the requirements test shall be in sync.
  7. The merge request must reference an issue. If the issue is closed, the issue close checklist shall be done.
  8. If an analytic test is changed/added, the analytic documentation must be updated/added.
  9. If any test used as a basis for documentation examples has been changed, the associated documentation must be reviewed to assure the text matches the example.

@caleb-sitton-inl caleb-sitton-inl added the priority-normal tasks that may be time sensitive, but are not necessarily critical label Aug 2, 2024
@caleb-sitton-inl caleb-sitton-inl added the Do Not Merge PR is not ready for merging label Sep 2, 2024
@moosebuild (Collaborator):

Job CentOS 8 on ce3d555: invalidated by @GabrielSoto-INL

@moosebuild (Collaborator):

Job CentOS 8 on ce3d555: invalidated by @GabrielSoto-INL

problem with cloning, trying again?

@GabrielSoto-INL (Collaborator):

Just a general comment for clarification: seems there are two ways of accessing the coverage, correct?

  1. download the "Artifacts" left over from the coverage run in the "Checks" tab or
  2. navigate here and scroll to the bottom of the page:

Also, an additional note that after this PR is merged, @caleb-sitton-inl it may be good to summarize this in a Discussion for developers in this repo

@GabrielSoto-INL (Collaborator) left a comment:

@caleb-sitton-inl and I have iterated on this previously, have left a couple more clarification comments and a suggestion for code clean-up.

@@ -33,4 +46,4 @@ jobs:
- run: $Env:RAVEN_LIBS_NAME = "raven_libraries_"+(Get-Location).Path.Split("\")[-4]; bash ../raven/scripts/establish_conda_env.sh --install --conda-defs $HOME/Miniconda3/etc/profile.d/conda.sh
- run: cd ../raven; bash ./build_raven
- run: bash ../raven/run_tests --library-report
- run: bash ../raven/run_tests -j4 --plugins --re=HERON
- run: bash ../raven/run_tests -j4 --plugins --re=HERON/tests
Collaborator:

just to confirm -- this doesn't change any functionality, it just cleans up the reporting? since there are no tests run outside of the HERON/tests directory?

Collaborator Author:

Actually, because of some strange paths in the github runner, with the previous version all plugin tests for HERON, TEAL, and the ExamplePlugin were being run. This limits it to only the HERON tests.

# Reducing the frequency of coverage checks may be preferable if this increases.
# report_py_coverage is being called twice, once within check_py_coverage to print to the terminal and once here to get data for the annotation
- run: >
bash coverage_scripts/check_py_coverage.sh -j4 &&
Collaborator:

So all of these commands have to be within the same run command because they rely on having the same environment variable definitions, I presume. All of the &&s get a little confusing, at least for me, so I think setting some variables at the top of this .yml might help clean up the code? That way you can refer to things like DATA_FILE without having to define it all within this run job. See the env section defined above the jobs section here: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#defining-environment-variables-for-a-single-workflow

Collaborator Author:

I put them all in the same run mostly because they're related, so I just assumed that the same run would be best for stylistic reasons and just to avoid any potential errors from splitting them up that I might not spot. I agree that it makes more sense to split it though if possible. I can definitely move the DATA_FILE and COV_RCFILE definitions to somewhere more out of the way using the env section. I think I can separate the check_py_coverage line out into its own run job as well.

Collaborator Author:

On second thought, since DATA_FILE and COV_RCFILE definitions both use pwd, I think I do actually need to define them in the same run step as they're used. The check_py_coverage can still be moved to a different run step though.

Collaborator:

Hm, I would assume that pwd would give about the same value each time (if not, other things would fail I think).

Collaborator Author:

The issue is that, to be accessible where they're used, the environment variables need to either be defined over an entire section with env or defined within the same run, and in env, I don't believe you can run bash commands, including pwd.
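As a hedged sketch of one possible workaround (not necessarily what this PR does): GitHub Actions lets a run step export values computed with bash, including pwd, to all later steps by appending KEY=value lines to the file named by $GITHUB_ENV. The DATA_FILE and COV_RCFILE paths below are illustrative, not the actual values in the workflow:

```shell
# Sketch only: exports pwd-derived variables so later workflow steps see them.
# Outside of a GitHub Actions runner, fall back to a temp file for demonstration.
GITHUB_ENV="${GITHUB_ENV:-$(mktemp)}"
echo "DATA_FILE=$(pwd)/coverage_data" >> "$GITHUB_ENV"                   # hypothetical path
echo "COV_RCFILE=$(pwd)/coverage_scripts/.coveragerc" >> "$GITHUB_ENV"   # hypothetical path
# Subsequent run steps in the same job could then reference $DATA_FILE and $COV_RCFILE.
```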

@caleb-sitton-inl (Collaborator Author):

Just a general comment for clarification: seems there are two ways of accessing the coverage, correct?

  1. download the "Artifacts" left over from the coverage run in the "Checks" tab or
  2. navigate here and scroll to the bottom of the page:
Also, an additional note that after this PR is merged, @caleb-sitton-inl it may be good to summarize this in a Discussion for developers in this repo

Yes, you can view the list of artifacts (which currently just contains the coverage results) in either of these ways. As for the Discussion, sounds good; I'll go ahead and create that once it's merged.

@joshua-cogliati-inl (Collaborator) left a comment:

This generally looks reasonable.

SRC_DIR=`(cd src && pwd)`

# get display var
DISPLAY_VAR=`(echo $DISPLAY)`
Collaborator:

Why is this not just DISPLAY_VAR=$DISPLAY. or possibly DISPLAY_VAR=${DISPLAY:-default} if you need a default value.
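The two simpler forms being suggested, as a runnable sketch (the ":0" default value is purely illustrative):

```shell
# Simpler equivalents of DISPLAY_VAR=`(echo $DISPLAY)`:
DISPLAY_VAR=$DISPLAY           # plain assignment; same result, no subshell
DISPLAY_VAR=${DISPLAY:-:0}     # with a fallback when DISPLAY is unset/empty; ":0" is illustrative
echo "$DISPLAY_VAR"
```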

Collaborator Author:

I don't actually have a great understanding of the importance of the DISPLAY manipulation. This was a carryover from the check_py_coverage script in RAVEN on which this script is based. After studying it some, I don't think it's necessary, and it does not have an effect on the functionality of the script, at least locally, so I'll plan to go ahead and remove lines 12-15 and 26-27.


@caleb-sitton-inl caleb-sitton-inl removed the Do Not Merge PR is not ready for merging label Oct 2, 2024
@PaulTalbot-INL (Collaborator):

This looks like it's coming along great. Have we documented how to view the coverage reports, and placed that documentation somewhere new developers can get familiar with it?

@caleb-sitton-inl (Collaborator Author):

This looks like it's coming along great. Have we documented how to view the coverage reports, and placed that documentation somewhere new developers can get familiar with it?

@PaulTalbot-INL Not yet; the current plan is to create a github Discussion with that information once the PR has been merged. Also, when the github action is run, it will create an annotation that should be relatively obvious and will contain both the overall number for coverage as well as a very brief description of how to access the full report.
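For context, a minimal sketch of how a run step can emit such an annotation via GitHub Actions workflow commands; the coverage value and wording here are hypothetical, not taken from the PR:

```shell
# Hypothetical value; in the workflow, report_py_coverage would supply the real number.
COVERAGE_PCT="85%"
# The ::notice:: workflow command surfaces an annotation on the run summary page.
echo "::notice title=Code coverage::Total coverage: ${COVERAGE_PCT}. Download the full HTML report from the 'Artifacts' section."
```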
