
Replace constraint tests #1079

Open
p-snft opened this issue Jun 11, 2024 · 5 comments · May be fixed by #1080


@p-snft
Member

p-snft commented Jun 11, 2024

Currently, we use constraint tests to check whether a component/constraint produces the correct LP file. In fact, these are not unit tests but integration tests, with all the problems that entails.

  1. The LP files depend on Pyomo internals, so they change whenever Pyomo changes things. They are also not valid for (hypothetical) other back ends. In the past, we had to adjust them, e.g. because Pyomo changed the rounding between binary and decimal representations. Because of non-deterministic ordering in Pyomo, at the moment we just test whether the old and new LP files contain the same lines, not whether they appear in the same order. (See faf3c86 to understand that the tests require meaningless changes from time to time. A simplified sketch of this order-insensitive comparison follows this list.)
  2. They cover code from multiple parts of the code base. For example, if I change the constraint formulation for the Bus, every constraint test needs to be rewritten, not only the one for the Bus.
  3. They are hard to understand (for many), and failed tests are hard to fix. The latter is mostly because of the undefined order of lines: just looking at a diff will not reveal the difference, as the constraints might be in a different order but still be the same.
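
To illustrate point 1: the order-insensitive comparison boils down to something like the following (a simplified sketch, not the actual test code; the helper name is made up):

```python
def assert_lp_files_equivalent(new_lp_path, stored_lp_path):
    # Hypothetical helper: compare a freshly generated LP file against a
    # stored reference line by line, ignoring the line order, because
    # Pyomo does not write constraints in a deterministic order.
    with open(new_lp_path) as new_file, open(stored_lp_path) as old_file:
        assert sorted(new_file.readlines()) == sorted(old_file.readlines())
```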

I would suggest having only unit tests, in the sense that when you change something in the code, only one corresponding test (or a handful of tests) needs to be rewritten. For example, the tests for components might check their characteristic curves: the test fixes one of the flows at a time and checks whether the other flows show the expected values. This kind of test (see the sketch after the list)

  1. will use Pyomo, but the results will be independent of the backend,
  2. is easy to understand and
  3. allows internal constraint formulations to be changed without breaking the test, so it really tests the specified functionality, not the implementation.
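
A minimal sketch of what I have in mind (assuming the solph 0.5 API and an installed CBC solver; labels, values, and the choice of a Converter are just for illustration):

```python
import pandas as pd
import pytest

from oemof import solph


def test_converter_follows_conversion_factor():
    # Five timestamps with infer_last_interval=False give four intervals.
    es = solph.EnergySystem(
        timeindex=pd.date_range("2024-01-01", periods=5, freq="h"),
        infer_last_interval=False,
    )
    b_fuel = solph.buses.Bus(label="fuel")
    b_heat = solph.buses.Bus(label="heat")

    # Drive the model by fixing the fuel flow ...
    source = solph.components.Source(
        label="fuel_source",
        outputs={
            b_fuel: solph.flows.Flow(nominal_value=10, fix=[1, 0.5, 0.8, 0])
        },
    )
    boiler = solph.components.Converter(
        label="boiler",
        inputs={b_fuel: solph.flows.Flow()},
        outputs={b_heat: solph.flows.Flow()},
        conversion_factors={b_heat: 0.5},
    )
    sink = solph.components.Sink(
        label="heat_sink", inputs={b_heat: solph.flows.Flow()}
    )
    es.add(b_fuel, b_heat, source, boiler, sink)

    model = solph.Model(es)
    model.solve(solver="cbc")

    # ... and check that the heat flow follows the characteristic curve,
    # independently of how the constraints are formulated internally.
    heat = solph.processing.results(model)[(boiler, b_heat)]["sequences"]["flow"]
    # Only the four intervals matter (the sequence may carry a trailing row).
    assert list(heat)[:4] == pytest.approx([5.0, 2.5, 4.0, 0.0])
```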
@p-snft
Member Author

p-snft commented Jun 11, 2024

@lensum and I discussed this in a chat. He had some points in favour of LP file diff tests:

  • With I/O tests, the pipeline takes a lot longer to run because we need to actually solve the problems.
  • Writing tests for complex behaviour (minimum up-/downtime) can be tedious (though I believe this is also the case with the current pipeline).

And here are some counter-arguments to the points I posted earlier.

Please understand this as a "devil's advocate" situation; I'm not strictly against the change. :)

Pyomo dependency
I have little experience with that, but does Pyomo regularly change the structure of the LP files it generates? I would expect this to be limited to major version changes, if it happens at all.

Also, regarding the benefit with respect to other backends:

The backends would always need to provide the functionality to formulate the exact same problem as the "basic" solph backend (whichever it might be in the future) does. For all other backends, the tests would need to be rewritten either way (for example, backends that focus on stochastic optimization and thus create different constraints and results).
I think it also depends on how the different backends are incorporated into solph. If each backend gets its own repository, I (personally) don't see a reason to switch from the tried-and-tested LP-based tests to results-based tests.

Hard to understand
I recently learned that some people actually prefer reading and comparing LP files over result dataframes, so this might be personal preference (though I'm with you on this one).

Multiple parts of the code
I think both possible solutions (LP-based tests and solution-based tests) will test stuff from multiple parts of the code. While the LP file probably directly shows me the components involved in the constraints (making it easy to see whether it's something harmless like a renamed component, for example), solution-based tests might make it a little harder to find the reason for a test failure. (It could very well be that this opinion/experience is influenced by bad testing styles. ;) )

Changing constraint formulation without failing tests
True, this can be a benefit. But if I'm not mistaken, it also requires the tests to cover every edge case. Or to put it differently: it would be possible to create situations where a test passes, but some configuration of the constraint behaves differently than before due to the constraint changes. (That might be down to bad tests.)

@p-snft p-snft changed the title Remove constraint tests Replace constraint tests Jun 11, 2024
@p-snft p-snft linked a pull request Jun 11, 2024 that will close this issue
@lensum
Contributor

lensum commented Jun 12, 2024

* With I/O tests, the pipeline takes a lot longer to run because we need to actually solve the problems.

Is it possible to limit the testing pipeline to merges into dev (and changes on dev)? This way, devs could still run the tests locally, push changes to other branches without causing unnecessary computations, and everything that makes it into dev would be tested. Or is this how it's implemented right now?

@p-snft
Member Author

p-snft commented Jun 12, 2024

Is it possible to limit the testing pipeline to merges into dev (and changes on dev)? This way, devs could still run the tests locally, push changes to other branches without causing unnecessary computations, and everything that makes it into dev would be tested. Or is this how it's implemented right now?

I think this would not improve the situation much, as most changes happen on branches with an existing pull request targeting dev. Maybe it's more fruitful to have a model stub tailored so that solving is fast.

@lensum
Contributor

lensum commented Jun 12, 2024

most changes happen on branches with an existing pull request targeting dev.

True, I didn't think of this circumstance.

Maybe it's more fruitful to have a model stub tailored so that solving is fast.

Sorry, I don't understand what you mean by "model stub" here. Do you mean that
a) the optimization problems in the testing pipeline should be designed so that they solve fast (possibly using pytest fixtures), or
b) there should be a separate Model class for testing purposes that somehow solves faster, or
c) something else I can't think of?

@p-snft
Member Author

p-snft commented Jun 12, 2024

I mean option a). There should be a simplistic model with just slack buses and one component in the centre, everything set up so that creating the model, optimising it, and evaluating the results is quick.
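
As a rough sketch of such a stub (again assuming the solph 0.5 API; the fixture name and the slack costs are made up):

```python
import pandas as pd
import pytest

from oemof import solph


@pytest.fixture
def slack_energy_system():
    # Tiny energy system stub: a single bus framed by a slack source and
    # a slack sink. The individual test adds the component under test,
    # so the model stays small and is quick to build and to solve.
    es = solph.EnergySystem(
        timeindex=pd.date_range("2024-01-01", periods=4, freq="h"),
        infer_last_interval=False,
    )
    bus = solph.buses.Bus(label="slack_bus")
    es.add(
        bus,
        # Expensive slack flows: the solver only uses them if the tested
        # component cannot balance the bus by itself.
        solph.components.Source(
            label="slack_source",
            outputs={bus: solph.flows.Flow(variable_costs=1e6)},
        ),
        solph.components.Sink(
            label="slack_sink",
            inputs={bus: solph.flows.Flow(variable_costs=1e6)},
        ),
    )
    return es
```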
