
[Bug-Candidate]: Sequences from corpus do not adhere to time/block delay settings #1231

Open
rappie opened this issue Apr 5, 2024 · 6 comments

Comments

@rappie

rappie commented Apr 5, 2024

Describe the issue:

Sequences that have long time delays are still replayed from the corpus, even after I've changed the settings to this:

maxTimeDelay: 1
maxBlockDelay: 1
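
For context, these settings live in Echidna's YAML config; a minimal sketch of the reduced-delay setup that reuses a previously stored corpus (the corpusDir value here is an assumption, not from the original report):

  maxTimeDelay: 1
  maxBlockDelay: 1
  corpusDir: "corpus"   # existing corpus from the earlier run; path is an assumption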

Code example to reproduce the issue:

  1. Run Echidna normally
  2. Have an invariant break using long time/block delays that would not break with short delays
  3. Store corpus
  4. Lower time delays in config
  5. Run Echidna with new config
  6. Invariant still breaks even though the new config would not allow the sequences used to break it

Version:

Echidna 2.2.3

Relevant log output:

No response

@ggrieco-tob
Member

This is expected; the Echidna configuration will not enforce anything on the corpus. I think the same happens with the maximum ether sent. Perhaps this could be a feature of fuzz-utils, where a given corpus is "modified" to do X instead of Y. @rappie Can you please start a discussion there for this feature?

@rappie
Author

rappie commented May 6, 2024

Personally, I would expect the config settings to overrule everything, so I would consider this a bug, or at least unexpected behaviour. I've had similar issues in the past where I had to discard my corpus because I needed to blacklist certain functions (that were still in the corpus).
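
For context, that blacklisting is done through Echidna's filter options; a rough sketch, where the contract/function signature is only a placeholder:

  filterBlacklist: true                                   # treat the list as a blacklist
  filterFunctions: ["MyContract.someFunction(uint256)"]   # placeholder signature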

Is there no possibility of checking sequences from the corpus against the allowed settings before they are executed?

@mds1

mds1 commented May 17, 2024

@rappie What is your expected behavior here? My intuition is to agree with @ggrieco-tob that the corpus should still run with its original config.

Are you suggesting modifying the corpus based on the new config, or skipping sequences that aren't compatible with it? Modifying the corpus seems like it may cause sequences to unexpectedly pass only due to the config change, but skipping non-applicable ones seems OK. Based on crytic/fuzz-utils#51, I think you are suggesting modification? I'm interested in better understanding your use case.

@rappie
Author

rappie commented May 17, 2024

The behaviour I expect is that the config leads, meaning that sequences in the corpus that are illegal according to the current config should be skipped.

I'm not a fan of rewriting the corpus; this seems error-prone and could lead to unpredictable or unwanted behaviour.

My reasoning behind this:

  • The config describes how the fuzzer works and which actions are allowed
  • The corpus is a repository of interesting sequences to help the fuzzer discover new coverage in its current run
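
To make the "skip" semantics concrete, here is a minimal sketch (hypothetical types and names, not Echidna's internals) of filtering replayed sequences against the current delay limits; whether to drop whole sequences or only the offending transactions is a separate design choice:

  -- Hypothetical, simplified types; Echidna's real transaction/corpus types differ.
  data Tx = Tx { txTimeDelay :: Integer, txBlockDelay :: Integer }

  data DelayConfig = DelayConfig { cfgMaxTimeDelay :: Integer, cfgMaxBlockDelay :: Integer }

  -- A transaction is legal if both of its delays respect the current limits.
  txLegal :: DelayConfig -> Tx -> Bool
  txLegal cfg tx = txTimeDelay tx <= cfgMaxTimeDelay cfg
                && txBlockDelay tx <= cfgMaxBlockDelay cfg

  -- Skip (drop) any corpus sequence containing an illegal transaction,
  -- rather than replaying it verbatim.
  filterCorpus :: DelayConfig -> [[Tx]] -> [[Tx]]
  filterCorpus cfg = filter (all (txLegal cfg))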

@ggrieco-tob
Member

I'm still not sure about this. Let's suppose you do a campaign using sender: [A, B]. Later you decide to change to sender: [C, D]. Are you suggesting Echidna should reinterpret the corpus using the new parameters? I think it is better to tell the user: the following config options are going to invalidate the current corpus. We can even warn the user if they changed some parameter that affects the current corpus.
It seems that if you really want to preserve the corpus while changing some parameters, rewriting it (if possible) is the best option.

@rappie
Author

rappie commented Jun 11, 2024

Let's suppose you do a campaign using sender: [A, B]. Later you decide to change using sender: [C, D]. Are you suggesting echidna should reinterpret the corpus using the new parameters?

I'm not suggesting Echidna should ever reinterpret the corpus. In this case it would look at the transactions, consider them all invalid, and skip them all, so you're basically starting with a new corpus.

This is an extreme example though, and in this case it would indeed make sense to just discard the corpus.

The scenario I'm thinking about with filtering senders is when you are using a closed system of actors (let's say actors A, B, and C) and you want to debug an issue by reducing the number of actors and removing actor C. In this case you don't want the corpus to include any transactions by actor C, because they will mess up the closed system. You don't want to discard the corpus either, because you are just temporarily debugging, plus it might contain interesting transactions/values.
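
As a concrete (hypothetical) sketch of that debugging step, the sender list would shrink from three actors to two, and under the proposed behaviour any corpus transaction from the removed sender would simply be skipped:

  # before: closed system of three actors (example addresses)
  sender: ["0x10000", "0x20000", "0x30000"]

  # after: actor C removed for debugging; corpus transactions from
  # the removed sender would be skipped rather than replayed
  sender: ["0x10000", "0x20000"]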
