
refactor: set rollup-da to near original repo. go mod tidy #165

Open · wants to merge 2 commits into main
Conversation

taco-paco (Contributor)

No description provided.

@taco-paco taco-paco requested a review from Hyodar May 17, 2024 10:01
Hyodar (Contributor) commented May 17, 2024

Noticed the blobs are not being properly decoded in the integration test. This is most likely the reason: Nuffle-Labs/data-availability@f771f36. Now blobs contain only the data, while the blobs we were using before are LegacyBlobs. This is not a trivial change, because we then need to get the commitment from somewhere (or maybe just use the transactionId?) and coordinate an update. I'd like us to be compatible with both blob formats on the operator. Oh, and there's also a Go sidecar client now.
Please check the possible approaches here and let's keep this open a bit more.
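For the dual-format idea, a minimal sketch could look like the following. Note this uses a made-up wire layout and hypothetical names (`DecodedBlob`, `decode_blob`), not the actual near-da encoding; it only illustrates the "try legacy, fall back to data-only" shape:

```rust
// Hypothetical illustration: a LegacyBlob carries a 32-byte commitment
// followed by a u32 length and the data; the new format is raw data only.
// The real encodings in Nuffle-Labs/data-availability differ.

#[derive(Debug, PartialEq)]
struct DecodedBlob {
    commitment: Option<[u8; 32]>,
    data: Vec<u8>,
}

/// Try the (assumed) legacy layout first; if it doesn't parse
/// consistently, treat the whole payload as raw data (new format).
fn decode_blob(payload: &[u8]) -> DecodedBlob {
    if payload.len() >= 36 {
        let (head, rest) = payload.split_at(32);
        let len = u32::from_le_bytes(rest[..4].try_into().unwrap()) as usize;
        if rest.len() - 4 == len {
            let mut commitment = [0u8; 32];
            commitment.copy_from_slice(head);
            return DecodedBlob {
                commitment: Some(commitment),
                data: rest[4..].to_vec(),
            };
        }
    }
    DecodedBlob { commitment: None, data: payload.to_vec() }
}
```

A real implementation would need to handle the ambiguity where a new-format payload happens to parse as a legacy blob, e.g. by versioning the MQ message instead of sniffing bytes.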

Hyodar (Contributor) left a comment

As mentioned above.

feat: commitment computed on indexer side. Should probably be done in contract itself
@taco-paco taco-paco requested a review from Hyodar May 22, 2024 08:47
taco-paco (Contributor, Author) commented May 22, 2024

The issue is fixed by computing the commitment on the indexer side. I assume the near-da contract will be updated in the future and will maybe include a commitment itself. We'll see.


```rust
pub(crate) fn try_transform_payload(payload: &[u8]) -> Result<Vec<u8>> {
    let submit_request: SubmitRequest = borsh::de::from_slice(payload)?;
    let commited_blob: CommitedBlob = Blob::new(submit_request.data).into();
```

commited -> committed
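The commitment-on-indexer approach can be sketched roughly like this. The type and function names are hypothetical, and std's `DefaultHasher` is used purely as a stand-in for the real commitment scheme (which in near-da is cryptographic, not a u64 hash):

```rust
// Illustrative only: compute a "commitment" over the blob data on the
// indexer side, so downstream consumers don't need the legacy blob format.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct CommittedBlob {
    commitment: u64, // stand-in; a real commitment would be e.g. 32 bytes
    data: Vec<u8>,
}

fn commit(data: Vec<u8>) -> CommittedBlob {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    CommittedBlob { commitment: hasher.finish(), data }
}
```

The design question raised in the commit message still stands: if the contract itself later produces the commitment, this indexer-side computation becomes redundant and the two must agree.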

Hyodar (Contributor) commented May 22, 2024

Overall everything seems great. What I don't like too much is that this is a breaking change, and integrating it would require full coordination between nodes and the relayer. Even though we're still under development, it'd be much better to first try decoding as a legacy blob and, if that fails, decode as the current blob format, also keeping compatibility in the MQ data.

taco-paco (Contributor, Author):

> Overall everything seems great, what I don't like too much about this is that it's breaking

I don't think we should be concerned about backwards compatibility at this point. It would overcomplicate the logic, which is IMO unnecessary right now.

> integrating this would require full coordination between nodes and relayer

It will require at least updating the relayer; otherwise there's no point in this PR.

> decoding it as a legacy blob and, if it doesn't work, we decode it as the current blob format, also keeping compatibility in the MQ data

If you mean the relayer-indexer side, that would imply the relayer wasn't updated, and then there's no point in this PR. If this is about the indexer-operator side, I'd prefer to avoid these complications while we still can at this stage.

Hyodar (Contributor) commented May 28, 2024

Sadly, with a running environment we do need to worry about compatibility. It's not easy to coordinate updates, so breaking updates should either be scheduled or rolled out smoothly, e.g. once every operator is running the backwards-compatible version, we introduce the new version and nothing breaks. For this one I think we can do that without much of an issue, so I'd prefer the latter approach.

taco-paco (Contributor, Author):

It’s not only about this PR but about other breaking changes like it. If you choose migrations even when they can be avoided, you end up with spaghetti code spanning versions, and right now there’s no reason for that. This PR can be postponed until a scheduled update and merged then.

Hyodar (Contributor) commented May 28, 2024

I agree, I don't think this one is urgent. Though my understanding is that whether something needs to be breaking is a case-by-case judgment, depending on how big the impact of a compatibility layer is. If it would mean a big rework, it's not worth it; if it's a small adjustment, it's better to ship the feature alongside a deprecation and drop the backward compatibility later.
