
"My Cases" fail to load when connected to a full node instead of infura #169

Open
Tristannn1337 opened this issue Apr 3, 2021 · 11 comments

Comments

@Tristannn1337

I have my own full node that I connect MetaMask to. It works for listing token balances, joining courts, swapping tokens, etc. However, the My Cases page and the Cases panel on the Home page both fail to load when I'm connected to my full node; they load fine when I connect to Infura. I have 0 cases of any kind in my account.

@0xferit
Member

0xferit commented Apr 3, 2021

Can you please provide some screenshots and also console logs?

@Tristannn1337
Author

Full Node
[screenshot: FullNodeHome]
[screenshot: FullNodeMyCases]

Infura
[screenshot: InfuraHome]
[screenshot: InfuraMyCases]

Lemme know if I can do anything else!

@shalzz
Contributor

shalzz commented Apr 5, 2021

This looks like it could be an issue with your full node.
Is your full node fully synced and running on mainnet? If so, can you look in the network tab, find the failing RPC request, and try calling it directly against your full node? Please post your results here.

@Tristannn1337
Author

Sorry it's taken so long for me to respond; this required diving deeper into Geth than I ever have before.

I've managed to narrow the issue down to an eth_getLogs request that times out. The request includes a fromBlock but no toBlock, and it appears to cover too many blocks for my node to handle at once without modifying Geth's defaults. I can confirm that my node is fully synced, on mainnet, and capable of responding to eth_getLogs requests.

In my testing, I found that requesting fromBlock:0x600000 and toBlock:0x700000 took maybe a second or two for a response while going all the way fromBlock:0x600000 toBlock:0x800000 resulted in a timeout.
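For reference, the shape of the request in question can be sketched like this (the helper function and address below are illustrative placeholders, not the dApp's actual code):

```python
import json

def get_logs_payload(address, from_block, to_block=None):
    """Build an eth_getLogs JSON-RPC payload. Leaving out toBlock
    makes the node scan all the way to the chain head, which is the
    open-ended request that was timing out."""
    flt = {"fromBlock": from_block, "address": address}
    if to_block is not None:
        flt["toBlock"] = to_block
    return {"jsonrpc": "2.0", "method": "eth_getLogs", "params": [flt], "id": 1}

# Bounded range (answered in a second or two in the tests above):
bounded = get_logs_payload("0x" + "ab" * 20, "0x600000", "0x700000")
# Open-ended range (the one that timed out):
open_ended = get_logs_payload("0x" + "ab" * 20, "0x600000")
print(json.dumps(bounded))
```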

@hbarcelos
Contributor

What version of geth is your node currently running?
Also, I think a Full Node is not enough; you'd need an Archive Node to be able to use Kleros.

@Tristannn1337
Author

At the time, it was latest. I think I'm behind a version right now.

If an archive node is required, then consider this a request to support full nodes. =)

@hbarcelos
Contributor

hbarcelos commented Jun 25, 2021

If an archive node is required, then consider this a request to support full nodes. =)

Unfortunately this is not possible.
The Kleros Court is required to be able to load every dispute ever created. Full Nodes store only the last 128 blocks. That's why we need an Archive Node.

For more info regarding the different types of nodes, please read this article: Ethereum Full Node vs Archive Node.

@Tristannn1337
Author

Tristannn1337 commented Jun 26, 2021

The current state of the entire blockchain is accessible with a full node; archive nodes additionally make historical state accessible. The 128 figure is the number of recent blocks for which a full node keeps both the current and the previous state accessible. Archive nodes should only really be necessary for block explorers.

So, if you need an archive node, that means you're overwriting old disputes with new disputes.

@hbarcelos
Contributor

Not everything is stored directly on state.

Kleros' contracts use Events because otherwise it would be way too expensive to store everything related to disputes.
Ethereum Events and Logs are a cheaper form of storage because only a Bloom Filter of them is stored in the block header, not in contract state.

For example, when someone submits evidence to a case, the call will produce an Evidence event as described by the ERC-1497: Evidence Standard. The Evidence event has the following payload:

event Evidence(
        IArbitrator indexed _arbitrator,  // The address of the arbitrator contract (i.e. the current KlerosLiquid contract)
        uint256 indexed _evidenceGroupID, // The group of evidence this belongs to. Usually there is 1 group for each item that could go into arbitration
        address indexed _party,           // The address of the party who submitted the evidence
        string _evidence                  // The URL of the evidence file, stored on IPFS
);

This means that we need to reach an archive node in order to retrieve the underlying event data; the Bloom Filters stored in blocks only tell us which blocks may contain it.
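Once a log entry is retrieved, decoding it is mechanical: the three indexed parameters land in the log's topics, and the non-indexed string is ABI-encoded in the data field. A minimal sketch (the function name and field layout are illustrative, following standard ABI encoding, not Kleros code):

```python
def decode_evidence_log(log):
    """Decode an ERC-1497 Evidence log entry.
    Indexed params are in `topics`; the non-indexed string is in `data`."""
    topic0, arbitrator, group_id, party = log["topics"]
    # data is ABI-encoded: 32-byte offset, 32-byte length, then the UTF-8 bytes
    raw = bytes.fromhex(log["data"][2:])
    offset = int.from_bytes(raw[:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    evidence = raw[offset + 32:offset + 32 + length].decode()
    return {
        "arbitrator": "0x" + arbitrator[-40:],   # address = last 20 bytes of the topic
        "evidenceGroupID": int(group_id, 16),
        "party": "0x" + party[-40:],
        "evidence": evidence,
    }
```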

Without an archive node, none of the information highlighted below could be displayed:

[screenshot: evidence details in a Kleros Court case]

@Tristannn1337
Author

I confirmed that my full node is perfectly capable of retrieving logs from a large number of blocks at once, at any depth in the chain's history, but the request from the Kleros dApp searches every block since account creation. It would probably be faster with an archive node, but a full node is still perfectly capable. All that said, state expiry is on the horizon... so who knows what exactly that will mean. It's probably not worth discussing this further.
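Given the earlier finding that bounded ranges answer quickly while the open-ended range times out, one workaround a client could use (a sketch, not what the dApp actually does) is splitting the query into bounded sub-ranges:

```python
def block_chunks(from_block, to_block, step=0x100000):
    """Split a large block range into bounded sub-ranges so that each
    eth_getLogs call stays within what a full node can answer quickly."""
    ranges = []
    start = from_block
    while start <= to_block:
        end = min(start + step - 1, to_block)
        ranges.append((hex(start), hex(end)))
        start = end + 1
    return ranges

# e.g. the range that timed out, split into 0x100000-block chunks:
print(block_chunks(0x600000, 0x800000))
```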

@hbarcelos
Contributor

hbarcelos commented Jun 30, 2021

Interesting... 🤔

I confirmed that my full node is perfectly capable of retrieving logs from a large number of blocks at once from any distance in the chain's history [...]

I am not familiar with how geth does this, but I'd guess that it then connects to some archive node to retrieve the raw event payloads. This is a potential bottleneck, as it could require pretty intensive networking.

[...] but that the request from the Kleros dApp is requesting to search every block since account creation.

The app is configured to go as far back as the block in which the current KlerosLiquid contract was deployed.


I just checked, and all the widgets in your screenshots include at least a filter for your account address.

When a query uses filters, the bloom filter kicks in to help find the information more efficiently. AFAIK, this step can be performed on a full node, because the bloom filter is stored directly in the block headers. Only after the query is matched against specific blocks does the full node need to query an archive node for the event payloads of those blocks.

So unless your account interacted with KlerosLiquid in a huge amount of blocks, the query shouldn't take too long.
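The bloom-filter matching step described above can be sketched generically. This is illustrative only: Ethereum's logs bloom is a 2048-bit filter whose bit positions are derived from the keccak-256 hash of each topic, while this simplified version uses sha256 since keccak is not in the Python standard library.

```python
import hashlib

BLOOM_BITS = 2048  # Ethereum block headers carry a 2048-bit logs bloom

def _bits(item):
    # Illustrative: derive 3 bit positions from a hash of the item.
    # (Ethereum uses keccak-256; sha256 stands in for it here.)
    h = hashlib.sha256(item).digest()
    return [int.from_bytes(h[i:i + 2], "big") % BLOOM_BITS for i in (0, 2, 4)]

def add(bloom, item):
    """Set the item's bits in the filter (bloom is an int bitfield)."""
    for b in _bits(item):
        bloom |= 1 << b
    return bloom

def might_contain(bloom, item):
    # False means "definitely not in this block"; True means "maybe" --
    # only then does the node need to fetch the actual log payloads.
    return all(bloom >> b & 1 for b in _bits(item))
```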

💡 Maybe this means that your node is being throttled by the archive node it connects to, and this is causing the request to time out.
