fix: and optimising fetching membership events #706
Conversation
I like the changes related to:
However, I think the current mechanism for loading events (using chunks) is more appropriate than the proposed binary split, which was one of the initial approaches I had considered for loading the events. While the binary split works for Infura, which uses an off-chain index for event logs, it might overwhelm a node you're running locally. The approach we currently follow in go-waku is the same one eth2 nodes like prysm use: we fetch block ranges in chunks of 5000 blocks (prysm goes even lower, at 1000 blocks per chunk), with some special handling for cases where we exceed the number of events.

While I agree that using a binary split might be faster, it causes more load on a local node. Also, syncing the chain in 5K-block chunks is still a fast operation that takes only tens of seconds, it is only executed once at the beginning, and since we already store the merkle tree in the RLN database, that full sync does not need to be repeated on later restarts.
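For reference, a minimal sketch of the chunked sync described above, assuming a simple fetch callback; the names (Event, fetchFn, loadOldEvents, handle) are illustrative stand-ins, not go-waku's actual API:

```go
package fetcher

import "context"

// Event stands in for a decoded membership-registration log.
type Event struct{ Block uint64 }

// fetchFn performs a single log query over the inclusive block range [from, to].
type fetchFn func(ctx context.Context, from, to uint64) ([]Event, error)

// loadOldEvents walks the chain from the contract's deployment block to the
// current head in fixed 5000-block windows, handing each batch of events to
// the caller (e.g. to insert commitments into the merkle tree).
func loadOldEvents(ctx context.Context, fetch fetchFn, handle func([]Event), start, head uint64) error {
	const chunkSize = 5000
	for from := start; from <= head; from += chunkSize {
		to := from + chunkSize - 1
		if to > head {
			to = head
		}
		events, err := fetch(ctx, from, to)
		if err != nil {
			return err
		}
		handle(events)
	}
	return nil
}
```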
In this PR we are also syncing in batches of 5k within loadOldevents; it is just that if the getEvents call for a 5k-block range fails with an over-the-limit error, we split the work into two jobs of half the range inside getEvents.
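A minimal sketch of this split-on-error behaviour, reusing the Event and fetchFn stand-ins from the sketch above; errTooMuchData is an assumed placeholder for the provider's over-the-limit error, and real error detection in go-waku may differ:

```go
import (
	"context"
	"errors"
)

// errTooMuchData models the "query returned too many results" error some
// RPC providers return for eth_getLogs over a wide block range.
var errTooMuchData = errors.New("too much data")

// getEvents fetches events for the inclusive range [from, to]; if the
// provider rejects the range as too large, it halves the range and
// retries each half recursively.
func getEvents(ctx context.Context, fetch fetchFn, from, to uint64) ([]Event, error) {
	events, err := fetch(ctx, from, to)
	if err == nil {
		return events, nil
	}
	if !errors.Is(err, errTooMuchData) || from >= to {
		return nil, err // unrelated error, or the range can no longer shrink
	}
	mid := from + (to-from)/2
	left, err := getEvents(ctx, fetch, from, mid)
	if err != nil {
		return nil, err
	}
	right, err := getEvents(ctx, fetch, mid+1, to)
	if err != nil {
		return nil, err
	}
	return append(left, right...), nil
}
```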
Yes, but do notice that subsequent requests will still try to load 5000 blocks and split the work into separate jobs, which during a period of heavy usage of the smart contract means the tooMuchData error will appear frequently for some ranges while loading the events, whereas the current approach will not use 5000 but the last valid chunk size + 10%, in an attempt to slowly get back to the original 5000. Do you think the additive factor + chunk size could be kept with the approach you propose?
The current approach already divides in half by multiplicativedivisor in case of error, so including additivefactor for increasing the batch size will result in the same sort of code. So I can revert that change. But I also think this error won't be frequent and will only happen during node startup. Plus, the waku nodes will probably have connections to different eth RPC providers. That's why we can retry with the same batch size instead of setting the batch size to a decreased value and increasing it over time. Again, this is subjective; I am comfortable with both approaches. Let me know your verdict after this discussion.
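For comparison, a sketch of keeping both knobs, halving via a multiplicative divisor on failure and recovering via an additive factor on success; the constant names echo the identifiers mentioned above, but the code is illustrative rather than go-waku's actual implementation:

```go
const (
	maxBatchSize          = 5000 // target chunk size in blocks
	multiplicativeDivisor = 2    // shrink factor on a too-much-data error
	additiveFactor        = 10   // percent growth per successful fetch
)

// nextBatchSize shrinks multiplicatively after a failed fetch and grows
// additively (by ~10%) after a successful one, capped at the 5000 target.
func nextBatchSize(current uint64, failed bool) uint64 {
	if failed {
		if next := current / multiplicativeDivisor; next > 0 {
			return next
		}
		return 1
	}
	next := current + current*additiveFactor/100
	if next > maxBatchSize {
		return maxBatchSize
	}
	return next
}
```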
I see. I'm not sure either which decision to take; I tend to prefer the original approach, but I agree with the argument you present!
Had a quick discussion with @harsh-98 regarding both approaches. I would suggest we go ahead with the approach suggested by @harsh-98.
@richard-ramos @chaitanyaprem This PR has become quite big, as I had to rebase on master to account for web3Config, and because of that I had to refactor a bit. Could you review this before there are more merge conflicts?
Left some observations.
Sorry for the rebase issues! But having said that, these will keep happening as the RLN feature is under heavy development. Even though I'm one who always breaks this rule: let's try to keep the PRs small!
Commenting so that I remember how to test a PR locally.
Do address comments.
Let's avoid combining such refactors with fixes and other changes; it makes review very hard.
LGTM
Do wait for an OK from @richard-ramos, as he is actively working on this.
rlnInstance and rootTrack were previously created while creating rlnRelay but were only assigned to the groupManager on Start of rlnRelay. This created an unnecessary dependency of passing them to the static and dynamic group managers. Web3Config uses the interface EthClientI for the client, so that we can pass a mock client for testing MembershipFetcher.
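As an illustration of why the interface helps testing, here is a hypothetical sketch of a minimal EthClientI and a mock implementation built on go-ethereum's types; the actual method set of EthClientI in go-waku may differ:

```go
package fetcher

import (
	"context"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/core/types"
)

// EthClientI is a guess at the subset of the Ethereum client the fetcher
// needs; go-ethereum's *ethclient.Client satisfies both methods.
type EthClientI interface {
	BlockNumber(ctx context.Context) (uint64, error)
	FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error)
}

// mockClient returns canned data so MembershipFetcher can be tested
// without a real RPC endpoint.
type mockClient struct {
	latestBlock uint64
	logs        []types.Log
}

func (m *mockClient) BlockNumber(ctx context.Context) (uint64, error) {
	return m.latestBlock, nil
}

func (m *mockClient) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]types.Log, error) {
	var out []types.Log
	from, to := q.FromBlock.Uint64(), q.ToBlock.Uint64()
	for _, l := range m.logs {
		if l.BlockNumber >= from && l.BlockNumber <= to {
			out = append(out, l)
		}
	}
	return out, nil
}
```

Because the fetcher depends only on the interface, tests can inject mockClient and exercise the chunking and split-on-error logic deterministically.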
Description
Bug fixes and other optimisations for the membership-fetching logic, found while checking #701.
Changes
Tests