fix(l1 follower, rollup verifier): blockhash mismatch #1192
base: jt/export-headers-toolkit
Conversation
Semgrep found 1 risk: affected versions of github.com/btcsuite/btcd are vulnerable to Always-Incorrect Control Flow Implementation. The btcd Bitcoin client did not correctly re-implement Bitcoin Core's "FindAndDelete()" functionality. This logic is consensus-critical: the difference in behavior from other Bitcoin clients can lead to btcd clients accepting an invalid Bitcoin block (or rejecting a valid one). Fix: upgrade this library to at least version 0.24.2-beta.rc1 at go-ethereum/rollup/missing_header_fields/export-headers-toolkit/go.mod:14. Reference(s): GHSA-27vh-h6mc-q6g8, CVE-2024-38365
…port reset of syncing pipeline
…ng header fields file
```diff
@@ -1880,15 +1880,17 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

	header.ParentHash = parentBlock.Hash()

	// sanitize base fee
	if header.BaseFee != nil && header.BaseFee.Cmp(common.Big0) == 0 {
```
What's the reason to reset the header's BaseFee?
It's due to differences in serialization and therefore the block hash: a `big.Int` equal to 0 is serialized, while a nil `big.Int` is not. During testing it became clear that this is needed so the block hashes end up matching.
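For illustration, here is a minimal standalone sketch of that effect, assuming the fork's `core/types` package behaves like upstream go-ethereum where `BaseFee` is an optional RLP field (the header values are made up):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/scroll-tech/go-ethereum/core/types"
)

func main() {
	// Two headers that are identical except for how the base fee is represented.
	withZero := &types.Header{Number: big.NewInt(1), BaseFee: big.NewInt(0)}
	withNil := &types.Header{Number: big.NewInt(1)} // BaseFee == nil

	// A non-nil big.Int (even 0) is encoded as an extra RLP field, while a nil
	// BaseFee is omitted entirely, so the two RLP payloads and thus the hashes differ.
	fmt.Println("base fee 0:  ", withZero.Hash())
	fmt.Println("base fee nil:", withNil.Hash())
}
```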
```go
	return nil, fmt.Errorf("cannot create missing header fields manager: %w", err)
}

eth.syncingPipeline, err = da_syncer.NewSyncingPipeline(context.Background(), eth.blockchain, chainConfig, eth.chainDb, l1Client, stack.Config().L1DeploymentBlock, config.DA, missingHeaderFieldsManager)
```
Why not init `missingHeaderFieldsManager` inside the SyncingPipeline?
There are too many dependencies (and implicit behavior), so imo it's more fitting to do it when setting up the node.
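To make that trade-off concrete, here is an illustrative sketch of the design choice. The names (`NewManager`, the constructor arguments, the trimmed-down `NewSyncingPipeline`) are hypothetical and not this PR's exact API; the point is that the manager's external dependencies are resolved once during node setup and only the finished manager is handed to the pipeline:

```go
package main

import "fmt"

// Manager bundles everything the missing-header-fields lookup needs.
type Manager struct {
	downloadURL      string
	filePath         string
	expectedChecksum [32]byte
}

// Hypothetical constructor, standing in for the one called during node setup.
func NewManager(downloadURL, filePath string, expectedChecksum [32]byte) (*Manager, error) {
	if downloadURL == "" {
		return nil, fmt.Errorf("missing download URL")
	}
	return &Manager{downloadURL: downloadURL, filePath: filePath, expectedChecksum: expectedChecksum}, nil
}

// Hypothetical pipeline that only sees the finished manager, keeping node-level
// configuration (flags, data dir, genesis checksum) out of its API.
type SyncingPipeline struct {
	manager *Manager
}

func NewSyncingPipeline(manager *Manager) *SyncingPipeline {
	return &SyncingPipeline{manager: manager}
}

func main() {
	m, err := NewManager("https://example.com/headers", "/data/534352.bin", [32]byte{})
	if err != nil {
		panic(err)
	}
	_ = NewSyncingPipeline(m)
}
```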
```diff
@@ -2,17 +2,19 @@ module github.com/scroll-tech/go-ethereum/export-headers-toolkit

go 1.22

replace github.com/scroll-tech/go-ethereum => ../../..
```
for test?
No, it's so this separate submodule uses the correct version of the l2geth code base. Since the export toolkit from #903 is only used separately to export block headers once, this should be ok.
```go
downloadURL, err := url.Parse(stack.Config().DAMissingHeaderFieldsBaseURL)
if err != nil {
	return nil, fmt.Errorf("invalid DAMissingHeaderFieldsBaseURL: %w", err)
}
downloadURL.Path = path.Join(downloadURL.Path, chainConfig.ChainID.String()+".bin")
```
How about:

```go
downloadUrl := stack.Config().DAMissingHeaderFieldsBaseURL + "/" + chainConfig.ChainID.String() + ".bin"
```
Since the flag is user input, I think the correct way is to sanitize it properly with `url.Parse` and subsequently use `path.Join` to join the segments, as this removes any duplicate `/` and so forth when concatenating the different URL segments.
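For illustration, a small self-contained example of that sanitization; the base URL and chain ID here are made up:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

func main() {
	base := "https://example.com/headers/" // hypothetical user input with a trailing slash

	u, err := url.Parse(base)
	if err != nil {
		panic(fmt.Errorf("invalid base URL: %w", err))
	}
	// path.Join normalizes separators, so no duplicate "/" ends up in the URL path.
	u.Path = path.Join(u.Path, "534352.bin") // e.g. a chain ID file name

	fmt.Println(u.String()) // https://example.com/headers/534352.bin
}
```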
```go
func (m *Manager) GetMissingHeaderFields(headerNum uint64) (difficulty uint64, stateRoot common.Hash, coinbase common.Address, nonce types.BlockNonce, extraData []byte, err error) {
	// lazy initialization: if the reader is not initialized this is the first time we read from the file
	if m.reader == nil {
```
How about using `sync.Once`?
The missing header fields manager is not suitable for concurrent usage, therefore I just used this simple nil check. It's been tested and verified to work on Sepolia and Mainnet.
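For reference, a simplified sketch (not the PR's actual code; the `reader` and field names are illustrative) contrasting the single-goroutine nil-check lazy init with a `sync.Once` variant that would only be needed if the manager were shared between goroutines:

```go
package main

import (
	"os"
	"sync"
)

type reader struct{ f *os.File }

type Manager struct {
	filePath string

	// single-goroutine lazy init, as used in this PR
	r *reader

	// what a concurrency-safe variant could look like
	once    sync.Once
	onceErr error
}

// get lazily opens the file; fine as long as only one goroutine ever calls it.
func (m *Manager) get() (*reader, error) {
	if m.r == nil {
		f, err := os.Open(m.filePath)
		if err != nil {
			return nil, err
		}
		m.r = &reader{f: f}
	}
	return m.r, nil
}

// getConcurrent performs the same init exactly once even under concurrent calls.
func (m *Manager) getConcurrent() (*reader, error) {
	m.once.Do(func() {
		f, err := os.Open(m.filePath)
		if err != nil {
			m.onceErr = err
			return
		}
		m.r = &reader{f: f}
	})
	return m.r, m.onceErr
}

func main() { _ = &Manager{filePath: "missing-headers.bin"} }
```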
Another question: who uploads the missing header fields file?
This is uploaded by us by utilizing the toolkit from #903. It's a one-time operation, and as you can see in this PR the hash of the data is added to the genesis.
```diff
@@ -898,6 +898,12 @@ var (
		Name:  "da.sync",
		Usage: "Enable node syncing from DA",
	}
	DAMissingHeaderFieldsBaseURLFlag = cli.StringFlag{
		Name: "da.missingheaderfields.baseurl",
```
nitpick, but something like "block data hint" sounds nicer.
```diff
@@ -1880,15 +1880,17 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

	header.ParentHash = parentBlock.Hash()

	// sanitize base fee
	if header.BaseFee != nil && header.BaseFee.Cmp(common.Big0) == 0 {
```
Will this break if we set the base fee to 0? Or is that already invalid?
```diff
@@ -1901,7 +1903,11 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

	// finalize and assemble block as fullBlock: replicates consensus.FinalizeAndAssemble()
	header.GasUsed = gasUsed
	header.Root = statedb.IntermediateRoot(bc.chainConfig.IsEIP158(header.Number))

	// state root might be set from partial header. If it is not set, we calculate it.
```
It is set for pre-Euclid zktrie roots, right?
```go
computedChecksum := h.Sum(nil)
if !bytes.Equal(computedChecksum, m.expectedChecksum[:]) {
	return fmt.Errorf("expectedChecksum mismatch, expected %x, got %x", m.expectedChecksum, computedChecksum)
}
```
If there's a partially downloaded file, or some random file, will we be stuck here? Should we remove the file and download again?
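One possible recovery behaviour, sketched under the assumption of a SHA-256 checksum and with hypothetical names (`verifyOrRemove` is not part of this PR): delete the file on a checksum mismatch so the next attempt re-downloads it.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// verifyOrRemove hashes the file at path and, if the checksum does not match,
// removes the (presumably partial or corrupt) file so the caller can re-download.
func verifyOrRemove(path string, expected [sha256.Size]byte) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}

	if !bytes.Equal(h.Sum(nil), expected[:]) {
		if rmErr := os.Remove(path); rmErr != nil {
			return fmt.Errorf("checksum mismatch and failed to remove %s: %w", path, rmErr)
		}
		return fmt.Errorf("checksum mismatch, removed %s for re-download", path)
	}
	return nil
}

func main() {}
```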
```go
}

func (m *Manager) Close() error {
```
```go
	return fmt.Errorf("failed to create download request: %v", err)
}

resp, err := http.DefaultClient.Do(req)
```
I wonder if we can run into the previous "download hangs" issue. It would be nice to time out if we don't receive any data (but if we do, then `timeoutDownload` is suitable).
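A hedged sketch of one way to guard against a stalled download: reset an idle timer on every chunk read and cancel the request only when no data arrives for `idleTimeout`. Names and constants here are illustrative, not this PR's code:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

const idleTimeout = 30 * time.Second // hypothetical value

func download(ctx context.Context, url, dst string) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// Watchdog: cancel the request if no chunk arrives within idleTimeout.
	timer := time.AfterFunc(idleTimeout, cancel)
	defer timer.Stop()

	buf := make([]byte, 1<<20)
	for {
		n, rerr := resp.Body.Read(buf)
		if n > 0 {
			timer.Reset(idleTimeout) // data arrived, push the deadline back
			if _, werr := out.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if rerr == io.EOF {
			return nil
		}
		if rerr != nil {
			return fmt.Errorf("download aborted: %w", rerr)
		}
	}
}

func main() {}
```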
```go
}

func (m *Manager) GetMissingHeaderFields(headerNum uint64) (difficulty uint64, stateRoot common.Hash, coinbase common.Address, nonce types.BlockNonce, extraData []byte, err error) {
	// lazy initialization: if the reader is not initialized this is the first time we read from the file
```
So lazy init means that on devnets, new networks, etc. this is never called, right?
1. Purpose or design rationale of this PR
This PR fixes the problem of mismatching block hashes due to the header fields `difficulty` and `extraData` missing in DA. It should be reviewed in conjunction with #903, which provides a way to prepare this missing data and describes the format in more detail. Specifically, this PR implements a missing header fields manager (a usage sketch follows the test results below) that:
Tested on Sepolia:
After a restart the header file is not downloaded again, and the L1 syncing process can just continue. As can be seen, the generated block hash matches.
Synced to the latest finalized block:

Tested on mainnet:
After a restart the header file is not downloaded again, and the L1 syncing process can just continue. As can be seen, the generated block hash matches.
Synced to the latest finalized block:

2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?
4. Breaking change label
Does this PR have the `breaking-change` label?