fix(l1 follower, rollup verifier): blockhash mismatch #1192


Open · wants to merge 22 commits into base: jt/export-headers-toolkit

Conversation

@jonastheis commented May 29, 2025

1. Purpose or design rationale of this PR

This PR fixes mismatching block hashes caused by the header fields difficulty and extraData being absent from DA. It should be reviewed in conjunction with #903, which provides a way to prepare this missing data and describes the format in more detail.

Specifically, this PR implements a missing header fields manager that:

  • lazily downloads the missing header data if it is not already present locally
  • verifies the SHA256 checksum of the missing header fields data against a hardcoded checksum that is part of the chain config
  • provides functionality to read the missing header fields when syncing from DA (see the sketch after this list)
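
For orientation, here is a sketch of the manager's surface as used by the syncing pipeline. The two method signatures are taken from the diff below; the interface grouping and its name are illustrative, not necessarily how the code declares them.

```go
package missing_header_fields

import (
	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/core/types"
)

// MissingHeaderFieldsProvider (illustrative name) groups what the syncing
// pipeline needs from the manager: reading the fields absent from DA for a
// given header, plus cleanup on shutdown.
type MissingHeaderFieldsProvider interface {
	// Lazily downloads and checksum-verifies the header-fields file on first
	// use, then returns the fields for the given header number.
	GetMissingHeaderFields(headerNum uint64) (difficulty uint64, stateRoot common.Hash, coinbase common.Address, nonce types.BlockNonce, extraData []byte, err error)
	Close() error
}
```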

Tested on Sepolia:

make nccc_geth && ./build/bin/geth --scroll-sepolia --scroll-mpt \
--datadir "tmp/sepolia-test" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.sync=true \
--da.blob.blobscan "https://api.sepolia.blobscan.com/blobs/" \
--da.blob.beaconnode "<beacon node>" \
--l1.endpoint "<L1 endpoint>" \
--verbosity 3

INFO [06-02|10:38:58.797] Downloading missing header fields. This might take a while... url=https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com/534351.bin
INFO [06-02|10:38:59.560] Downloading missing header fields... 0 MB / 801 MB 
INFO [06-02|10:39:04.586] Downloading missing header fields... 7 MB / 801 MB
[...]
INFO [06-02|10:42:25.322] L1 sync progress                         L1 processed=4,041,836 L1 finalized=8,459,461 progress(%)=47.779 L2 height=4370 L2 hash=0x89432d03f5f784437327a599d16a32bbc787d344e8bd5f5963c6f21c38479e0a

After a restart, the header file is not downloaded again and the L1 syncing process simply continues. As can be seen below, the generated block hash matches.

$ cast block 4370 --rpc-url=https://sepolia-rpc.scroll.io


baseFeePerGas
difficulty           2
extraData            0xd883040321846765746888676f312e31392e31856c696e757800000000000000d83352a5203d2199e97ef277d284adeb6a0304925e8ccdc2abe09c402b8d6d1a3433bbc09a57ffd77be1dc366341a889e01918ad49c1cb54f4ed56c465c3183600
gasLimit             8000000
gasUsed              55018
hash                 0x89432d03f5f784437327a599d16a32bbc787d344e8bd5f5963c6f21c38479e0a
logsBloom            0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000800000000000000000000200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000200000000000000000200000000000000000000000000000000000000000000000000000000000000000000000000000000
miner                0x0000000000000000000000000000000000000000
mixHash              0x0000000000000000000000000000000000000000000000000000000000000000
nonce                0x0000000000000000
number               4370
parentHash           0x8b0b592d47119997ec6f6d5db399f712084ae8a3ab107122f66053c32d334db4
parentBeaconRoot
transactionsRoot     0x59e4d1a6e7e106a488aaef32bdac3ba8bbf0fe9e86f696000edfb777bf112c17
receiptsRoot         0x96551d06a025e4f35423661be2b1272341f87b5e362f33c39758fd4979fc3a7a
sha3Uncles           0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347
size                 991
stateRoot            0x18dcddc6754f95f0209089b0e139858cf27e4420b5e7077a4893e1775486762e
timestamp            1691652216 (Thu, 10 Aug 2023 07:23:36 +0000)
withdrawalsRoot
totalDifficulty      8741
blobGasUsed
excessBlobGas
requestsHash
transactions:        [
	0x3ad69e516079396cbd3240d0b229c06c881ca63b69333eafc97792a6bfb747c5
]

Synced to the latest finalized block:
[screenshot]

Tested on mainnet:

make nccc_geth && ./build/bin/geth --scroll --scroll-mpt \
--datadir "tmp/mainnet-test" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.sync=true \
--da.blob.beaconnode "<beacon node>" \
--l1.endpoint "<L1 RPC>" \
--verbosity 3

INFO [06-02|12:40:19.157] Downloading missing header fields. This might take a while... url=https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com/534352.bin
INFO [06-02|12:40:20.179] Downloading missing header fields... 0 MB / 1407 MB 
INFO [06-02|12:40:25.206] Downloading missing header fields... 16 MB / 1407 MB 
[...]
INFO [06-02|10:42:25.322] L1 sync progress                         L1 processed=4,041,836 L1 finalized=8,459,461 progress(%)=47.779 L2 height=141498 L2 hash=0x8828523c2aca67fc54fc52ea45fd3b10743001e7cbc5d0e704cfb3d53ab7b9c7

After a restart, the header file is not downloaded again and the L1 syncing process simply continues. As can be seen below, the generated block hash matches.

$ cast block 141498 --rpc-url=https://rpc.scroll.io


baseFeePerGas
difficulty           2
extraData            0xd883050001846765746888676f312e31392e31856c696e757800000000000000760fcd08267675948a9bd19199347296df16cdf0bb82cd7f36795629ca36ce750cfcd76ee934237b28fdb541e34bf09e41f01447e8e4b9f33601c1195499f59301
gasLimit             10000000
gasUsed              159943
hash                 0x8828523c2aca67fc54fc52ea45fd3b10743001e7cbc5d0e704cfb3d53ab7b9c7
logsBloom            0x00000000000000000000000000000000000000000000000000000000000002000000000000000000000040000000000000000000000000000000000000000000000000000000000000000008000008000000000000000009000000000000000000000000020008000000000000000800000000000000000000000010000420000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000008000000000000000000000000000000000000000020000000000000002000000000000000000000020000000000000000000000000100
miner                0x0000000000000000000000000000000000000000
mixHash              0x0000000000000000000000000000000000000000000000000000000000000000
nonce                0x0000000000000000
number               141498
parentHash           0xb57533a8ef8414506bd8af48884ab7283a653fe1485ea84156921237b0d6691e
parentBeaconRoot
transactionsRoot     0x6c533608b1915b1cdee26aab0f74acd509d37ad99c950fb61e50154751e02f4e
receiptsRoot         0xf39d1001ac70825f2ff1c341efd025b7fbc98812723830f392779f7758f70b8b
sha3Uncles           0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347
size                 722
stateRoot            0x14dcf7f26a6d706785ede826ae68953b0c42c2286ececf260bd1ac0534c0a181
timestamp            1697749500 (Thu, 19 Oct 2023 21:05:00 +0000)
withdrawalsRoot
totalDifficulty      282997
blobGasUsed
excessBlobGas
requestsHash
transactions:        [
	0x1f22baba6174ded8c15abbb20ac9ad21088b26981d789d8783bd1caf1472537d
]

Synced to the latest finalized block:
[screenshot]

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

3. Deployment tag versioning

Has the version in params/version.go been updated?

  • This PR doesn't involve a new deployment, git tag, docker image tag, and it doesn't affect traces
  • Yes

4. Breaking change label

Does this PR have the breaking-change label?

  • This PR is not a breaking change
  • Yes

coderabbitai bot commented May 29, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@jonastheis changed the base branch from develop to jt/export-headers-toolkit May 29, 2025 07:40

semgrep-app bot commented May 29, 2025

Semgrep found 1 ssc-089edcd4-740d-452f-b7f4-23e72908be35 finding:

  • rollup/missing_header_fields/export-headers-toolkit/go.mod

Risk: Affected versions of github.com/btcsuite/btcd are vulnerable to Always-Incorrect Control Flow Implementation. The btcd Bitcoin client did not correctly re-implement Bitcoin Core's "FindAndDelete()" functionality. This logic is consensus-critical: the difference in behavior with the other Bitcoin clients can lead to btcd clients accepting an invalid Bitcoin block (or rejecting a valid one).

Fix: Upgrade this library to at least version 0.24.2-beta.rc1 at go-ethereum/rollup/missing_header_fields/export-headers-toolkit/go.mod:14.

Reference(s): GHSA-27vh-h6mc-q6g8, CVE-2024-38365

@jonastheis mentioned this pull request May 29, 2025
@jonastheis requested review from Thegaram and colinlyguo June 2, 2025 10:58
@jonastheis requested a review from colinlyguo June 18, 2025 10:31
@@ -1880,15 +1880,17 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

header.ParentHash = parentBlock.Hash()

// sanitize base fee
if header.BaseFee != nil && header.BaseFee.Cmp(common.Big0) == 0 {
Member:

What's the reason to reset the header's BaseFee?

Author:

It's due to differences in serialization, and therefore in the block hash: a big.Int equal to 0 is serialized, while a nil big.Int is not. During testing it became clear that this sanitization is needed for the block hashes to end up matching.
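
To make the serialization difference concrete, a standalone sketch (the struct is a stand-in for types.Header, assuming the usual `rlp:"optional"` tag on BaseFee):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/scroll-tech/go-ethereum/crypto"
	"github.com/scroll-tech/go-ethereum/rlp"
)

// hdr mimics a header with an optional trailing field: a nil optional field
// is omitted from the RLP encoding entirely, while a zero big.Int is still
// encoded as an (empty) list element.
type hdr struct {
	Number  *big.Int
	BaseFee *big.Int `rlp:"optional"`
}

func main() {
	zero, _ := rlp.EncodeToBytes(&hdr{Number: big.NewInt(1), BaseFee: new(big.Int)})
	none, _ := rlp.EncodeToBytes(&hdr{Number: big.NewInt(1)})

	// Different encodings, hence different hashes for otherwise equal headers.
	fmt.Printf("BaseFee=0:   %x -> %x\n", zero, crypto.Keccak256(zero))
	fmt.Printf("BaseFee=nil: %x -> %x\n", none, crypto.Keccak256(none))
}
```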

return nil, fmt.Errorf("cannot create missing header fields manager: %w", err)
}

eth.syncingPipeline, err = da_syncer.NewSyncingPipeline(context.Background(), eth.blockchain, chainConfig, eth.chainDb, l1Client, stack.Config().L1DeploymentBlock, config.DA, missingHeaderFieldsManager)
Member:

Why not init missingHeaderFieldsManager inside the SyncingPipeline?

Author:

There are too many dependencies (and too much implicit behavior), so IMO it's more fitting to do it when setting up the node.

@@ -2,17 +2,19 @@ module github.com/scroll-tech/go-ethereum/export-headers-toolkit

go 1.22

replace github.com/scroll-tech/go-ethereum => ../../..
Member:

Is this for tests?

Author (@jonastheis, Jul 8, 2025):

No, it's so this separate submodule uses the correct version of the l2geth code base. Since the export toolkit from #903 is only used separately, to export the block headers once, this should be fine.

Comment on lines +350 to +354
downloadURL, err := url.Parse(stack.Config().DAMissingHeaderFieldsBaseURL)
if err != nil {
return nil, fmt.Errorf("invalid DAMissingHeaderFieldsBaseURL: %w", err)
}
downloadURL.Path = path.Join(downloadURL.Path, chainConfig.ChainID.String()+".bin")
Member:

How about:

downloadUrl := stack.Config().DAMissingHeaderFieldsBaseURL + "/" + chainConfig.ChainID.String() + ".bin"

Author:

Since the flag is user input, I think the correct way is to sanitize it properly with url.Parse and then use path.Join, as this removes any duplicate / and so forth when concatenating the different URL segments.
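
A quick standalone illustration of that normalization (the messy base URL is hypothetical):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

func main() {
	// A user-supplied flag value with a stray trailing slash.
	base := "https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com//"

	u, err := url.Parse(base)
	if err != nil {
		panic(fmt.Errorf("invalid base URL: %w", err))
	}
	// path.Join cleans the joined path, collapsing duplicate separators.
	u.Path = path.Join(u.Path, "534352.bin")

	// Prints: https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com/534352.bin
	fmt.Println(u.String())
}
```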


func (m *Manager) GetMissingHeaderFields(headerNum uint64) (difficulty uint64, stateRoot common.Hash, coinbase common.Address, nonce types.BlockNonce, extraData []byte, err error) {
// lazy initialization: if the reader is not initialized this is the first time we read from the file
if m.reader == nil {
Member:

How about using sync.Once?

Author:

The missing header fields manager is not suitable for concurrent use, which is why I just used this simple nil check. It's been tested and verified to work on Sepolia and mainnet.
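
For reference, the sync.Once variant would look roughly like the sketch below; it only buys anything under concurrent access, which the pipeline doesn't do. All names here are illustrative, not the PR's.

```go
package missing_header_fields

import (
	"io"
	"sync"
)

// lazyManager (illustrative) performs one-time, goroutine-safe initialization
// of the file-backed reader; the PR's plain nil check is equivalent in the
// single-goroutine setting.
type lazyManager struct {
	once    sync.Once
	initErr error
	reader  io.ReadSeekCloser // stand-in for the file-backed reader
}

func (m *lazyManager) ensureReader(open func() (io.ReadSeekCloser, error)) error {
	m.once.Do(func() {
		m.reader, m.initErr = open()
	})
	return m.initErr
}
```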

@georgehao (Member):

Another question: who uploads https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com/xxx.bin? The blob_uploader?

@jonastheis (Author):

> Another question: who uploads https://scroll-block-missing-metadata.s3.us-west-2.amazonaws.com/xxx.bin? The blob_uploader?

This is uploaded by us using the toolkit from #903. It's a one-time operation, and as you can see in this PR, the hash of the data is added to the genesis.

@@ -898,6 +898,12 @@ var (
Name: "da.sync",
Usage: "Enable node syncing from DA",
}
DAMissingHeaderFieldsBaseURLFlag = cli.StringFlag{
Name: "da.missingheaderfields.baseurl",

nitpick, but something like "block data hint" sounds nicer.

@@ -1880,15 +1880,17 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

header.ParentHash = parentBlock.Hash()

// sanitize base fee
if header.BaseFee != nil && header.BaseFee.Cmp(common.Big0) == 0 {

Will this break if we set the base fee to 0? Or is that already invalid?

@@ -1901,7 +1903,11 @@ func (bc *BlockChain) BuildAndWriteBlock(parentBlock *types.Block, header *types

// finalize and assemble block as fullBlock: replicates consensus.FinalizeAndAssemble()
header.GasUsed = gasUsed
header.Root = statedb.IntermediateRoot(bc.chainConfig.IsEIP158(header.Number))

// state root might be set from partial header. If it is not set, we calculate it.

It is set for pre-Euclid zktrie roots, right?

Comment on lines +73 to +76
computedChecksum := h.Sum(nil)
if !bytes.Equal(computedChecksum, m.expectedChecksum[:]) {
return fmt.Errorf("expectedChecksum mismatch, expected %x, got %x", m.expectedChecksum, computedChecksum)
}

If there's a partially downloaded file, or some random file, will we be stuck here? Should we remove the file and download again?
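
One possible answer, sketched below: treat a checksum mismatch as a corrupt or partial file, remove it, and let the next attempt re-download from scratch. Function and variable names are hypothetical, not the PR's API.

```go
package missing_header_fields

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// verifyOrReset (hypothetical) hashes the file and, on mismatch, deletes it
// so a subsequent run re-downloads instead of failing permanently.
func verifyOrReset(filePath string, expected [sha256.Size]byte) error {
	f, err := os.Open(filePath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if !bytes.Equal(h.Sum(nil), expected[:]) {
		// Remove the corrupt/partial file so a retry starts from scratch.
		if rmErr := os.Remove(filePath); rmErr != nil {
			return fmt.Errorf("checksum mismatch and cleanup failed: %w", rmErr)
		}
		return fmt.Errorf("checksum mismatch, removed %s for re-download", filePath)
	}
	return nil
}
```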

Comment on lines +86 to +87
}
func (m *Manager) Close() error {

Suggested change:

}

func (m *Manager) Close() error {
return fmt.Errorf("failed to create download request: %v", err)
}

resp, err := http.DefaultClient.Do(req)

I wonder if we can run into the previous "download hangs" issue. It would be nice to time out if we don't receive any data (but if we do, then `timeoutDownload` is suitable).
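
One way to get that behavior, sketched below: cancel the request context whenever no bytes arrive within a stall window, pushing the window forward on every read. The helper name, buffer size, and wiring are assumptions, not what the PR does.

```go
package missing_header_fields

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// downloadWithStallTimeout (hypothetical) aborts if no data is received for
// `stall`, while allowing arbitrarily long downloads that keep making progress.
func downloadWithStallTimeout(ctx context.Context, url string, stall time.Duration, dst io.Writer) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return fmt.Errorf("failed to create download request: %w", err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Cancel the request if no chunk arrives within the stall window; the
	// timer is pushed forward after every successful read.
	timer := time.AfterFunc(stall, cancel)
	defer timer.Stop()

	buf := make([]byte, 1<<20)
	for {
		n, rerr := resp.Body.Read(buf)
		if n > 0 {
			timer.Reset(stall)
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if rerr == io.EOF {
			return nil
		}
		if rerr != nil {
			return rerr
		}
	}
}
```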

}

func (m *Manager) GetMissingHeaderFields(headerNum uint64) (difficulty uint64, stateRoot common.Hash, coinbase common.Address, nonce types.BlockNonce, extraData []byte, err error) {
// lazy initialization: if the reader is not initialized this is the first time we read from the file
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

So lazy init means that on devnets, new networks, etc. this is never called, right?
