feat: Kafka message bus for sending webhook events #12
base: main
Conversation
Walkthrough

This change introduces Kafka-based event publishing to the transaction execution infrastructure. It adds Kafka configuration, producer initialization, and integration points for emitting transaction lifecycle events (sent and confirmed) within the execution and queue management logic. Supporting types, configuration, and trait abstractions are implemented, with new documentation describing the architecture and development practices.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Main
    participant KafkaProducer
    participant QueueManager
    participant SendHandler
    participant ConfirmHandler
    Main->>KafkaProducer: create_kafka_producer(config, env)
    Main->>QueueManager: new(..., kafka_producer)
    QueueManager->>SendHandler: inject event_sender
    QueueManager->>ConfirmHandler: inject event_sender
    SendHandler->>KafkaProducer: send_transaction_sent(event)
    ConfirmHandler->>KafkaProducer: send_transaction_confirmed(event)
    Main->>KafkaProducer: flush() on shutdown
```

```mermaid
sequenceDiagram
    participant User
    participant API/Server
    participant QueueManager
    participant SendHandler
    participant ConfirmHandler
    participant KafkaProducer
    User->>API/Server: Submit transaction
    API/Server->>QueueManager: Enqueue send job
    QueueManager->>SendHandler: Process send job
    SendHandler->>KafkaProducer: send_transaction_sent(event)
    SendHandler->>QueueManager: Enqueue confirmation job
    QueueManager->>ConfirmHandler: Process confirmation job
    ConfirmHandler->>KafkaProducer: send_transaction_confirmed(event)
```
Actionable comments posted: 3
🧹 Nitpick comments (8)
executors/Cargo.toml (1)

25-25: Prefer workspace-level dependency to avoid version skew

`async-trait` is now declared here and again in `server/Cargo.toml`. Adding it to the root `[workspace.dependencies]` (or via `workspace = true`) keeps every crate on the same minor/patch version and prevents duplicate compilation units.

server/Cargo.toml (2)
16-16: Path dependency duplicates an existing crate

`engine-aa-types` is already a workspace member. Consider replacing the explicit `path = "../aa-types"` with `workspace = true` to inherit version/features from the workspace manifest and avoid accidental divergence.
45-45: Duplicate `async-trait` declarations across crates

`async-trait` is now declared in both `executors` and `server`. Consolidate this (workspace-level) to keep versions in lock-step and cut compile time.

server/src/execution_router/mod.rs (1)
38-52: New `kafka_producer` field is presently unused inside this module

`SharedKafkaProducer` is added to the struct but never referenced in any method here. That is harmless at compile time, yet:

- Consumers must now initialise this field everywhere `ExecutionRouter` is constructed, or the code will not compile.
- An unused field can mislead maintainers.

If intentional (the value is only passed through to worker queues), consider at minimum a doc-comment explaining its purpose, or add a simple accessor/helper that demonstrates use and silences dead-code lints.

```diff
+    /// Shared Kafka producer used by job handlers to emit lifecycle events.
+    /// Currently only forwarded; not consumed directly inside the router.
     pub kafka_producer: SharedKafkaProducer,
```

server/src/kafka/mod.rs (4)
33-53: Consider removing redundant message types

The `TransactionSentMessage` and `TransactionConfirmedMessage` structs duplicate the event types from `executors/src/kafka_integration.rs`. This creates unnecessary conversion overhead in the trait implementation (lines 359-379).

Consider using the event types from the executors crate directly:

```diff
-pub struct TransactionSentMessage {
-    pub transaction_id: String,
-    pub chain_id: u64,
-    pub account_address: Address,
-    pub user_op_hash: Bytes,
-    pub nonce: U256,
-    pub deployment_lock_acquired: bool,
-    pub timestamp: u64,
-}
-
-pub struct TransactionConfirmedMessage {
-    pub transaction_id: String,
-    pub user_op_hash: Bytes,
-    pub receipt: UserOperationReceipt,
-    pub deployment_lock_released: bool,
-    pub timestamp: u64,
-}
+// Use TransactionSentEvent and TransactionConfirmedEvent from the executors crate
```
55-79: Consider using an enum instead of a struct with many optional fields

The `TransactionWebhookEvent` struct with numerous optional fields makes it unclear which fields are present for different event types. This could lead to runtime errors if the wrong fields are accessed.

Consider using an enum to make the event types explicit:

```diff
-pub struct TransactionWebhookEvent {
-    // Core transaction data (always present)
-    pub transaction_id: String,
-    pub chain_id: u64,
-    pub account_address: Address,
-    pub user_op_hash: Bytes,
-    pub nonce: U256,
-    pub timestamp: u64,
-
-    // Send-specific fields (null for confirm events)
-    pub user_operation_sent: Option<VersionedUserOp>,
-    pub deployment_lock_acquired: Option<bool>,
-
-    // Confirm-specific fields (null for send events)
-    pub receipt: Option<UserOperationReceipt>,
-    pub deployment_lock_released: Option<bool>,
-
-    // Transaction status info (null for send events)
-    pub success: Option<bool>,
-    pub actual_gas_cost: Option<U256>,
-    pub actual_gas_used: Option<U256>,
-}
+#[derive(Debug, Clone, Serialize)]
+#[serde(tag = "type", rename_all = "camelCase")]
+pub enum TransactionWebhookEvent {
+    #[serde(rename = "sent")]
+    Sent {
+        transaction_id: String,
+        chain_id: u64,
+        account_address: Address,
+        user_op_hash: Bytes,
+        nonce: U256,
+        timestamp: u64,
+        user_operation_sent: VersionedUserOp,
+        deployment_lock_acquired: bool,
+    },
+    #[serde(rename = "confirmed")]
+    Confirmed {
+        transaction_id: String,
+        chain_id: u64,
+        account_address: Address,
+        user_op_hash: Bytes,
+        nonce: U256,
+        timestamp: u64,
+        receipt: UserOperationReceipt,
+        deployment_lock_released: bool,
+        success: bool,
+        actual_gas_cost: U256,
+        actual_gas_used: U256,
+    },
+}
```
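The payoff of the enum shape can be seen in a standalone sketch. This is not the PR's actual code: `String` stands in for the `Address`/`Bytes`/`U256` chain types, only a few fields are kept, and the accessor names are illustrative. The point is that variant-specific fields are only reachable when the variant is matched, so there are no `Option`s to unwrap incorrectly.

```rust
// Simplified stand-in for the suggested enum (hypothetical field subset).
#[derive(Debug, Clone)]
pub enum TransactionWebhookEvent {
    Sent {
        transaction_id: String,
        user_op_hash: String,
        deployment_lock_acquired: bool,
    },
    Confirmed {
        transaction_id: String,
        user_op_hash: String,
        success: bool,
    },
}

impl TransactionWebhookEvent {
    /// Fields common to both variants are reachable without unwrapping Options.
    pub fn transaction_id(&self) -> &str {
        match self {
            TransactionWebhookEvent::Sent { transaction_id, .. }
            | TransactionWebhookEvent::Confirmed { transaction_id, .. } => transaction_id,
        }
    }

    /// The event kind is carried by the variant, not by which Options are Some.
    pub fn kind(&self) -> &'static str {
        match self {
            TransactionWebhookEvent::Sent { .. } => "sent",
            TransactionWebhookEvent::Confirmed { .. } => "confirmed",
        }
    }
}

fn main() {
    let event = TransactionWebhookEvent::Sent {
        transaction_id: "tx-1".to_string(),
        user_op_hash: "0xabc".to_string(),
        deployment_lock_acquired: true,
    };
    println!("{} event for {}", event.kind(), event.transaction_id());
}
```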
341-348: Remove redundant standalone function

This function simply delegates to the method on `KafkaProducer` without adding any value.

```diff
-/// Standalone helper function to send webhook events
-pub async fn send_webhook_event(
-    kafka_producer: &KafkaProducer,
-    topic: &str,
-    event: TransactionWebhookEvent,
-) {
-    kafka_producer.send_webhook_event(topic, event).await;
-}
```
356-381: Simplify trait implementation by removing intermediate conversions

The current implementation creates intermediate message types that duplicate the input event data. If the redundant message types are removed as suggested earlier, this implementation can directly use the input events.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)

- CLAUDE.md (1 hunks)
- executors/Cargo.toml (1 hunks)
- executors/src/external_bundler/confirm.rs (5 hunks)
- executors/src/external_bundler/send.rs (3 hunks)
- executors/src/kafka_integration.rs (1 hunks)
- executors/src/lib.rs (1 hunks)
- server/Cargo.toml (2 hunks)
- server/src/config.rs (2 hunks)
- server/src/execution_router/mod.rs (2 hunks)
- server/src/kafka/mod.rs (1 hunks)
- server/src/lib.rs (1 hunks)
- server/src/main.rs (5 hunks)
- server/src/queue/manager.rs (4 hunks)
🔇 Additional comments (14)

executors/src/lib.rs (1)

4-4: Module exposure looks good

`pub mod kafka_integration;` cleanly surfaces the new functionality; no further remarks.

server/src/lib.rs (1)

5-5: Kafka module export acknowledged

Exposing `pub mod kafka;` is straightforward and introduces no immediate concerns.

server/Cargo.toml (1)
44-45: Verify rdkafka feature set & TLS linkage

`rdkafka = { version = "0.36", features = ["ssl"] }` pulls OpenSSL-backed TLS, which requires system libraries and increases binary size. If the deployment environment only needs plaintext or SASL/PLAIN, enabling `"tokio"` or `"gssapi"` might be leaner. Double-check:

- The runtime TLS requirements on your target images.
- That the chosen feature set matches the brokers' auth expectations.

Please confirm or adjust the feature set.
executors/src/external_bundler/send.rs (1)

30-30: LGTM! Clean Kafka integration setup.

The imports and struct field addition are well-structured and follow good practices for integrating event sending capabilities.

Also applies to: 38-38, 205-205
server/src/main.rs (3)

19-24: Well-implemented environment detection and Kafka producer initialization.

The environment detection provides sensible defaults, and the Kafka producer creation includes proper error handling and logging. Good defensive programming practices.

Also applies to: 62-64

72-72: Clean dependency injection pattern.

The Kafka producer is properly integrated into the application components following good dependency injection practices.

Also applies to: 99-99

139-145: Proper resource cleanup during shutdown.

The Kafka producer flush during shutdown ensures proper message delivery and resource cleanup with appropriate error handling and logging.
executors/src/external_bundler/confirm.rs (1)

18-18: LGTM! Consistent Kafka integration pattern.

The imports, struct field addition, and constructor changes follow the same clean pattern as the send handler. Good consistency across the codebase.

Also applies to: 26-26, 109-109, 121-121, 130-130
server/src/queue/manager.rs (1)

57-57: Well-structured Kafka integration in QueueManager.

The dependency injection pattern is clean, and the event sender creation and distribution to handlers follows good architectural practices.

Also applies to: 152-153, 161-161, 198-198
CLAUDE.md (1)

1-87: Well-structured documentation for the codebase.

This documentation provides comprehensive guidance covering development setup, architecture, configuration, and key features. It effectively contextualizes the Kafka integration changes within the broader system architecture.
executors/src/kafka_integration.rs (1)
1-49
: Clean abstraction for transaction event sending.The trait design with async methods, well-structured event types, and no-op implementation provides a solid foundation for pluggable event sending. The use of
Arc<dyn TransactionEventSender>
for shared ownership is appropriate for this use case.server/src/kafka/mod.rs (3)
15-31: Well-designed topic enum with proper serialization.

The enum correctly handles topic naming with appropriate serialization attributes and provides efficient string conversion.
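The topic-enum pattern praised here can be sketched in isolation. The variant names and topic strings below are illustrative assumptions, not the PR's actual definitions; the point is the allocation-free `&'static str` conversion.

```rust
// Hypothetical sketch of a webhook-topic enum with cheap string conversion.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum WebhookTopic {
    TransactionSent,
    TransactionConfirmed,
}

impl WebhookTopic {
    /// Returns the Kafka topic name without allocating.
    pub fn as_str(&self) -> &'static str {
        match self {
            WebhookTopic::TransactionSent => "transaction.sent",
            WebhookTopic::TransactionConfirmed => "transaction.confirmed",
        }
    }
}

fn main() {
    println!("{}", WebhookTopic::TransactionSent.as_str());
}
```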
81-266: Robust Kafka producer implementation with good error handling.

The implementation properly handles configuration, provides a disabled mode for when Kafka is not configured, includes comprehensive error handling and logging, and supports graceful shutdown via flush.
383-408: Well-designed factory functions for flexible initialization.

The factory functions properly handle optional Kafka configuration and provide appropriate abstractions for event sending.
```rust
// Send Kafka transaction sent event
let sent_event = TransactionSentEvent {
    transaction_id: job.job.data.transaction_id.clone(),
    chain_id: job.job.data.chain_id,
    account_address: success_data.result.account_address,
    user_op_hash: success_data.result.user_op_hash.clone(),
    nonce: success_data.result.nonce,
    deployment_lock_acquired: success_data.result.deployment_lock_acquired,
    timestamp: SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs(),
};

self.event_sender.send_transaction_sent(sent_event).await;
```
Fix unsafe timestamp calculation and add error handling.

The current implementation has several issues:

- Potential panic: `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` can panic if the system time is before the UNIX epoch.
- Missing error handling: Kafka event sending errors are silently ignored.

Apply this diff to fix the unsafe timestamp calculation:

```diff
-            timestamp: SystemTime::now()
-                .duration_since(UNIX_EPOCH)
-                .unwrap()
-                .as_secs(),
+            timestamp: SystemTime::now()
+                .duration_since(UNIX_EPOCH)
+                .unwrap_or_else(|_| Duration::from_secs(0))
+                .as_secs(),
```

Consider adding error handling for the event sending:

```diff
-        self.event_sender.send_transaction_sent(sent_event).await;
+        if let Err(e) = self.event_sender.send_transaction_sent(sent_event).await {
+            tracing::warn!(
+                transaction_id = %job.job.data.transaction_id,
+                error = %e,
+                "Failed to send transaction sent event to Kafka"
+            );
+        }
```
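The timestamp fix suggested above can also be factored into a small helper. This is a standalone sketch using only the standard library, not the PR's actual code: a clock set before the epoch yields 0 instead of a panic.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Seconds since the UNIX epoch; falls back to 0 if the system clock is set
/// before the epoch instead of panicking.
fn unix_timestamp_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_else(|_| Duration::from_secs(0))
        .as_secs()
}

fn main() {
    println!("timestamp: {}", unix_timestamp_secs());
}
```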
🤖 Prompt for AI Agents

In executors/src/external_bundler/send.rs around lines 540 to 555, the timestamp calculation using `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` can panic if the system time is before the UNIX epoch, and event sending errors are ignored. Replace `unwrap()` with proper error handling to safely compute the timestamp, for example by matching on the `Result` and handling the `Err` case. Additionally, add error handling for the `self.event_sender.send_transaction_sent(sent_event).await` call by capturing and logging or propagating any errors instead of ignoring them.
```rust
// Send Kafka transaction confirmed event
let confirmed_event = TransactionConfirmedEvent {
    transaction_id: job.job.data.transaction_id.clone(),
    user_op_hash: success_data.result.user_op_hash.clone(),
    receipt: success_data.result.receipt.clone(),
    deployment_lock_released: success_data.result.deployment_lock_released,
    timestamp: SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs(),
};

self.event_sender.send_transaction_confirmed(confirmed_event).await;
```
Fix unsafe timestamp calculation and add error handling.

This implementation has the same issues as the send handler:

- Potential panic: `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` can panic.
- Missing error handling: Kafka event sending errors are silently ignored.

Apply this diff to fix the unsafe timestamp calculation:

```diff
-            timestamp: SystemTime::now()
-                .duration_since(UNIX_EPOCH)
-                .unwrap()
-                .as_secs(),
+            timestamp: SystemTime::now()
+                .duration_since(UNIX_EPOCH)
+                .unwrap_or_else(|_| Duration::from_secs(0))
+                .as_secs(),
```

Consider adding error handling for the event sending:

```diff
-        self.event_sender.send_transaction_confirmed(confirmed_event).await;
+        if let Err(e) = self.event_sender.send_transaction_confirmed(confirmed_event).await {
+            tracing::warn!(
+                transaction_id = %job.job.data.transaction_id,
+                error = %e,
+                "Failed to send transaction confirmed event to Kafka"
+            );
+        }
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test and benchmark the code to ensure it meets the requirements.

```rust
// Send Kafka transaction confirmed event
let confirmed_event = TransactionConfirmedEvent {
    transaction_id: job.job.data.transaction_id.clone(),
    user_op_hash: success_data.result.user_op_hash.clone(),
    receipt: success_data.result.receipt.clone(),
    deployment_lock_released: success_data.result.deployment_lock_released,
    timestamp: SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_else(|_| Duration::from_secs(0))
        .as_secs(),
};

if let Err(e) = self.event_sender.send_transaction_confirmed(confirmed_event).await {
    tracing::warn!(
        transaction_id = %job.job.data.transaction_id,
        error = %e,
        "Failed to send transaction confirmed event to Kafka"
    );
}
```
🤖 Prompt for AI Agents

In executors/src/external_bundler/confirm.rs around lines 255 to 267, the timestamp calculation using `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` can panic and lacks error handling, and the call to `send_transaction_confirmed` does not handle potential errors. Replace `unwrap()` with proper error handling to safely compute the timestamp, for example by matching on the result of `duration_since` and handling the error case gracefully. Additionally, capture and handle any errors returned by `send_transaction_confirmed` to avoid silently ignoring failures, such as by logging the error or retrying as appropriate.
```diff
@@ -9,6 +9,7 @@ pub struct EngineConfig {
     pub thirdweb: ThirdwebConfig,
     pub queue: QueueConfig,
     pub redis: RedisConfig,
+    pub kafka: Option<KafkaConfig>,
```
💡 Verification agent

🧩 Analysis chain

Well-designed Kafka configuration structure.

The optional `KafkaConfig` field allows for gradual rollout, and the default values appear reasonable for most use cases. The configuration structure follows good practices with appropriate serde defaults.

Consider verifying that the default values are appropriate for your production environment.

Also applies to: 35-68

🌐 Web query: What are the recommended default values for Kafka producer configuration in production environments, specifically for `batch.size`, `buffer.memory`, `request.timeout.ms`, and max retries?
💡 Result:

In production environments, Kafka producer configuration values often need to be tuned based on the workload, message size, and latency requirements. However, there are commonly recommended starting points for key configurations like `batch.size`, `buffer.memory`, `request.timeout.ms`, and `retries` (max retries):

- `batch.size`
  - Default: 16 KB (16384 bytes)
  - Recommended starting point: 32 KB to 64 KB
  - Larger batch sizes improve throughput but increase latency. Tune based on message size and latency needs.
- `buffer.memory`
  - Default: 32 MB (33554432 bytes)
  - Recommended starting point: 64 MB to 128 MB or more, depending on load and available memory
  - Sufficient buffer memory prevents producer blocking but also increases memory use.
- `request.timeout.ms`
  - Default: 30,000 ms (30 seconds)
  - Keep at the default, or increase it if you expect slow brokers or a slow network
  - Coordinate with `max.block.ms` and with `delivery.timeout.ms` (which must be at least `request.timeout.ms` plus `linger.ms`)
- `retries` (max retries)
  - Default: 0 (no retries)
  - Recommended starting point: 3 to 5 retries
  - This helps handle transient errors and temporary broker unavailability

Note: These values are starting recommendations and should be adjusted based on application behavior, network conditions, broker performance, and message size.
I'd like to confirm the actual defaults in your `KafkaConfig` implementation before updating the feedback. Let's pull the struct definition and default functions:

🏁 Script executed:

```bash
#!/bin/bash
set -eux
# Show the KafkaConfig struct (around lines 35–52)
rg -C2 "struct KafkaConfig" -n server/src/config.rs
# Show all default_* functions in server/src/config.rs
rg "fn default_" -n server/src/config.rs
```

Length of output: 391

🏁 Script executed:

```bash
#!/bin/bash
set -eux
# Display KafkaConfig struct and default functions
sed -n '35,80p' server/src/config.rs
```

Length of output: 961
Adjust KafkaConfig defaults for production readiness.

After inspecting the code in `server/src/config.rs`, your current defaults are:

- `default_batch_size()` -> 1000
- `default_buffer_memory_kb()` -> 32768 (32 MB)
- `default_request_timeout_ms()` -> 5000 (5 s)
- `default_max_retries()` -> 3

Recommended starting points in production are typically:

- `batch.size`: 32 KB–64 KB
- `buffer.memory`: 64 MB–128 MB
- `request.timeout.ms`: 30,000 ms (the default) or higher
- `retries`: 3–5

Only `max_retries` aligns with common guidance. Please update your defaults (or document why lower values are acceptable) to avoid unexpected throughput or timeout issues:

- server/src/config.rs, fn default_batch_size (line 54)
- server/src/config.rs, fn default_buffer_memory_kb (line 58)
- server/src/config.rs, fn default_request_timeout_ms (line 62)
🤖 Prompt for AI Agents

In server/src/config.rs around lines 54 to 62, the default KafkaConfig values for batch size, buffer memory, and request timeout are set too low for production use. Update `default_batch_size` to a value between 32 KB and 64 KB, `default_buffer_memory_kb` to between 65536 and 131072 (64 MB to 128 MB), and `default_request_timeout_ms` to 30000 or higher. Keep `max_retries` as is, or increase it to 5 if desired. Adjust these defaults to align with production best practices and avoid throughput or timeout issues.
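As a concrete reference point, the recommended replacement defaults can be written out as serde-style default functions. The function names mirror the ones found in `server/src/config.rs`; the values are the review's suggested replacements, not the crate's current ones.

```rust
// Sketch of production-leaning defaults matching the review's recommendations.
fn default_batch_size() -> u64 { 65536 }          // 64 KB batch.size
fn default_buffer_memory_kb() -> u64 { 65536 }    // 65536 KB = 64 MB buffer.memory
fn default_request_timeout_ms() -> u64 { 30_000 } // Kafka's own default
fn default_max_retries() -> u64 { 5 }

fn main() {
    println!(
        "batch={} buffer_kb={} timeout_ms={} retries={}",
        default_batch_size(),
        default_buffer_memory_kb(),
        default_request_timeout_ms(),
        default_max_retries()
    );
}
```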
Actionable comments posted: 4
🧹 Nitpick comments (7)
server/Cargo.toml (1)

16-16: Local path dependency lacks a workspace feature gate

`engine-aa-types` is added via a relative path only. If this crate is intended to be published later, add `version = "x.y.z", package = "engine-aa-types"` so that switching from `path` to `version` is painless and dependants do not need to change their manifests.

server/src/lib.rs (1)
5-5: Public surface grows – verify docs & privacy

`pub mod kafka;` exposes the Kafka producer internals. If only the `SharedKafkaProducer` alias is needed by downstream crates, consider re-exporting just that via `pub use` instead of the full module to keep the API narrow.

server/src/main.rs (1)
19-24: Consider making environment detection more explicit.

Defaulting to the "local" environment might mask configuration issues in production deployments. Consider requiring explicit environment configuration or using a more conservative default.

```diff
-    let environment: config::Environment = std::env::var("APP_ENVIRONMENT")
-        .unwrap_or_else(|_| "local".into())
-        .try_into()
-        .expect("Failed to parse APP_ENVIRONMENT");
+    let environment: config::Environment = std::env::var("APP_ENVIRONMENT")
+        .map_err(|_| anyhow::anyhow!("APP_ENVIRONMENT must be set"))
+        .and_then(|env| env.try_into().map_err(|e| anyhow::anyhow!("Invalid APP_ENVIRONMENT: {}", e)))?;
```
- let environment: config::Environment = std::env::var("APP_ENVIRONMENT") - .unwrap_or_else(|_| "local".into()) - .try_into() - .expect("Failed to parse APP_ENVIRONMENT"); + let environment: config::Environment = std::env::var("APP_ENVIRONMENT") + .map_err(|_| anyhow::anyhow!("APP_ENVIRONMENT must be set")) + .and_then(|env| env.try_into().map_err(|e| anyhow::anyhow!("Invalid APP_ENVIRONMENT: {}", e)))?;executors/src/kafka_integration.rs (1)
7-11
: Consider adding error handling to trait methods.The
send_transaction_sent
andsend_transaction_confirmed
methods don't returnResult
, making it impossible for callers to know if event sending failed. While fire-and-forget might be acceptable for non-critical events, consider returningResult<(), Box<dyn Error>>
to improve error observability.server/src/kafka/mod.rs (3)
15-79: Consider reducing duplication between message types.

The `TransactionSentMessage` and `TransactionConfirmedMessage` structs duplicate the event structs from `kafka_integration.rs`. Consider reusing the existing types or creating a shared module.

The `TransactionWebhookEvent` structure, with many optional fields that are mutually exclusive (sent vs confirmed), could be error-prone. Consider using an enum with variants for the different event types instead:

```diff
-pub struct TransactionWebhookEvent {
-    // Core transaction data (always present)
-    pub transaction_id: String,
-    // ... common fields ...
-
-    // Send-specific fields (null for confirm events)
-    pub user_operation_sent: Option<VersionedUserOp>,
-    pub deployment_lock_acquired: Option<bool>,
-
-    // Confirm-specific fields (null for send events)
-    pub receipt: Option<UserOperationReceipt>,
-    pub deployment_lock_released: Option<bool>,
-    // ...
-}
+pub enum TransactionWebhookEvent {
+    Sent {
+        transaction_id: String,
+        // ... common fields ...
+        user_operation_sent: VersionedUserOp,
+        deployment_lock_acquired: bool,
+    },
+    Confirmed {
+        transaction_id: String,
+        // ... common fields ...
+        receipt: UserOperationReceipt,
+        deployment_lock_released: bool,
+        // ...
+    },
+}
```
138-254: Error handling logs but doesn't propagate failures.

The current implementation logs errors but doesn't propagate them to callers. While this matches the trait design, it makes it difficult to monitor event-sending failures programmatically. Consider:
- Adding metrics/counters for send failures
- Implementing a dead letter queue for failed messages
- Adding alerting based on error logs
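As a minimal illustration of the metrics point, a process-local failure counter could be as simple as the following sketch (a real deployment would export this through a metrics library; the names are hypothetical):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Process-local counter for failed event sends; a metrics exporter
// would scrape or report this value.
static KAFKA_SEND_FAILURES: AtomicU64 = AtomicU64::new(0);

fn record_send_failure() {
    KAFKA_SEND_FAILURES.fetch_add(1, Ordering::Relaxed);
}

fn main() {
    // Simulate three failed sends.
    for _ in 0..3 {
        record_send_failure();
    }
    println!("{}", KAFKA_SEND_FAILURES.load(Ordering::Relaxed));
}
```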
356-381: Trait implementation requires redundant type conversions.

The implementation converts between the nearly identical `TransactionSentEvent`/`TransactionSentMessage` types. This duplication could be avoided by using the same types throughout or by implementing `From` traits.
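The `From` approach might look like this sketch; the struct fields here are illustrative stand-ins, not the crate's actual definitions:

```rust
// Hypothetical mirror types; the real structs live in
// executors/src/kafka_integration.rs and server/src/kafka/mod.rs.
struct TransactionSentEvent {
    transaction_id: String,
    chain_id: u64,
}

struct TransactionSentMessage {
    transaction_id: String,
    chain_id: u64,
}

impl From<TransactionSentEvent> for TransactionSentMessage {
    // A straight field-for-field conversion removes the hand-written
    // mapping code at each call site.
    fn from(event: TransactionSentEvent) -> Self {
        Self {
            transaction_id: event.transaction_id,
            chain_id: event.chain_id,
        }
    }
}

fn main() {
    let event = TransactionSentEvent {
        transaction_id: "tx-1".to_string(),
        chain_id: 1,
    };
    // `.into()` now works wherever a message is built from an event.
    let message: TransactionSentMessage = event.into();
    println!("{}:{}", message.transaction_id, message.chain_id);
}
```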
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (13)

- CLAUDE.md (1 hunks)
- executors/Cargo.toml (1 hunks)
- executors/src/external_bundler/confirm.rs (5 hunks)
- executors/src/external_bundler/send.rs (3 hunks)
- executors/src/kafka_integration.rs (1 hunks)
- executors/src/lib.rs (1 hunks)
- server/Cargo.toml (2 hunks)
- server/src/config.rs (2 hunks)
- server/src/execution_router/mod.rs (2 hunks)
- server/src/kafka/mod.rs (1 hunks)
- server/src/lib.rs (1 hunks)
- server/src/main.rs (5 hunks)
- server/src/queue/manager.rs (4 hunks)

🧰 Additional context used

🧬 Code Graph Analysis (1)

server/src/kafka/mod.rs (2)

- server/src/config.rs (2): `config` (138-138), `as_str` (154-160)
- executors/src/kafka_integration.rs (4): `send_transaction_sent` (9-9), `send_transaction_sent` (40-42), `send_transaction_confirmed` (10-10), `send_transaction_confirmed` (44-46)
🪛 LanguageTool

CLAUDE.md

Dozens of minor style flags across the file, chiefly:

- spacing-convention warnings around headings and code fences (lines 1-81)
- missing sentence-final periods after several list items (lines 11, 62, 65, 68, 71)
- dash-usage suggestions when setting off examples (lines 55, 62, 65)
- unnecessary-hyphenation flags on "Account Abstraction" and "EIP-7702" (lines 56-57)
🔇 Additional comments (16)

server/Cargo.toml (1)

44-45: Native dependency footprint & feature set need confirmation

`rdkafka 0.36` pulls in the C library `librdkafka` and OpenSSL at build time.

- CI images / containers must have the headers pre-installed, otherwise the build will fail.
- Consider adding `["cmake-build"]` or disabling default features if deterministic builds are required.
- If your runtime only needs plaintext or SASL/PLAIN, the `"ssl"` feature is unnecessary and adds the OpenSSL surface.

Please double-check the target environments and trim the feature set accordingly.
executors/Cargo.toml (1)

25-25: Avoid multiple `async-trait` versions across the workspace

`async-trait` is now declared here and in `server/Cargo.toml`. Ensure every crate uses the identical version (0.1.83) to prevent duplicate crate instances in the final binary, which bloat size and complicate diagnostics.

executors/src/lib.rs (1)

4-4: Module exposure looks good

`pub mod kafka_integration;` cleanly re-exports the new abstraction without leaking implementation details.

server/src/execution_router/mod.rs (1)
38-38: Arc alias import is fine

Importing `SharedKafkaProducer` keeps the type concise and avoids repeating `Arc<…>`.

server/src/main.rs (1)

139-145: LGTM - Proper Kafka producer shutdown handling.

The Kafka producer flush during shutdown is well implemented, with appropriate error logging. This ensures events are properly sent before the application terminates.
executors/src/external_bundler/confirm.rs (1)

116-132: LGTM - Proper constructor integration.

The constructor properly integrates the event_sender parameter and field initialization. The dependency-injection pattern is well implemented.

server/src/queue/manager.rs (2)

152-154: LGTM - Clean event sender integration.

The event sender creation from the Kafka producer is well implemented and follows the dependency-injection pattern used consistently throughout the codebase.

156-162: LGTM - Proper event sender injection into handlers.

The event sender is correctly passed to both the UserOpConfirmationHandler and ExternalBundlerSendHandler constructors. The cloning pattern ensures proper sharing of the event sender across handlers.
Also applies to: 190-199
server/src/config.rs (3)

35-52: LGTM - Well-structured Kafka configuration.

The KafkaConfig struct includes appropriate fields for Kafka connection and tuning. The optional integration allows conditional Kafka usage without breaking existing deployments.

54-69: LGTM - Reasonable default values for Kafka configuration.

The default values are well chosen for production use:

- 32MB buffer memory provides good throughput
- a 5-second timeout is appropriate for most network conditions
- 3 retries provide good reliability without excessive delays
- a batch size of 1000 balances throughput and latency
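Those defaults could be centralized in a `Default` impl along these lines (a sketch; the actual struct and field names in `server/src/config.rs` may differ):

```rust
#[derive(Debug)]
struct KafkaTuning {
    buffer_memory_bytes: u64,
    request_timeout_ms: u64,
    retries: u32,
    batch_size: usize,
}

impl Default for KafkaTuning {
    fn default() -> Self {
        Self {
            buffer_memory_bytes: 32 * 1024 * 1024, // 32MB buffer memory
            request_timeout_ms: 5_000,             // 5-second timeout
            retries: 3,                            // 3 retries
            batch_size: 1_000,                     // batch size of 1000
        }
    }
}

fn main() {
    let t = KafkaTuning::default();
    println!(
        "{} {} {} {}",
        t.buffer_memory_bytes, t.request_timeout_ms, t.retries, t.batch_size
    );
}
```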
37-39: Update Kafka credential handling and documentation

We already load configuration values (including `kafka.username` and `kafka.password`) from environment variables via `config::Environment::with_prefix("app").separator("__")`, so you can provide credentials at runtime using:
- APP__KAFKA__URL
- APP__KAFKA__USERNAME
- APP__KAFKA__PASSWORD
Make sure not to commit plaintext credentials in your YAML files. For production deployments, inject secrets via your orchestrator or integrate a dedicated secret management solution (e.g., HashiCorp Vault, AWS Secrets Manager) and document the expected environment variables in your README or deployment guide.
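For reference, the `with_prefix("app").separator("__")` convention means a variable like `APP__KAFKA__USERNAME` maps to the nested config key `kafka.username`. A dependency-free sketch of that mapping (the real parsing is done by the `config` crate; this only illustrates the naming convention):

```rust
use std::collections::HashMap;

/// Convert env-var names like APP__KAFKA__USERNAME into dotted config
/// keys like "kafka.username", mirroring
/// `config::Environment::with_prefix("app").separator("__")`.
fn env_to_config_key(var: &str) -> Option<String> {
    let rest = var.strip_prefix("APP__")?;
    Some(
        rest.split("__")
            .map(|part| part.to_ascii_lowercase())
            .collect::<Vec<_>>()
            .join("."),
    )
}

fn main() {
    let vars = ["APP__KAFKA__URL", "APP__KAFKA__USERNAME", "APP__KAFKA__PASSWORD"];
    let mapped: HashMap<&str, String> = vars
        .iter()
        .filter_map(|v| env_to_config_key(v).map(|k| (*v, k)))
        .collect();
    // APP__KAFKA__USERNAME -> "kafka.username", etc.
    for v in vars.iter() {
        println!("{} -> {}", v, mapped[*v]);
    }
}
```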
CLAUDE.md (1)

1-87: Well-structured documentation that aligns with the Kafka integration changes.

The documentation provides comprehensive guidance covering development setup, architecture overview, and configuration. The references to the Kafka integration for transaction event messaging accurately reflect the implementation in this PR.

executors/src/kafka_integration.rs (2)

13-33: Event structures are well designed, with appropriate fields.

The `TransactionSentEvent` and `TransactionConfirmedEvent` structs have clear separation of concerns and use appropriate types. The serde attributes ensure proper JSON serialization.

35-49: Clean no-op implementation and type alias.

The `NoOpEventSender` provides a proper fallback when Kafka is disabled, and the `SharedEventSender` type alias enables clean dependency injection throughout the system.

server/src/kafka/mod.rs (2)

86-136: Well-configured Kafka producer with appropriate settings.

The producer configuration properly combines hardcoded reliability settings (idempotence, acks) with configurable performance settings. The SASL_SSL authentication and the disabled-producer pattern are well implemented.

383-408: Factory functions are well designed.

The factory functions properly handle optional configuration and create appropriate implementations (enabled Kafka or no-op). The pattern enables clean dependency injection.
    pub kafka_producer: SharedKafkaProducer,
}
💡 Verification agent
🧩 Analysis chain
Adding a new required field is a breaking change

`kafka_producer` is non-optional. Every `ExecutionRouter { … }` instantiation (probably in `server/src/main.rs` plus tests) must now supply this field or compilation will fail. Run a workspace-wide search and update all builders/constructors.
🏁 Script executed:
rg 'ExecutionRouter::new|ExecutionRouter {' -n
Length of output: 188
Let’s correct the search to avoid regex errors and find all instantiations and constructors:
🏁 Script executed:
#!/bin/bash
# Search for struct literal uses
rg -n -F 'ExecutionRouter {'
# Search for any new() constructors
rg -n -F 'ExecutionRouter::new'
Length of output: 257
Add missing `kafka_producer` field to all ExecutionRouter instantiations

The `ExecutionRouter` struct now requires a non-optional `kafka_producer` field. Any literal construction must supply this field or compilation will fail.

Locations to update:

- server/src/main.rs:88

  let execution_router = ExecutionRouter {
      // …existing fields…
      // ← add: kafka_producer: /* SharedKafkaProducer instance */,
  };

Please review the entire codebase for any other `ExecutionRouter { … }` usages and ensure each includes the new `kafka_producer` field.
🤖 Prompt for AI Agents
In server/src/execution_router/mod.rs around lines 51 to 52, the ExecutionRouter
struct now includes a mandatory kafka_producer field. You need to update all
instances where ExecutionRouter is instantiated, such as in server/src/main.rs
at line 88, by adding the kafka_producer field with a valid SharedKafkaProducer
instance. Search the entire codebase for all ExecutionRouter constructions and
add the kafka_producer field to each to ensure successful compilation.
    // Send Kafka transaction sent event
    let sent_event = TransactionSentEvent {
        transaction_id: job.job.data.transaction_id.clone(),
        chain_id: job.job.data.chain_id,
        account_address: success_data.result.account_address,
        user_op_hash: success_data.result.user_op_hash.clone(),
        nonce: success_data.result.nonce,
        deployment_lock_acquired: success_data.result.deployment_lock_acquired,
        timestamp: SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs(),
    };

    self.event_sender.send_transaction_sent(sent_event).await;
🛠️ Refactor suggestion
Add error handling for event sending and timestamp generation.

The Kafka event sending lacks error handling, which could cause issues in production:

- The `send_transaction_sent` call is async, but errors are not handled
- The `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` could panic

Apply this diff to add proper error handling:
- // Send Kafka transaction sent event
- let sent_event = TransactionSentEvent {
- transaction_id: job.job.data.transaction_id.clone(),
- chain_id: job.job.data.chain_id,
- account_address: success_data.result.account_address,
- user_op_hash: success_data.result.user_op_hash.clone(),
- nonce: success_data.result.nonce,
- deployment_lock_acquired: success_data.result.deployment_lock_acquired,
- timestamp: SystemTime::now()
- .duration_since(UNIX_EPOCH)
- .unwrap()
- .as_secs(),
- };
-
- self.event_sender.send_transaction_sent(sent_event).await;
+ // Send Kafka transaction sent event
+ let timestamp = SystemTime::now()
+ .duration_since(UNIX_EPOCH)
+ .unwrap_or_else(|_| Duration::from_secs(0))
+ .as_secs();
+
+ let sent_event = TransactionSentEvent {
+ transaction_id: job.job.data.transaction_id.clone(),
+ chain_id: job.job.data.chain_id,
+ account_address: success_data.result.account_address,
+ user_op_hash: success_data.result.user_op_hash.clone(),
+ nonce: success_data.result.nonce,
+ deployment_lock_acquired: success_data.result.deployment_lock_acquired,
+ timestamp,
+ };
+
+ if let Err(e) = self.event_sender.send_transaction_sent(sent_event).await {
+ tracing::error!(
+ transaction_id = %job.job.data.transaction_id,
+ error = %e,
+ "Failed to send transaction sent event to Kafka"
+ );
+ }
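The timestamp fallback used in the diff can be factored into a small std-only helper, sketched here (the function name is illustrative):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Seconds since the Unix epoch, falling back to 0 instead of panicking
/// if the system clock reads earlier than the epoch.
fn unix_timestamp_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_else(|_| Duration::from_secs(0))
        .as_secs()
}

fn main() {
    let ts = unix_timestamp_secs();
    // Any correctly set clock yields a value well above 10^9.
    println!("{}", ts > 1_000_000_000);
}
```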
🤖 Prompt for AI Agents
In executors/src/external_bundler/send.rs around lines 540 to 555, add error
handling for the async call to send_transaction_sent by awaiting the result and
handling any errors returned. Also, replace the unwrap on
SystemTime::now().duration_since(UNIX_EPOCH) with proper error handling to avoid
panics, such as using a match or if let to handle the Result safely and log or
handle the error case appropriately.
    // Send Kafka transaction confirmed event
    let confirmed_event = TransactionConfirmedEvent {
        transaction_id: job.job.data.transaction_id.clone(),
        user_op_hash: success_data.result.user_op_hash.clone(),
        receipt: success_data.result.receipt.clone(),
        deployment_lock_released: success_data.result.deployment_lock_released,
        timestamp: SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs(),
    };

    self.event_sender.send_transaction_confirmed(confirmed_event).await;
🛠️ Refactor suggestion
Add error handling for event sending and timestamp generation.
Similar to the send handler, this event sending lacks proper error handling:
- The `send_transaction_confirmed` call doesn't handle errors
- The timestamp generation could panic on `.unwrap()`

Apply this diff to add proper error handling:
- // Send Kafka transaction confirmed event
- let confirmed_event = TransactionConfirmedEvent {
- transaction_id: job.job.data.transaction_id.clone(),
- user_op_hash: success_data.result.user_op_hash.clone(),
- receipt: success_data.result.receipt.clone(),
- deployment_lock_released: success_data.result.deployment_lock_released,
- timestamp: SystemTime::now()
- .duration_since(UNIX_EPOCH)
- .unwrap()
- .as_secs(),
- };
-
- self.event_sender.send_transaction_confirmed(confirmed_event).await;
+ // Send Kafka transaction confirmed event
+ let timestamp = SystemTime::now()
+ .duration_since(UNIX_EPOCH)
+ .unwrap_or_else(|_| Duration::from_secs(0))
+ .as_secs();
+
+ let confirmed_event = TransactionConfirmedEvent {
+ transaction_id: job.job.data.transaction_id.clone(),
+ user_op_hash: success_data.result.user_op_hash.clone(),
+ receipt: success_data.result.receipt.clone(),
+ deployment_lock_released: success_data.result.deployment_lock_released,
+ timestamp,
+ };
+
+ if let Err(e) = self.event_sender.send_transaction_confirmed(confirmed_event).await {
+ tracing::error!(
+ transaction_id = %job.job.data.transaction_id,
+ error = %e,
+ "Failed to send transaction confirmed event to Kafka"
+ );
+ }
🤖 Prompt for AI Agents
In executors/src/external_bundler/confirm.rs around lines 255 to 268, add error
handling for the send_transaction_confirmed call by awaiting the result and
handling any errors returned. Also, replace the timestamp generation's unwrap
with proper error handling to avoid panics, such as using match or if let to
handle the Result from duration_since(UNIX_EPOCH) safely and log or propagate
errors as appropriate.
impl Drop for KafkaProducer {
    fn drop(&mut self) {
        if self.is_enabled() {
            tracing::debug!("Dropping Kafka producer");
        }
    }
}
Drop implementation should consider flushing pending messages.
The current `Drop` implementation only logs but doesn't flush pending messages. This could lead to message loss during shutdown. Consider calling flush, or at least logging a warning about potential message loss.
impl Drop for KafkaProducer {
fn drop(&mut self) {
if self.is_enabled() {
- tracing::debug!("Dropping Kafka producer");
+ tracing::warn!("Dropping Kafka producer - pending messages may be lost. Consider calling flush() before dropping.");
+ // Note: Can't call async flush() in Drop, but rdkafka's Drop will attempt to flush with a timeout
}
}
}
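The flush-before-drop pattern the note describes can be modeled generically; this is a sketch with a toy producer, not `rdkafka`'s API (whose flush takes a timeout and must be called before the producer is dropped):

```rust
use std::time::Duration;

// Toy producer illustrating the "flush before drop" shutdown pattern.
struct Producer {
    pending: Vec<String>,
}

impl Producer {
    // Deliver everything still buffered; returns how many messages flushed.
    fn flush(&mut self, _timeout: Duration) -> usize {
        let flushed = self.pending.len();
        self.pending.clear();
        flushed
    }
}

impl Drop for Producer {
    fn drop(&mut self) {
        if !self.pending.is_empty() {
            // Last-ditch flush so buffered messages are not silently lost.
            let n = self.flush(Duration::from_secs(5));
            eprintln!("flushed {} pending messages on drop", n);
        }
    }
}

fn main() {
    let mut p = Producer { pending: vec!["evt-1".into(), "evt-2".into()] };
    // Preferred: explicit flush during graceful shutdown...
    let n = p.flush(Duration::from_secs(5));
    println!("{}", n);
    // ...so Drop has nothing left to do.
}
```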
🤖 Prompt for AI Agents
In server/src/kafka/mod.rs around lines 268 to 274, the Drop implementation for
KafkaProducer only logs a debug message without flushing pending messages,
risking message loss on shutdown. Modify the drop method to call a flush
function on the producer to ensure all pending messages are sent before
dropping, or if flush is not feasible, add a warning log indicating potential
message loss during shutdown.
Force-pushed from 6dfeaa7 to 1625d34 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
server/src/kafka/mod.rs (1)
293-299: Drop implementation should flush pending messages.

The current Drop implementation only logs but doesn't flush pending messages, which could lead to message loss during shutdown. While we can't call async flush() in Drop, we should at least warn about potential message loss.

impl Drop for KafkaProducer {
    fn drop(&mut self) {
        if self.is_enabled() {
-           tracing::debug!("Dropping Kafka producer");
+           tracing::warn!("Dropping Kafka producer - pending messages may be lost. Consider calling flush() before dropping.");
+           // Note: Can't call async flush() in Drop, but rdkafka's Drop will attempt to flush with a timeout
        }
    }
}
🧹 Nitpick comments (1)
CLAUDE.md (1)
25-87: Consider adding Kafka integration documentation.

This comprehensive documentation file provides excellent guidance for developers, but it doesn't mention the new Kafka integration introduced in this PR. Consider adding a section about Kafka configuration and event-streaming capabilities, especially since this is a significant architectural addition for real-time transaction event publishing.

You could add a section like:

 ## Key Technologies
 - **Axum** for HTTP server
 - **Alloy** for Ethereum interactions
 - **Redis** for job queue and state
 - **Tokio** for async runtime
 - **Vault SDK** for secure key management
+- **Kafka** for real-time event streaming

And perhaps a dedicated section explaining the event-streaming architecture.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by!**/*.lock
📒 Files selected for processing (13)

- CLAUDE.md (1 hunks)
- executors/Cargo.toml (1 hunks)
- executors/src/external_bundler/confirm.rs (5 hunks)
- executors/src/external_bundler/send.rs (3 hunks)
- executors/src/kafka_integration.rs (1 hunks)
- executors/src/lib.rs (1 hunks)
- server/Cargo.toml (2 hunks)
- server/src/config.rs (2 hunks)
- server/src/execution_router/mod.rs (2 hunks)
- server/src/kafka/mod.rs (1 hunks)
- server/src/lib.rs (1 hunks)
- server/src/main.rs (5 hunks)
- server/src/queue/manager.rs (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- executors/Cargo.toml
- executors/src/lib.rs
- server/src/execution_router/mod.rs
- server/src/lib.rs
- server/Cargo.toml
- server/src/main.rs
- executors/src/external_bundler/confirm.rs
- executors/src/external_bundler/send.rs
- server/src/config.rs
- server/src/queue/manager.rs
🧰 Additional context used
🪛 LanguageTool

CLAUDE.md

Dozens of minor style flags, chiefly:

- spacing-convention warnings around headings, code fences, and section breaks (lines 1-42)
- possible-mistake flags on the hyphenated list items describing the workspace crates (lines 7-35)
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~44-~44: Use correct spacing
Context: ...secure key management ## Configuration System Hierarchical configuration priority: 1....
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~49-~49: Use correct spacing
Context: ... server_production.yaml
) 3. Base YAML (server_base.yaml
) Configuration files located in `server/...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~51-~51: Use correct spacing
Context: ...ase.yaml) Configuration files located in
server/configuration/` ## Transaction Types Supported - **EOA t...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~53-~53: Use correct spacing
Context: ...r/configuration/` ## Transaction Types Supported - EOA transactions - Traditional wallet...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~55-~55: There might be a problem here.
Context: ...## Transaction Types Supported - EOA transactions - Traditional wallet transactions - **Account Abstrac...
(QB_NEW_EN_MERGED_MATCH)
[grammar] ~56-~56: There might be a problem here.
Context: ...itional wallet transactions - Account Abstraction - ERC-4337 smart accounts with gas sponsorship - **EIP-7...
(QB_NEW_EN_MERGED_MATCH)
[grammar] ~57-~57: There might be a problem here.
Context: ...smart accounts with gas sponsorship - EIP-7702 - Delegated transaction execution ## Key Developme...
(QB_NEW_EN_MERGED_MATCH)
[grammar] ~57-~57: Use correct spacing
Context: ... - EIP-7702 - Delegated transaction execution ## Key Development Areas ### API Routes L...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~59-~59: Use correct spacing
Context: ...ansaction execution ## Key Development Areas ### API Routes Located in `server/src/http/...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~62-~62: There might be a mistake here.
Context: ...opment Areas ### API Routes Located in server/src/http/routes/
- follows RESTful patterns wit...
(QB_NEW_EN_OTHER)
[grammar] ~62-~62: Use hyphens correctly
Context: ...s Located in server/src/http/routes/
- follows RESTful patterns with OpenAPI documenta...
(QB_NEW_EN_OTHER_ERROR_IDS_29)
[grammar] ~62-~62: There might be a mistake here.
Context: ...- follows RESTful patterns with OpenAPI documentation ### Background Jobs Implemented in `executo...
(QB_NEW_EN_OTHER)
[grammar] ~65-~65: There might be a problem here.
Context: ...ion ### Background Jobs Implemented in executors/src/
- uses TWMQ for reliable job processing with R...
(QB_NEW_EN_MERGED_MATCH)
[grammar] ~65-~65: There might be a mistake here.
Context: ... for reliable job processing with Redis persistence ### Blockchain Core core/src/
contains ch...
(QB_NEW_EN_OTHER)
[grammar] ~67-~67: Use correct spacing
Context: ... with Redis persistence ### Blockchain Core core/src/
contains chain management, transaction building,...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~68-~68: There might be a mistake here.
Context: ...transaction building, and UserOperation support ### Account Abstraction aa-core/src/
impl...
(QB_NEW_EN_OTHER)
[grammar] ~70-~70: Use correct spacing
Context: ... and UserOperation support ### Account Abstraction aa-core/src/
implements complete ERC-4337 flow including bundle...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~71-~71: There might be a mistake here.
Context: ...core/src/` implements complete ERC-4337 flow including bundler integration ## Error...
(QB_NEW_EN_OTHER)
[grammar] ~71-~71: There might be a mistake here.
Context: ...omplete ERC-4337 flow including bundler integration ## Error Handling Uses comprehensive erro...
(QB_NEW_EN_OTHER)
[grammar] ~73-~73: Use correct spacing
Context: ...including bundler integration ## Error Handling Uses comprehensive error types with con...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~75-~75: Use correct spacing
Context: ... are structured and logged with tracing spans. ## Testing Integration tests require Redi...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~77-~77: Use proper spacing conventions.
Context: ...d logged with tracing spans. ## Testing Integration tests require Redis. Tests c...
(QB_NEW_EN_OTHER_ERROR_IDS_000007)
[grammar] ~79-~79: Use correct spacing
Context: ...erations, transaction building, and API endpoints. ## Production Features - Horizontal scali...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~81-~81: Use correct spacing
Context: ...ding, and API endpoints. ## Production Features - Horizontal scaling via shared Redis bac...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Socket Security: Pull Request Alerts
🔇 Additional comments (10)
executors/src/kafka_integration.rs (3)
6-11: Well-designed trait abstraction for event sending.
The `TransactionEventSender` trait provides a clean abstraction for sending transaction lifecycle events. The async methods and the decision not to return `Result` types are appropriate: event sending should be fire-and-forget so it never blocks the main transaction flow.
13-37: Event structs are well-structured with appropriate serialization.
The event structs include all necessary fields for transaction lifecycle tracking. The use of camelCase serialization and proper type choices (Address, Bytes, U256) aligns well with JSON APIs and Ethereum data types.
40-51: No-op implementation provides good fallback behavior.
The `NoOpEventSender` implementation is correct and provides a clean way to disable event sending when needed.

server/src/kafka/mod.rs (7)
108-139: Kafka producer configuration is well-structured.
The configuration setup properly handles SASL_SSL authentication and combines hardcoded production-ready settings with configurable parameters. The error handling for producer creation is appropriate.
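The review doesn't list the individual settings; as a rough illustration only, a SASL_SSL producer typically combines properties like the following (values are placeholders, and which of these the PR hardcodes versus exposes as configuration is an assumption):

```properties
bootstrap.servers=broker1:9092,broker2:9092
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=<username>
sasl.password=<password>
# Typical production-safety settings
acks=all
enable.idempotence=true
message.timeout.ms=30000
```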
141-154: Disabled producer pattern is well-implemented.
The disabled producer pattern allows for graceful fallback when Kafka is not configured, which is essential for development and testing scenarios.
235-279: Message sending implementation has excellent error handling.
The message sending logic properly handles serialization errors, producer availability, and send failures with detailed structured logging. The timeout configuration is appropriate.
281-291: Flush method implementation is correct.
The flush method properly handles both enabled and disabled states and uses the configured timeout. This is essential for graceful shutdown.
383-408: TransactionEventSender implementation correctly adapts between trait and internal types.
The implementation properly converts between the trait's event types and the internal Kafka message types, maintaining the separation of concerns between the executor abstraction and the Kafka implementation.
411-426: Factory functions provide clean initialization patterns.
The factory functions handle both enabled and disabled Kafka scenarios cleanly, with appropriate logging for operational visibility.
428-435: Event sender factory correctly returns the appropriate implementation.
The factory returns either a Kafka-backed sender or a no-op implementation based on the producer state, maintaining the abstraction while providing the right behavior.
```rust
impl<T> KafkaEventEnvelope<T> {
    pub fn new(team_id: String, project_id: String, data: T) -> Self {
        Self {
            id: format!("evt_{}", cuid::cuid1().expect("Failed to generate CUID")),
            team_id,
            project_id,
            created_at: chrono::Utc::now().to_rfc3339(),
            data,
        }
    }
```
🛠️ Refactor suggestion
Handle potential CUID generation failure.
The CUID generation uses `expect()`, which will panic if it fails. Consider proper error handling instead.
```diff
 pub fn new(team_id: String, project_id: String, data: T) -> Self {
     Self {
-        id: format!("evt_{}", cuid::cuid1().expect("Failed to generate CUID")),
+        id: format!("evt_{}", cuid::cuid1().unwrap_or_else(|_| {
+            tracing::warn!("Failed to generate CUID, using timestamp fallback");
+            chrono::Utc::now().timestamp_nanos_opt().unwrap_or(0).to_string()
+        })),
         team_id,
         project_id,
         created_at: chrono::Utc::now().to_rfc3339(),
         data,
     }
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
impl<T> KafkaEventEnvelope<T> {
    pub fn new(team_id: String, project_id: String, data: T) -> Self {
        Self {
            id: format!("evt_{}", cuid::cuid1().unwrap_or_else(|_| {
                tracing::warn!("Failed to generate CUID, using timestamp fallback");
                chrono::Utc::now().timestamp_nanos_opt().unwrap_or(0).to_string()
            })),
            team_id,
            project_id,
            created_at: chrono::Utc::now().to_rfc3339(),
            data,
        }
    }
}
```
🤖 Prompt for AI Agents
In server/src/kafka/mod.rs around lines 25 to 34, the CUID generation uses
expect() which causes a panic on failure. Refactor the new() function to return
a Result type instead of panicking. Handle the potential error from
cuid::cuid1() gracefully by propagating it or converting it into a suitable
error variant, ensuring the function caller can handle the failure without the
program panicking.
TL;DR
Added Kafka integration for transaction event streaming with a configurable producer implementation.
What changed?
- Added a `kafka` module with producer implementation and event types
- Added a `TransactionEventSender` trait for sending transaction events
- Added a `CLAUDE.md` file with repository documentation and development guidance

How to test?
Verify that events are published to the following Kafka topics:
- `engine.transaction.sent`
- `engine.transaction.confirmed`
Why make this change?
This change enables real-time transaction event streaming to external systems via Kafka, allowing other services to react to transaction lifecycle events without polling. This supports event-driven architectures and enables better integration with monitoring, analytics, and business process systems that need to be notified when transactions are sent and confirmed.
Summary by CodeRabbit
- New Features: Kafka-based publishing of transaction lifecycle events (sent and confirmed), with a configurable producer and a no-op fallback when Kafka is not configured.
- Documentation: Added CLAUDE.md with an architecture overview and development guidance.
- Chores: Added Kafka configuration options and producer initialization/shutdown wiring.