
Multi-mercury server #12337

Conversation

samsondav
Collaborator

No description provided.

Contributor

github-actions bot commented Mar 7, 2024

I see that you haven't updated any CHANGELOG files. Would it make sense to do so?

@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from 9fcfd85 to cddf1a6 Compare March 7, 2024 17:56
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from cddf1a6 to 48e620c Compare March 8, 2024 17:14
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from cd3c53c to e19b233 Compare March 11, 2024 15:54
@samsondav samsondav marked this pull request as ready for review March 11, 2024 17:55
@samsondav samsondav requested a review from a team as a code owner March 11, 2024 17:55
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from e19b233 to eb26b48 Compare March 11, 2024 17:56
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from eb26b48 to 67eab55 Compare March 11, 2024 17:57
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from c12a57e to b9c92ba Compare March 12, 2024 12:46
@samsondav samsondav force-pushed the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch from b9c92ba to 6811852 Compare March 12, 2024 12:47
@samsondav samsondav requested a review from jmank88 March 12, 2024 14:08
return
}

func ValidatePluginConfig(config PluginConfig, feedID mercuryutils.FeedID) (merr error) {
Contributor

Should we protect against the same server URL being specified multiple times? I don't think there are any technical sharp edges with allowing it (checkout from the mercury pool will make sure a wsrpc client only has one connection to the mercury server, IIUC).

Collaborator Author

It unmarshals into a map, so duplicates would be handled however the TOML library usually handles that: either the dupe would be ignored, or it would throw an error. Either way, it's impossible to have dupes after unmarshalling.
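If an explicit guard were ever wanted (say, if the config later moved from a map to a list), a minimal sketch might look like the following. `validateServerURLs` and the `wss://` stripping here are hypothetical illustrations, not the PR's actual validation code:

```go
package main

import (
	"fmt"
	"strings"
)

// validateServerURLs is a hypothetical duplicate check: it normalizes each
// URL by stripping an optional wss:// scheme, then rejects any repeats.
func validateServerURLs(urls []string) error {
	seen := make(map[string]struct{})
	for _, u := range urls {
		normalized := strings.TrimPrefix(u, "wss://")
		if _, ok := seen[normalized]; ok {
			return fmt.Errorf("duplicate Mercury server URL: %q", u)
		}
		seen[normalized] = struct{}{}
	}
	return nil
}

func main() {
	fmt.Println(validateServerURLs([]string{"wss://a.example", "b.example"})) // <nil>
	fmt.Println(validateServerURLs([]string{"wss://a.example", "a.example"})) // duplicate error
}
```

Note that the normalization step matters: `wss://a.example` and `a.example` refer to the same server, so a naive string comparison would miss that duplicate.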

patrickhuie19
patrickhuie19 previously approved these changes Mar 13, 2024
Contributor

@patrickhuie19 patrickhuie19 left a comment

One non-blocking question; LGTM

@samsondav samsondav added this pull request to the merge queue Mar 13, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 13, 2024
@samsondav samsondav added this pull request to the merge queue Mar 14, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 14, 2024
@@ -23,7 +23,7 @@ func bootstrapPersistenceManager(t *testing.T, jobID int32, db *sqlx.DB) (*Persi
t.Helper()
lggr, observedLogs := logger.TestLoggerObserved(t, zapcore.DebugLevel)
orm := NewORM(db, lggr, pgtest.NewQConfig(true))
return NewPersistenceManager(lggr, orm, jobID, 2, 5*time.Millisecond, 5*time.Millisecond), observedLogs
return NewPersistenceManager(lggr, "mercuryserver.example", orm, jobID, 2, 5*time.Millisecond, 5*time.Millisecond), observedLogs
Contributor

Is mercuryserver.example a valid wss URL? If not, why does this work? I saw some validation code earlier in the PR to ensure wss URLs. Where does that come in, and where is it tested?

Collaborator Author

In the DB and internally, the scheme is always stripped. You can provide URLs in the TOML in either form:

  • wss://example.com
  • example.com

Both are valid.

@samsondav samsondav added this pull request to the merge queue Mar 14, 2024
@@ -37,10 +38,11 @@ type PersistenceManager struct {
pruneFrequency time.Duration
}

func NewPersistenceManager(lggr logger.Logger, orm ORM, jobID int32, maxTransmitQueueSize int, flushDeletesFrequency, pruneFrequency time.Duration) *PersistenceManager {
func NewPersistenceManager(lggr logger.Logger, serverURL string, orm ORM, jobID int32, maxTransmitQueueSize int, flushDeletesFrequency, pruneFrequency time.Duration) *PersistenceManager {
Contributor

Are persistence managers completely independent? If a mercury reporting plugin writes to two servers and one of those refuses some data or otherwise fails, does the reporting plugin care? Asked differently: are there any implicit or explicit assumptions, when reading from the persistence manager, that the target servers are in sync or otherwise have coupled constraints?

Collaborator Author

Yes, they are independent; each PersistenceManager is scoped to a unique URL.

Collaborator Author

if a mercury reporting plugin writes to two servers and one of those refuses some data or otherwise fails, does the reporting plugin care?

Nope, the reporting plugin doesn't know or care.

are there any implicit or explicit assumptions when reading from the persistence manager that the target servers are in sync or otherwise have coupled constraints

Nope, the queues operate completely independently, by design.

}
}

func (tq *TransmitQueue) Init(transmissions []*Transmission) {
Contributor

Should this be wrapped in a do-once (e.g. sync.Once)?

Collaborator Author

Hmm, no, I don't think so. It's up to the caller; presumably they could Init twice if they wanted to.

Contributor

I only see this called in tests. Is it missing from somewhere else? Should there be a guard somewhere that fails cleanly if Init hasn't been called? Right now I'm concerned there is a nil pointer panic lurking when someone forgets to call Init.
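If a do-once guard were wanted, `sync.Once` is the idiomatic shape in Go. This is a hypothetical sketch (the `onceQueue` type and its fields are invented for illustration, not the PR's TransmitQueue):

```go
package main

import (
	"fmt"
	"sync"
)

// Transmission is a placeholder for the real transmission record.
type Transmission struct{ ID int }

// onceQueue sketches an Init that is safe to call multiple times:
// only the first call takes effect; later calls are no-ops.
type onceQueue struct {
	initOnce sync.Once
	pq       []*Transmission
}

func (q *onceQueue) Init(transmissions []*Transmission) {
	q.initOnce.Do(func() {
		q.pq = append(q.pq, transmissions...)
	})
}

func main() {
	q := &onceQueue{}
	q.Init([]*Transmission{{ID: 1}})
	q.Init([]*Transmission{{ID: 2}}) // ignored: Init already ran
	fmt.Println(len(q.pq)) // 1
}
```

The trade-off discussed in the thread still applies: a `sync.Once` silently drops the second call, whereas leaving Init unguarded keeps the caller in control — but then a guard (or documented invariant) is needed to avoid using the queue before Init has run.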

Merged via the queue into develop with commit 195b504 Mar 14, 2024
97 checks passed
@samsondav samsondav deleted the MERC-4006-implement-core-node-sending-to-multiple-wsrpc-instances branch March 14, 2024 14:38