scylla: lazily close excess connections once equilibrium is reached. #14

Merged

merged 2 commits into master from lazy_purge_excess_conns on Mar 20, 2019

Conversation

dahankzter

Keep a list of excess connections and only close these when we have connections to all the shards.
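
For orientation, here is a rough sketch of the picker state that sentence describes. The types below are stand-ins for illustration only, not the driver's actual declarations; only the field names mirror the PR.

// Illustrative stand-in types; the real picker lives in scylla.go.
package shardpicker

// Conn is a placeholder for the driver's connection type.
type Conn struct {
	shard int
}

// Close is a no-op stand-in for closing the underlying connection.
func (c *Conn) Close() error { return nil }

// picker mirrors the idea behind scyllaConnPicker: one slot per shard, plus a
// list of surplus connections that are kept open instead of being closed eagerly.
type picker struct {
	conns       []*Conn // index == shard; nil until that shard has a connection
	nrConns     int     // number of shard slots currently filled
	nrShards    int
	excessConns []*Conn // duplicates parked here until every shard is covered
}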

@avikivity (Member) left a comment

Commit message and code should explain why. Otherwise looks ok.

@@ -80,12 +80,14 @@ func isScyllaConn(conn *Conn) bool {

 // scyllaConnPicker is a specialised ConnPicker that selects connections based
 // on token trying to get connection to a shard containing the given token.
+// A list of excess connections is maintained to allow for lazy garbage collection
 type scyllaConnPicker struct {
Member:

Would be nice to explain why

Author (dahankzter):

Yes, it's a bit sparse; I will amend the doc and commit message.

Author (dahankzter):

Updated both in the code and the commit message.

scylla.go (outdated review comment, resolved)
@@ -189,12 +194,28 @@ func (p *scyllaConnPicker) Put(conn *Conn) {
 		copy(p.conns, conns)
 	}
 	if c := p.conns[s.shard]; c != nil {
-		conn.Close()
+		p.excessConns = append(p.excessConns, conn)

Could we put a limit on how big p.excessConns could be, so that it does not grow unbounded if something goes wrong? Not sure how many connections we need to hit all shards, so I guess if we reach 10*p.nrShards something is off and maybe we should close the connections anyway.

Author (dahankzter), Mar 20, 2019:

I am not sure it helps; it can even make it worse. The idea is that the chance to get a new shard increases significantly with the number of connections, so equilibrium should be reached faster.

You could have multiple clients connecting to the same host though, which would cause it to race, so theoretically you could keep opening connections forever (even if the chance drops with time). I think it's still good to have some kind of limit.
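
A minimal sketch of the kind of safety valve being discussed, reusing the stand-in picker types from the sketch near the top of the thread. The 10*nrShards threshold is the reviewer's suggested magic number, not a measured value.

// putExcess parks a duplicate connection, but closes the whole excess list if
// it grows past an arbitrary cap, so racing clients cannot make it grow
// without bound.
func (p *picker) putExcess(c *Conn) {
	p.excessConns = append(p.excessConns, c)
	if len(p.excessConns) > 10*p.nrShards {
		p.closeExcessConns()
	}
}

// closeExcessConns closes and forgets every parked connection.
func (p *picker) closeExcessConns() {
	for _, c := range p.excessConns {
		c.Close()
	}
	p.excessConns = nil
}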

@dahankzter force-pushed the lazy_purge_excess_conns branch 4 times, most recently from ddb8628 to c917adb on March 20, 2019 at 11:57
Henrik Johansson added 2 commits March 20, 2019 13:01
A list of excess connections is maintained to allow for lazy removal of
excess connections. Keeping excess connections open helps reach equilibrium
faster, since the likelihood of hitting the same shard decreases with the number
of connections to the shard. The excess connections list is currently
capped at 10 times the number of shards. This magic number has no
backing statistics but sounds reasonable.

When the pool is not yet fully materialized we give the caller
a random connection to allow the session to start working.
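
As a rough sketch of the behaviour these two commit messages describe, again using the stand-in types and the putExcess/closeExcessConns helpers from the earlier sketches rather than the driver's real code: duplicates are parked until every shard has a connection and then closed in one go, and while the pool is still filling up a caller gets whichever connection happens to be open. The real picker's token-based selection is omitted here.

import "math/rand"

// put files a new connection under its shard; a duplicate for an already
// covered shard is parked via putExcess. Once every shard has a connection,
// the parked duplicates are closed in one go. Assumes p.conns was allocated
// with length p.nrShards.
func (p *picker) put(c *Conn) {
	if p.conns[c.shard] != nil {
		p.putExcess(c)
		return
	}
	p.conns[c.shard] = c
	p.nrConns++
	if p.nrConns >= p.nrShards {
		p.closeExcessConns() // equilibrium reached: drop the surplus
	}
}

// pick returns any open connection, starting the scan at a random shard, so
// the session can begin working before every shard is covered.
func (p *picker) pick() *Conn {
	if p.nrConns == 0 {
		return nil
	}
	start := rand.Intn(len(p.conns))
	for i := 0; i < len(p.conns); i++ {
		if c := p.conns[(start+i)%len(p.conns)]; c != nil {
			return c
		}
	}
	return nil
}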
@dahankzter merged commit 39a56ac into master on Mar 20, 2019