Salvage mplex in the age of resource management #99

Merged · 2 commits · Feb 28, 2022

Conversation

@vyzo (Contributor) commented Feb 26, 2022

Instead of blindly pushing packets until we run out of memory and die because the resource manager throttled us, we limit the number of in-flight inbound and outbound buffers.
As a side effect, we no longer crash and burn when the reader is slow; we just block.
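
To make the approach concrete, here is a minimal sketch of the buffer-limiting idea, assuming a counting-semaphore channel per direction; the names (`Multiplex.bufIn`, `getBufferInbound`, `ErrShutdown`) are illustrative, not the identifiers used in this PR:

```go
package mplexsketch

import "errors"

// ErrShutdown is returned when the multiplexer is closed while a caller
// is blocked waiting for a buffer slot.
var ErrShutdown = errors.New("multiplex is shut down")

// Multiplex holds a counting semaphore per direction; only the inbound
// side is shown here.
type Multiplex struct {
	bufIn    chan struct{} // one token per in-flight inbound buffer
	shutdown chan struct{}
}

func NewMultiplex(maxBuffers int) *Multiplex {
	return &Multiplex{
		bufIn:    make(chan struct{}, maxBuffers),
		shutdown: make(chan struct{}),
	}
}

// getBufferInbound blocks until an inbound slot is free, instead of
// failing (and killing the connection) when too many buffers are in flight.
func (mp *Multiplex) getBufferInbound(length int) ([]byte, error) {
	select {
	case mp.bufIn <- struct{}{}: // acquire a slot; blocks when all slots are taken
	case <-mp.shutdown:
		return nil, ErrShutdown
	}
	return make([]byte, length), nil
}

// putBufferInbound releases the slot once the consumer is done with the buffer.
func (mp *Multiplex) putBufferInbound(buf []byte) {
	<-mp.bufIn
}
```

A slow reader then simply delays the next `getBufferInbound` call rather than triggering a connection reset; the outbound side works the same way.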

Review thread on multiplex.go (outdated):

```go
	return mp.getBuffer(length)
}

func (mp *Multiplex) getBuffer(length int) ([]byte, error) {
	if err := mp.memoryManager.ReserveMemory(length, 128); err != nil {
```
Contributor:

Can we reserve all the memory mplex could possibly consume up front, and get rid of this call?

@vyzo (author):

We could, but that's 16M a pop:
16 buffers x 1M max message size.

But you are right, we should.
I guess we can reserve incrementally to find how much wiggle room we have, up to 16 buffers (maybe fewer?).
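
A rough sketch of that incremental idea (continuing the illustrative package from the sketch above): probe the resource manager for successively smaller reservations until one succeeds. `MemoryManager` is a stand-in for the scope exposing the `ReserveMemory(size, prio)` call shown in the diff; `reserveBuffers` is an illustrative name, not this PR's code:

```go
// MemoryManager is a stand-in for the resource manager scope used in the
// diff above; only ReserveMemory is assumed here.
type MemoryManager interface {
	ReserveMemory(size int, prio uint8) error
}

// reserveBuffers tries to reserve room for up to maxBufs message-sized
// buffers, backing off one buffer at a time when a reservation is refused,
// and reports how many buffers it actually got memory for (0 if none).
func reserveBuffers(mm MemoryManager, maxBufs, msgSize int) int {
	for bufs := maxBufs; bufs > 0; bufs-- {
		if mm.ReserveMemory(bufs*msgSize, 128) == nil {
			return bufs
		}
	}
	return 0
}
```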

@vyzo (author):

OK, done; I also cut it to 8 buffers total (4 in each direction), so the reservation is only up to 8M.

- memory is reserved up front
- the number of buffers in flight is limited
- the insane behaviour of resetting connections with slow readers is gone; now we just block
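
Continuing the sketches above, roughly what that final shape could look like: reserve memory for all eight message-sized buffers once, at construction time, so a refusal from the resource manager surfaces before any stream exists. The constants and constructor name are illustrative; the PR's actual values are in the diff:

```go
const (
	// 1M max message size and 4 buffers per direction, per the discussion above.
	MaxMessageSize = 1 << 20
	maxBuffersIn   = 4
	maxBuffersOut  = 4
)

// newMultiplexWithReservation reserves the full ~8M up front so a failed
// reservation is reported at construction time instead of killing a live
// connection later. A matching release of the reservation on close is
// omitted here.
func newMultiplexWithReservation(mm MemoryManager) (*Multiplex, error) {
	total := (maxBuffersIn + maxBuffersOut) * MaxMessageSize // 8 buffers, up to 8M
	if err := mm.ReserveMemory(total, 128); err != nil {
		return nil, err
	}
	// (an analogous bufOut channel for the outbound side is omitted here)
	return &Multiplex{
		bufIn:    make(chan struct{}, maxBuffersIn),
		shutdown: make(chan struct{}),
	}, nil
}
```
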
@vyzo (Contributor, author) commented Feb 28, 2022

squashed some commits.

@marten-seemann merged commit a03888c into master on Feb 28, 2022
@Stebalien (Member):

Why are we reserving memory up-front instead of on-demand? That, combined with assuming that every message is at the max message size, will basically make mplex useless.

@Stebalien (Member):

> As a side effect, we no longer crash and burn when the reader is slow; we just block.

We didn't crash and burn, we killed the stream so the entire system didn't deadlock. Please revert that!

@marten-seemann (Contributor):

> We didn't crash and burn, we killed the stream so the entire system didn't deadlock.

The fact that mplex was randomly resetting streams is what made mplex unusable, as every transfer larger than a few MB would run into this limit.

@Stebalien (Member):

> The fact that mplex was randomly resetting streams is what made mplex unusable, as every transfer larger than a few MB would run into this limit.

  1. Only if the reader didn't read from the stream fast enough.
  2. I should have read the code, it still does this: https://github.com/libp2p/go-mplex/pull/99/files#diff-1b51c99a200935510d1b391b27ba28918fa265dffdeaab00d4a9b393f0e3e520R449

So I'm now very confused.

@marten-seemann (Contributor):

Sorry for the confusion, we didn't reset the stream, we killed the connection when a memory allocation failed:
https://github.com/libp2p/go-mplex/blob/v0.4.0/multiplex.go#L570-L578

@vyzo (Contributor, author) commented Feb 28, 2022

The reservation must be for the max message size because we really do send messages that long (and must be prepared to receive them).

We could theoretically try to precommit memory and track actual usage every time we get a buffer, but it gets complicated, and the only reasonable way to deal with the condition and the timeouts/cancellations is to spawn a goroutine every time we block.

@vyzo (Contributor, author) commented Feb 28, 2022

Note that mplex was effectively unusable with dynamic memory reservations, because we don't know who a given buffer belongs to and thus must kill the connection when a reservation fails.

@Stebalien (Member):

> Sorry for the confusion, we didn't reset the stream, we killed the connection when a memory allocation failed:

Ah, I thought you were referring to what mplex did before. That makes more sense.

> The reservation must be for the max message size because we really do send messages that long (and must be prepared to receive them).

I'm still confused. Why can't we call ReserveMemory right before we allocate a buffer instead of keeping it around?

> Note that mplex was effectively unusable with dynamic memory reservations, because we don't know who a given buffer belongs to and thus must kill the connection when a reservation fails.

This cannot be correct. Like, literally cannot.

  1. We know exactly as much as we do in yamux.
  2. We know how big the message is before we try to read it.
  3. We know how big the write buffer is before we try to send.

@vyzo (Contributor, author) commented Feb 28, 2022

> I'm still confused. Why can't we call ReserveMemory right before we allocate a buffer instead of keeping it around?

Because it can fail, and then we have to kill the connection. If your concern is that we are not using the reserved memory very effectively, then we can do more detailed memory accounting at the cost of a goroutine for every blocked allocation. But that just pushes the problem a bit further; it doesn't solve it.

> We know how big the message is before we try to read it.

Yes, and if we can't get memory for it we must die, because we don't know what the message is.
I think it is strictly better to block until a buffer is available than to fail to get one.

@vyzo deleted the fix/salvage branch on February 28, 2022, 14:35
@Stebalien (Member):

> Because it can fail, and then we have to kill the connection.

Can we not block?

> If your concern is that we are not using the reserved memory very effectively, then we can do more detailed memory accounting at the cost of a goroutine for every blocked allocation.

Why? I assume we'd just block the receive until we have some memory.

That's partially my concern. My main concern is that mplex is going to sit on massive memory allocations without using them.

> Yes, and if we can't get memory for it we must die, because we don't know what the message is.
> I think it is strictly better to block until a buffer is available than to fail to get one.

Yes, then why aren't we doing this?

@vyzo (Contributor, author) commented Feb 28, 2022

> > Because it can fail, and then we have to kill the connection.
>
> Can we not block?

how?

> > If your concern is that we are not using the reserved memory very effectively, then we can do more detailed memory accounting at the cost of a goroutine for every blocked allocation.
>
> Why? I assume we'd just block the receive until we have some memory.
>
> That's partially my concern. My main concern is that mplex is going to sit on massive memory allocations without using them.

If that's your concern, yes, I can fix that; it's just more complexity for marginal benefit imo.

> > Yes, and if we can't get memory for it we must die, because we don't know what the message is.
> > I think it is strictly better to block until a buffer is available than to fail to get one.
>
> Yes, then why aren't we doing this?

That's what we do, we block until we can get a buffer!

@Stebalien (Member):

> > Can we not block?
>
> how?

It looks like we need a way to block on resource allocation. This isn't the first time, and won't be the last.

> If that's your concern, yes, I can fix that; it's just more complexity for marginal benefit imo.

Reserving multiple megabytes per connection seems like a pretty big problem, right?

> That's what we do, we block until we can get a buffer!

No, we're reserving multiple megabytes of memory up-front. That's my concern.
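
On "a way to block on resource allocation": with the resource manager used here exposing only a fail-fast ReserveMemory, a blocking variant would have to be layered on top, for example by polling. The helper below (continuing the sketches above) is purely hypothetical, not an existing resource-manager API; `reserveBlocking` and the 10ms retry interval are assumptions:

```go
import (
	"context"
	"time"
)

// reserveBlocking is a hypothetical helper, not an API the resource manager
// provides: it retries ReserveMemory until it succeeds or the context is
// cancelled, turning a hard reservation failure into backpressure.
func reserveBlocking(ctx context.Context, mm MemoryManager, size int, prio uint8) error {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := mm.ReserveMemory(size, prio); err == nil {
			return nil
		}
		select {
		case <-ticker.C: // back off briefly, then retry the reservation
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```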

@BigLep commented Mar 1, 2022

@vyzo: is there any follow-up work that needs to happen? I'm asking because I saw comments/discussion after this was merged.

@BigLep mentioned this pull request on Mar 1, 2022
@vyzo (Contributor, author) commented Mar 2, 2022

Yes, we decided to drastically reduce the memory precommit.
