
Merge pull request #2 from toru/proxy
Address minor typos in the proxy feature documentation
dormando committed Sep 17, 2024
2 parents 1abc652 + 6ee6c35 commit 4695edb
Showing 2 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions content/features/proxy/_index.md
@@ -733,7 +733,7 @@ function generator(rctx)
return function(r)
-- could break, since we are now directly referencing the global
-- table, which can change. many times this won't matter, but a best
- -- practice is to always pass referenecs down when needed.
+ -- practice is to always pass references down when needed.
local foo = lookup[input]
end
end
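For illustration, a minimal Lua sketch of the "pass references down" practice the comment above describes. The names `lookup` and `input` come from the snippet; the table contents and the usage around them are assumptions for the example:

```lua
local lookup = { input = "value" }  -- stands in for the global lookup table

local function generator(rctx)
    local captured = lookup  -- bind the reference once, at generation time
    return function(r)
        -- safe: `captured` is an upvalue, so replacing the global `lookup`
        -- later cannot change what this closure sees
        return captured["input"]
    end
end

local fn = generator(nil)
lookup = {}      -- the global is swapped out after generation
print(fn(nil))   --> "value"
```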
@@ -790,7 +790,7 @@ mcp.backend_flap_backoff_ramp(seconds)
-- reasonable timeframe.
mcp.backend_flap_backoff_max(seconds)

- -- Whether or not a backend is hanndled by worker threads or a dedicated IO
+ -- Whether or not a backend is handled by worker threads or a dedicated IO
-- thread, by default.
-- disabled by default, which provides better scalability at the cost of more
-- TCP connections and less batching of backend syscalls.
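As a hedged example of the knobs named in this hunk, `mcp.backend_flap_backoff_ramp` and `mcp.backend_flap_backoff_max` might be called like this during the proxy's configuration stage; the values are purely illustrative:

```lua
-- illustrative values only; semantics are described in the comments above
mcp.backend_flap_backoff_ramp(2)   -- ramp the retry backoff for a flapping backend
mcp.backend_flap_backoff_max(60)   -- cap the backoff wait at 60 seconds
```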
@@ -901,7 +901,7 @@ a good number of language and performance improvements on its own.
## Why not use a mesh router?

Memcached's proxy is not intended to replace a mesh router; its scope is much
- smaller and more performance focused. A mesh router may be highly confgurable,
+ smaller and more performance focused. A mesh router may be highly configurable,
with broad support, but will be very slow. Caching services (and in this case
a caching proxy) can be used to restore performance to a service migrated to a
mesh router; for cost or practicality reasons.
4 changes: 2 additions & 2 deletions content/features/proxy/arch.md
@@ -175,7 +175,7 @@ Here we will:
- If there is another response ready in the buffer, immediately process it.
- Once complete, go back to waiting.

- If more requests arrive while Backend B is waiting for respones, it will
+ If more requests arrive while Backend B is waiting for responses, it will
immediately write() them to the same socket. If the socket buffer is full, it
will wait until it can write more. Thus new requests are not delayed while
waiting for a previous batch to complete.
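A toy model (an assumption for illustration, not proxy source) of the batching behavior this paragraph describes: new requests are written immediately even while earlier ones await responses, and responses are matched back to requests in order:

```lua
local pending = {}  -- requests written to the backend, awaiting responses

local function on_request(req)
    -- a new request is "written" right away, even while earlier
    -- requests in `pending` still await their responses
    table.insert(pending, req)
    print("write: " .. req)
end

local function on_response(resp)
    -- responses arrive in the order the requests were written
    local req = table.remove(pending, 1)
    print(req .. " -> " .. resp)
end

on_request("get a")
on_request("get b")   -- not held back behind "get a"
on_response("VALUE a")
on_response("VALUE b")
```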
@@ -185,7 +185,7 @@ off of the socket, so there is no internal delay for waiting on a batch to
process.

If Backend B breaks for some reason, the queue is immediately drained and
- error responses are sent to all waiting Client's.
+ error responses are sent to all waiting Clients.

TODO: chart.
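A matching toy sketch (again an illustration, not proxy source) of the failure path above: when the backend breaks, the entire pending queue is drained and every waiting client receives an error at once:

```lua
local pending = { "get a", "get b", "get c" }  -- requests awaiting responses

local function drain_with_errors(reason)
    while #pending > 0 do
        local req = table.remove(pending, 1)
        print(req .. " -> SERVER_ERROR " .. reason)
    end
end

drain_with_errors("backend failure")
```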

