diff --git a/content/features/proxy/_index.md b/content/features/proxy/_index.md
index 4cfd925..91a4c0a 100644
--- a/content/features/proxy/_index.md
+++ b/content/features/proxy/_index.md
@@ -733,7 +733,7 @@ function generator(rctx)
     return function(r)
         -- could break, since we are now directly referencing the global
         -- table, which can change. many times this won't matter, but a best
-        -- practice is to always pass referenecs down when needed.
+        -- practice is to always pass references down when needed.
         local foo = lookup[input]
     end
 end
@@ -790,7 +790,7 @@ mcp.backend_flap_backoff_ramp(seconds)
 -- reasonable timeframe.
 mcp.backend_flap_backoff_max(seconds)
 
--- Whether or not a backend is hanndled by worker threads or a dedicated IO
+-- Whether or not a backend is handled by worker threads or a dedicated IO
 -- thread, by default.
 -- disabled by default, which provides better scalability at the cost of more
 -- TCP connections and less batching of backend syscalls.
@@ -901,7 +901,7 @@ a good number of language and performance improvements on its own.
 ## Why not use a mesh router?
 
 Memcached's proxy is not intended to replace a mesh router; its scope is much
-smaller and more performance focused. A mesh router may be highly confgurable,
+smaller and more performance focused. A mesh router may be highly configurable,
 with broad support, but will be very slow. Caching services (and in this case
 a caching proxy) can be used to restore performance to a service migrated to a
 mesh router; for cost or practicality reasons.
diff --git a/content/features/proxy/arch.md b/content/features/proxy/arch.md
index 696eb42..356431f 100644
--- a/content/features/proxy/arch.md
+++ b/content/features/proxy/arch.md
@@ -175,7 +175,7 @@ Here we will:
 - If there is another response ready in the buffer, immediately process it.
 - Once complete, go back to waiting.
 
-If more requests arrive while Backend B is waiting for respones, it will
+If more requests arrive while Backend B is waiting for responses, it will
 immediately write() them to the same socket. If the socket buffer is full, it
 will wait until it can write more. Thus new requests are not delayed while
 waiting for a previous batch to complete.
@@ -185,7 +185,7 @@ off of the socket, so there is no internal delay for waiting on a batch to
 process.
 
 If Backend B breaks for some reason, the queue is immediately drained and
-error responses are sent to all waiting Client's.
+error responses are sent to all waiting Clients.
 
 TODO: chart.
 
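
The first hunk above corrects a comment in a Lua example about closure scoping. As a minimal sketch of that "pass references down" advice in plain Lua — the `lookup`, `generator_global`, and `generator_ref` names here are illustrative only, not part of the proxy's `mcp` API — capturing the table as an upvalue when the closure is created keeps it stable even if the global name is later rebound:

```lua
-- Hypothetical, self-contained illustration of the advice in the first
-- hunk; all names are invented for this example.
lookup = { a = "pool_one", b = "pool_two" }  -- a global table that may be swapped out

-- Fragile: the returned function re-reads the global name `lookup` on
-- every call, so it follows whatever table that name points to later.
local function generator_global()
    return function(key)
        return lookup[key]
    end
end

-- Robust: the table is passed in and captured as an upvalue, so the
-- closure keeps using the exact table it was given.
local function generator_ref(map)
    return function(key)
        return map[key]
    end
end

local fragile = generator_global()
local robust  = generator_ref(lookup)
lookup = {}                  -- the global name is rebound to a new table
print(robust("a"))           --> pool_one  (upvalue still points at the old table)
print(fragile("a"))          --> nil       (the global now names the empty table)
```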