Running the 1MB-response benchmark for various READ_NUM_BYTES values, and a matplotlib script later, here's a handy little plot to support the assessment that 64kB is probably a good value…
The median execution time drops sharply as the chunk size increases, but with diminishing returns: beyond 32kB the marginal improvement levels off. 64kB gets us below 1s in wall time for a 1MB response, and that sounds very satisfying. :-)
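The shape of that curve can be sanity-checked with a bit of arithmetic (a standalone sketch, separate from the actual benchmark): for a fixed response size, each doubling of the chunk size halves the number of reads, so every successive doubling saves less in absolute terms.

```python
# Reads needed to consume a 1MB response at each candidate chunk size.
# Each doubling halves the count, so the absolute saving shrinks every step:
# 4kB -> 8kB saves 128 reads, while 32kB -> 64kB saves only 16.
RESPONSE_SIZE = 1024 * 1024  # 1MB

for chunk_kb in (4, 8, 16, 32, 64, 128):
    reads = RESPONSE_SIZE // (chunk_kb * 1024)
    print(f"{chunk_kb:>3}kB chunks -> {reads:>3} reads")
```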
Coming from discussion on Gitter with @dalf…
Currently we are reading response data in chunks of 4kB…
`httpcore/httpcore/_async/http11.py`, line 26 (at commit `f4240b6`)
`httpcore/httpcore/_async/http2.py`, line 29 (at commit `f4240b6`)
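For context, here's a minimal sketch of the kind of read loop this constant drives; the names (`READ_NUM_BYTES`, `read_body`) and the in-memory stream are illustrative, not httpcore's actual API:

```python
import io

# Illustrative constant: the proposed 64kB chunk size, up from the current 4kB.
READ_NUM_BYTES = 64 * 1024

def read_body(stream) -> bytes:
    """Drain a stream in READ_NUM_BYTES-sized chunks and reassemble the body."""
    chunks = []
    while True:
        chunk = stream.read(READ_NUM_BYTES)
        if not chunk:  # EOF
            break
        chunks.append(chunk)
    return b"".join(chunks)

# Usage: a 1MB in-memory "response" round-trips intact.
payload = bytes(range(256)) * 4096  # 1MB
assert read_body(io.BytesIO(payload)) == payload
```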
Benchmarking with @dalf's pyhttp-benchmark tool showed that increasing this number to 64kB can yield a 2-3x improvement in execution time for large responses (typically > 256kB).
My rationale is that reading N bytes in one go via a syscall is faster than reading n = N/k bytes k times, mostly because the kernel is far faster than Python at moving bytes around.
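The k-reads effect can be observed with a real socket. A hedged sketch using the standard library's `socket.socketpair` (the 1MB payload and chunk sizes are illustrative; `recv` may return short reads, so the counts are lower bounds, not exact):

```python
import socket
import threading

# Stream 1MB across a socket pair and count how many recv() calls
# (i.e. syscalls) each chunk size needs to drain it.
PAYLOAD = b"x" * (1024 * 1024)

def count_recvs(chunk_size: int) -> int:
    a, b = socket.socketpair()
    writer = threading.Thread(target=lambda: (a.sendall(PAYLOAD), a.close()))
    writer.start()
    reads, received = 0, 0
    while received < len(PAYLOAD):
        data = b.recv(chunk_size)
        reads += 1
        received += len(data)
    writer.join()
    b.close()
    return reads

# Larger chunks need proportionally fewer syscalls for the same payload.
print("4kB chunks: ", count_recvs(4 * 1024), "recv() calls")
print("64kB chunks:", count_recvs(64 * 1024), "recv() calls")
```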