Description
A note for the community
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Problem
When using Vector's socket (or syslog) source with TLS enabled, ingest performance from a high-throughput client (e.g., ~80K EPS from a Palo Alto firewall) drops significantly when the total number of TLS connections exceeds the number of CPU cores — even when the additional connections are completely idle (send no data).
This appears to be a scalability issue in how TLS connections are handled and scheduled in the async runtime (Tokio), leading to read starvation on active connections and growing socket receive queues (Recv-Q).
Configuration
Version
0.47
Debug Output
Example Data
No response
Additional Context
How to Reproduce
1. Use a system with N CPU cores (e.g., 32).
2. Configure Vector with a socket source with TLS enabled (see the config sketch below).
3. Send logs from a single high-throughput source (~80,000 EPS).
4. Confirm the ingest rate is stable and high (e.g., ~75–80K EPS).
5. Add idle TLS connections that send no data (simulated with openssl s_client; see the sketch below).
6. Observe:
   - EPS from the high-throughput source drops sharply once the number of idle connections exceeds the core count.
   - Recv-Q (via ss -antp) on the high-throughput socket grows and is not drained.
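A minimal configuration along the lines of the sketch below should be enough for step 2; the listen address, port, certificate paths, and the blackhole sink are placeholder assumptions, not the reporter's actual setup.

```sh
# Sketch of a minimal TLS-enabled socket source for the repro.
# Address, port, and certificate paths are placeholders.
cat > /tmp/vector-tls-repro.yaml <<'EOF'
sources:
  tls_in:
    type: socket
    mode: tcp
    address: 0.0.0.0:6514
    tls:
      enabled: true
      crt_file: /etc/vector/tls/server.crt
      key_file: /etc/vector/tls/server.key

sinks:
  drop:
    type: blackhole
    inputs: [tls_in]
EOF

vector --config /tmp/vector-tls-repro.yaml
```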
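For steps 5 and 6, something like the following shell sketch can open the idle TLS connections and watch the receive queue; the host, port, and connection count are assumptions chosen to exceed the core count in the example above.

```sh
# Open 64 idle TLS connections that never send data.
# The piped "sleep" keeps stdin open so openssl s_client stays connected.
for i in $(seq 1 64); do
  sleep 600 | openssl s_client -connect 127.0.0.1:6514 -quiet >/dev/null 2>&1 &
done

# Watch the receive queue (Recv-Q) on the high-throughput connection.
watch -n 1 'ss -antp | grep 6514'
```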
References
No response