
Need a way to limit threads per remote #88

Open
ztane opened this issue Mar 26, 2015 · 5 comments

Comments

@ztane

ztane commented Mar 26, 2015

We have an OLAP application on Pyramid + Waitress, and we are using AJAX to fetch the results. Sometimes the computations are not that swift (especially if PostgreSQL picks a bad query plan); the problem is that one client doing the lengthy operations, or reloading the page, can easily exhaust all the worker threads. There ought to be a way to configure a policy for whether a request is dispatched on yet another thread or queued until the previous requests from that remote address have been fulfilled.
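
For what it's worth, a crude approximation is possible in application code today. The sketch below is a hypothetical WSGI middleware (the `PerRemoteLimiter` name and the `max_per_remote` parameter are made up for illustration, and it trusts `REMOTE_ADDR` as-is); it sheds excess requests from one address with a 503 rather than queueing them, because waiting inside the middleware would still occupy one of the worker threads, which is exactly why the policy really belongs in the dispatcher.

```python
import threading
from collections import defaultdict


class PerRemoteLimiter:
    """Hypothetical WSGI middleware: reject requests from a remote address
    that already has `max_per_remote` requests in flight.

    It can only shed load with an error response; blocking here would still
    occupy a waitress worker thread, which is the very problem, so real
    per-remote queueing has to happen in the dispatcher itself.
    """

    def __init__(self, app, max_per_remote=2):
        self.app = app
        self.max_per_remote = max_per_remote
        self._lock = threading.Lock()
        self._in_flight = defaultdict(int)

    def __call__(self, environ, start_response):
        remote = environ.get('REMOTE_ADDR', 'unknown')
        with self._lock:
            if self._in_flight[remote] >= self.max_per_remote:
                start_response('503 Service Unavailable',
                               [('Content-Type', 'text/plain'),
                                ('Retry-After', '5')])
                return [b'Too many concurrent requests from this address\n']
            self._in_flight[remote] += 1
        try:
            # The slot is released when the application returns; a stricter
            # version would hold it until the response body is fully sent.
            return self.app(environ, start_response)
        finally:
            with self._lock:
                self._in_flight[remote] -= 1
                if self._in_flight[remote] <= 0:
                    del self._in_flight[remote]
```

In Pyramid this could wrap the WSGI app returned by `config.make_wsgi_app()` before it is handed to waitress.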

@digitalresistor
Member

I don't think that waitress is the right place to do this. I would highly recommend using some sort of reverse proxy in front of waitress that is better designed to deal with such scenarios.

@ztane
Author

ztane commented Mar 27, 2015

The reverse proxy would not be in Python, so this would be much harder to implement there.

@digitalresistor
Member

http://opensource.adnovum.ch/mod_qos/ implements this, so it's a simple config flag for Apache.

@ztane
Author

ztane commented Mar 28, 2015

Waitress specifically works like this: "When a channel determines the client has sent at least one full valid HTTP request, it schedules a “task” with a “thread dispatcher”. The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel’s output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available." Thus it would just be a matter of allowing a queueing policy to be configured, with the supplied default behaving exactly as it does now.
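
As an illustration only (the class and method names below are invented; this is not waitress's actual dispatcher API), such a pluggable policy might sit between the channel and the thread pool and decide, per task, whether to dispatch it immediately or park it in a per-remote queue:

```python
import threading
from collections import defaultdict, deque


class PerRemoteQueuePolicy:
    """Hypothetical queueing policy of the kind proposed above.

    At most `max_per_remote` tasks per remote address run at once; surplus
    tasks wait in a per-remote queue without occupying a worker thread.
    """

    def __init__(self, run_on_worker, max_per_remote=2):
        self._run_on_worker = run_on_worker  # hands a task to the thread pool
        self._max = max_per_remote
        self._lock = threading.Lock()
        self._running = defaultdict(int)
        self._waiting = defaultdict(deque)

    def schedule(self, remote, task):
        """Called when a channel has a full request ready for dispatch."""
        with self._lock:
            if self._running[remote] < self._max:
                self._running[remote] += 1
                dispatch = True
            else:
                self._waiting[remote].append(task)
                dispatch = False
        if dispatch:
            self._run_on_worker(remote, task)

    def task_finished(self, remote):
        """Called by the worker when a task for `remote` completes."""
        with self._lock:
            queue = self._waiting[remote]
            if queue:
                task = queue.popleft()  # keep the slot, run the next one
            else:
                task = None
                self._running[remote] -= 1
                if self._running[remote] == 0:
                    del self._running[remote]
                    self._waiting.pop(remote, None)
        if task is not None:
            self._run_on_worker(remote, task)
```

The default policy would simply forward every task to the pool unconditionally, preserving the current behaviour.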

@ztane
Author

ztane commented Mar 28, 2015

Also, this is not possible to do on the front side, nor would it be comparable: Apache cannot differentiate between long requests served over slow links and requests that actually block a worker.
