Module: asyncpal.__init__
Class: ProcessPool
Inheritance: asyncpal.pool.Pool
The ProcessPool class for parallelism.
Here are properties exposed in the class:
Property | Methods | Description |
---|---|---|
cancelled_tasks | getter | No docstring. |
final_args | getter | No docstring. |
final_kwargs | getter | No docstring. |
finalizer | getter | No docstring. |
idle_timeout | getter, setter | No docstring. |
init_args | getter | No docstring. |
init_kwargs | getter | No docstring. |
initializer | getter | No docstring. |
is_broken | getter | No docstring. |
is_closed | getter | No docstring. |
is_terminated | getter | No docstring. |
max_tasks_per_worker | getter | No docstring. |
max_workers | getter | No docstring. |
mp_context | getter | No docstring. |
name | getter | No docstring. |
worker_type | getter | No docstring. |
workers | getter | No docstring. |
Here are methods exposed in the class:
- __init__
- check
- count_busy_workers
- count_free_workers
- count_pending_tasks
- count_workers
- join
- map
- map_all
- map_all_unordered
- map_unordered
- run
- shutdown
- spawn_max_workers
- spawn_workers
- starmap
- starmap_all
- starmap_all_unordered
- starmap_unordered
- submit
- test
- _cancel_tasks
- _cleanup_stored_futures
- _cleanup_task_queue
- _consume_message_queue
- _count_busy_workers
- _count_free_workers
- _count_pending_tasks
- _count_workers
- _create_lock
- _create_worker
- _drain_mp_task_queue
- _drain_task_queue
- _ensure_pool_integrity
- _filter_tasks
- _join_inactive_workers
- _join_workers
- _map_eager
- _map_eager_chunked
- _map_lazy
- _map_lazy_chunked
- _map_lazy_chunked_unordered
- _map_lazy_unordered
- _notify_workers_to_shutdown
- _on_worker_exception
- _on_worker_shutdown
- _setup
- _shutdown_filter_thread
- _shutdown_message_thread
- _spawn_filter_thread
- _spawn_message_thread
- _spawn_workers
- _submit_task
- _update_future
Initialization.
def __init__(self, max_workers=None, *, name='ProcessPool', idle_timeout=60, initializer=None, init_args=None, init_kwargs=None, finalizer=None, final_args=None, final_kwargs=None, max_tasks_per_worker=None, mp_context=None):
...
Parameter | Description |
---|---|
max_workers | the maximum number of workers. Defaults to the CPU count. On Windows, the value may not exceed 60. |
name | the name of the pool. Defaults to the class name |
idle_timeout | None or a timeout value in seconds. The idle timeout is how long an inactive worker may sleep before it closes, which lets the pool shrink when there are few tasks. Set it to None and the pool will never shrink. |
initializer | a function that will get called at the start of each worker |
init_args | arguments (list) to pass to the initializer |
init_kwargs | keyword arguments (dict) to pass to the initializer |
finalizer | a function that will get called when the worker is going to close |
final_args | arguments (list) to pass to the finalizer |
final_kwargs | keyword arguments (dict) to pass to the finalizer |
max_tasks_per_worker | Maximum number of tasks a worker is allowed to do before it closes. |
mp_context | the multiprocessing context. Defaults to multiprocessing.get_context("spawn") |
Check the pool
def check(self):
...
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised if the pool is closed |
BrokenPoolError | raised if the pool is broken |
Returns the number of busy workers
def count_busy_workers(self):
...
Returns the number of free workers
def count_free_workers(self):
...
Returns the number of pending tasks
def count_pending_tasks(self):
...
Returns the number of workers that are alive
def count_workers(self):
...
Join the workers, i.e., wait for the workers to finish their work, then close them
def join(self, timeout=None):
...
Parameter | Description |
---|---|
timeout | None or a number representing seconds. |
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when timeout expires |
Perform a Map operation lazily and return an iterator that iterates over the results. Beware: a remote exception will be re-raised here
def map(self, target, *iterables, chunk_size=1, buffer_size=1, keep_order=True, timeout=None):
...
Parameter | Description |
---|---|
target | callable |
iterables | iterables to pass to the target |
chunk_size | max length for a chunk |
buffer_size | the size of the buffer. A bigger size consumes more memory, but the overall operation will be faster |
keep_order | whether the original order should be kept or not |
timeout | None or a timeout (int or float) value in seconds |
Returns an iterator
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | any remote exception |
Perform a Map operation eagerly and return an iterator that iterates over the results. Using this method instead of the map method might cause high memory usage. Beware: a remote exception will be re-raised here
def map_all(self, target, *iterables, chunk_size=1, keep_order=True, timeout=None):
...
Parameter | Description |
---|---|
target | callable |
iterables | iterables to pass to the target |
chunk_size | max length for a chunk |
keep_order | whether the original order should be kept or not |
timeout | None or a timeout (int or float) value in seconds |
Returns an iterator that iterates over the results.
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | any remote exception |
Same as map_all with 'keep_order' set to False
def map_all_unordered(self, target, *iterables, chunk_size=1, timeout=None):
...
Same as map with 'keep_order' set to False
def map_unordered(self, target, *iterables, chunk_size=1, buffer_size=1, timeout=None):
...
Submit the task to the pool, and return the result (or re-raise the exception raised by the callable)
def run(self, target, /, *args, **kwargs):
...
Parameter | Description |
---|---|
target | callable |
args | args to pass to the callable |
kwargs | kwargs to pass to the callable |
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | exception that might be raised by the task itself |
Close the pool by joining workers and cancelling pending tasks. Note that cancelled tasks can be retrieved via the cancelled_tasks property.
def shutdown(self):
...
Returns False if the pool has already been closed, else returns True.
Spawn the maximum number of workers
def spawn_max_workers(self):
...
Spawn a specific number of workers, or else as many workers as are currently needed
def spawn_workers(self, n=None):
...
Parameter | Description |
---|---|
n | None or an integer. |
Returns the number of spawned workers
Perform a Starmap operation lazily and return an iterator that iterates over the results. Beware: a remote exception will be re-raised here
def starmap(self, target, iterable, chunk_size=1, buffer_size=1, keep_order=True, timeout=None):
...
Parameter | Description |
---|---|
target | callable |
iterable | sequence of args to pass to the target |
chunk_size | max length for a chunk |
buffer_size | the size of the buffer. A bigger size consumes more memory, but the overall operation will be faster |
keep_order | whether the original order should be kept or not |
timeout | None or a timeout (int or float) value in seconds |
Returns an iterator that iterates over the results.
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | any remote exception |
Perform a Starmap operation eagerly and return an iterator that iterates over the results. Using this method instead of the starmap method might cause high memory usage. Beware: a remote exception will be re-raised here
def starmap_all(self, target, iterable, chunk_size=1, keep_order=True, timeout=None):
...
Parameter | Description |
---|---|
target | callable |
iterable | sequence of args to pass to the target |
chunk_size | max length for a chunk |
keep_order | whether the original order should be kept or not |
timeout | None or a timeout (int or float) value in seconds |
Returns an iterator that iterates over the results.
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | any remote exception |
Same as starmap_all with 'keep_order' set to False
def starmap_all_unordered(self, target, iterable, chunk_size=1, timeout=None):
...
Same as starmap with 'keep_order' set to False
def starmap_unordered(self, target, iterable, chunk_size=1, buffer_size=1, timeout=None):
...
Submit the task to the pool, and return a future object
def submit(self, target, /, *args, **kwargs):
...
Parameter | Description |
---|---|
target | callable |
args | args to pass to the callable |
kwargs | kwargs to pass to the callable |
The table below outlines exceptions that may occur.
Exception | Circumstance |
---|---|
RuntimeError | raised when the pool is closed |
BrokenPoolError | raised when the pool is broken |
Exception | exception that might be raised by the task itself |
Test the pool by creating another pool with the same configuration and running a small computation on it to ensure that it won't break. This method may raise a BrokenPoolError.
def test(self):
...
No docstring
def _cancel_tasks(self):
...
No docstring
def _cleanup_stored_futures(self):
...
No docstring
def _cleanup_task_queue(self):
...
No docstring
def _consume_message_queue(self):
...
No docstring
def _count_busy_workers(self):
...
No docstring
def _count_free_workers(self):
...
No docstring
def _count_pending_tasks(self):
...
No docstring
def _count_workers(self):
...
No docstring
def _create_lock(self):
...
No docstring
def _create_worker(self):
...
No docstring
def _drain_mp_task_queue(self):
...
No docstring
def _drain_task_queue(self):
...
No docstring
def _ensure_pool_integrity(self):
...
No docstring
def _filter_tasks(self):
...
No docstring
def _join_inactive_workers(self):
...
No docstring
def _join_workers(self, timeout=None):
...
No docstring
def _map_eager(self, target, iterable, keep_order, timeout):
...
No docstring
def _map_eager_chunked(self, target, iterable, chunk_size, keep_order, timeout):
...
No docstring
def _map_lazy(self, target, iterable, buffer_size, timeout):
...
No docstring
def _map_lazy_chunked(self, target, iterable, chunk_size, buffer_size, timeout):
...
No docstring
def _map_lazy_chunked_unordered(self, target, iterable, chunk_size, buffer_size, timeout):
...
No docstring
def _map_lazy_unordered(self, target, iterable, buffer_size, timeout):
...
No docstring
def _notify_workers_to_shutdown(self):
...
No docstring
def _on_worker_exception(self, worker_id, exc):
...
No docstring
def _on_worker_shutdown(self, worker_id):
...
No docstring
def _setup(self):
...
No docstring
def _shutdown_filter_thread(self):
...
No docstring
def _shutdown_message_thread(self):
...
No docstring
def _spawn_filter_thread(self):
...
No docstring
def _spawn_message_thread(self):
...
No docstring
def _spawn_workers(self, n=None):
...
No docstring
def _submit_task(self, target, *args, **kwargs):
...
No docstring
def _update_future(self, message):
...