Just call Executor.shutdown:

shutdown(wait=True)

Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError.

If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed.
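For example, a minimal sketch of that behavior (the task function and its arguments are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # placeholder work function for illustration
    return n * n

executor = ThreadPoolExecutor(max_workers=4)
futures = [executor.submit(task, i) for i in range(10)]

# Blocks until every pending future has finished, then frees the workers.
executor.shutdown(wait=True)

# Every future is guaranteed to be done at this point.
print([f.result() for f in futures])
```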

However, if you keep track of your futures in a list, you can wait for them to complete without shutting the executor down (keeping it available for later use) by calling the futures.wait() function:

concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)

Wait for the Future instances (possibly created by different Executor instances) given by fs to complete. Returns a named 2-tuple of sets. The first set, named done, contains the futures that completed (finished or were cancelled) before the wait completed. The second set, named not_done, contains uncompleted futures.

Note that if you don't provide a timeout, it waits until all futures have completed.
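A sketch of this pattern (the work function is made up for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

def work(n):
    time.sleep(0.1 * n)
    return n

executor = ThreadPoolExecutor(max_workers=2)
fs = [executor.submit(work, n) for n in range(4)]

# With no timeout, wait() blocks until every future has completed.
done, not_done = wait(fs, return_when=ALL_COMPLETED)

assert not_done == set()  # nothing is left unfinished
print(sorted(f.result() for f in done))  # [0, 1, 2, 3]

# The executor is still usable afterwards:
executor.submit(work, 0)
executor.shutdown(wait=True)
```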

You can also use futures.as_completed() instead; however, you'd have to iterate over it.
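A short as_completed() sketch (work function invented for illustration): unlike wait(), it yields each future as it finishes, so you iterate over it.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    time.sleep(0.05 * (5 - n))  # later submissions finish sooner
    return n

with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(work, n) for n in range(4)]
    # Iterating drains the futures in completion order, not submission order.
    results = [f.result() for f in as_completed(futures)]

print(results)  # typically [3, 2, 1, 0] given the sleeps above
```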

Answer from Bakuriu on Stack Overflow

2 of 3
36

As stated before, one can use Executor.shutdown(wait=True), but also pay attention to the following note in the documentation:

You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True):

from concurrent.futures import ThreadPoolExecutor
import shutil

with ThreadPoolExecutor(max_workers=4) as e:
    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
    e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
python - How do I wait for ThreadPoolExecutor.map to finish - Stack Overflow

Top answer
1 of 3
16

The call to ThreadPoolExecutor.map does not block until all of its tasks are complete. Use wait to do this.

from concurrent.futures import wait, ALL_COMPLETED
...

futures = [pool.submit(fn, args) for args in arg_list]
wait(futures, timeout=whatever, return_when=ALL_COMPLETED)  # ALL_COMPLETED is actually the default
do_other_stuff()

You could also call list() on the generator returned by pool.map to force the evaluation (which is what you're doing in your original example). If you're not actually using the values returned from the tasks, though, wait is the way to go.
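For example, forcing the lazy map iterator (sketch with a made-up function):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def square(n):
    time.sleep(0.05)
    return n * n

pool = ThreadPoolExecutor(max_workers=2)
results = pool.map(square, range(4))  # returns immediately: a lazy generator

# list() consumes the generator, blocking until every result is available.
values = list(results)
print(values)  # [0, 1, 4, 9]
pool.shutdown(wait=True)
```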

2 of 3
12

It's true that Executor.map() will not wait for all futures to finish, because it returns a lazy iterator, as @MisterMiyagi said.

But we can accomplish this by using with:

import time

from concurrent.futures import ThreadPoolExecutor

def hello(i):
    time.sleep(i)
    print(i)

with ThreadPoolExecutor(max_workers=2) as executor:
    executor.map(hello, [1, 2, 3])
print("finish")

# output
# 1
# 2
# 3
# finish

As you can see, finish is printed after 1, 2, 3. It works because Executor has an __exit__() method:

def __exit__(self, exc_type, exc_val, exc_tb):
    self.shutdown(wait=True)
    return False

The shutdown() method of ThreadPoolExecutor is:

def shutdown(self, wait=True, *, cancel_futures=False):
    with self._shutdown_lock:
        self._shutdown = True
        if cancel_futures:
            # Drain all work items from the queue, and then cancel their
            # associated futures.
            while True:
                try:
                    work_item = self._work_queue.get_nowait()
                except queue.Empty:
                    break
                if work_item is not None:
                    work_item.future.cancel()

        # Send a wake-up to prevent threads calling
        # _work_queue.get(block=True) from permanently blocking.
        self._work_queue.put(None)
    if wait:
        for t in self._threads:
            t.join()
shutdown.__doc__ = _base.Executor.shutdown.__doc__

So by using with, we can get the ability to wait until all futures finish.

python - How do I wait when all ThreadPoolExecutor threads are busy? - Stack Overflow

Top answer
1 of 2
3

One approach might be to keep track of your currently running threads via a set of Futures:

    import time
    import concurrent.futures

    active_threads = set()
    def pop_future(future):
        # set.pop() takes no argument; discard the finished future instead
        active_threads.discard(future)

    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
        while True:
            while len(active_threads) >= CONCURRENCY:
                time.sleep(0.1)  # or whatever
            message = pull_from_queue()
            future = executor.submit(do_work_for_message, message)
            active_threads.add(future)
            future.add_done_callback(pop_future)

A more sophisticated approach might be to have the done_callback be the thing that triggers a queue pull, rather than polling and blocking, but then you need to fall back to polling the queue if the workers manage to get ahead of it.
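One way to sketch that callback-driven idea without polling is to gate submissions with a threading.BoundedSemaphore that the done callback releases. This is an assumption of mine, not the original answer's code, and the work function and message source are stand-ins:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

CONCURRENCY = 2  # illustrative value

def do_work_for_message(message):
    # placeholder work function for illustration
    return message * 2

slots = threading.BoundedSemaphore(CONCURRENCY)
results = []

def release_slot(future):
    results.append(future.result())  # list.append is thread-safe
    slots.release()                  # frees a submission slot

with ThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
    for message in range(6):  # stand-in for pull_from_queue()
        slots.acquire()       # blocks while all workers are busy
        future = executor.submit(do_work_for_message, message)
        future.add_done_callback(release_slot)

print(sorted(results))  # [0, 2, 4, 6, 8, 10]
```

Because acquire() blocks the submitting thread, the producer can never get more than CONCURRENCY tasks ahead of the workers.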

2 of 2
2

Based on @Samwise's answer (https://stackoverflow.com/a/73396000/8388869), I have extended ThreadPoolExecutor:

import time
from concurrent.futures import Future, ThreadPoolExecutor


class AvailableThreadPoolExecutor(ThreadPoolExecutor):
    """ThreadPoolExecutor that keeps track of the number of available workers.

    Refs:
        inspired by https://stackoverflow.com/a/73396000/8388869
    """

    def __init__(
        self, max_workers=None, thread_name_prefix="", initializer=None, initargs=()
    ):
        super().__init__(max_workers, thread_name_prefix, initializer, initargs)
        self._running_worker_futures: set[Future] = set()

    @property
    def available_workers(self) -> int:
        """the number of available workers"""
        return self._max_workers - len(self._running_worker_futures)

    def wait_for_available_worker(self, timeout: float | None = None) -> None:
        """wait until there is an available worker

        Args:
            timeout: the maximum time to wait in seconds. If None, wait indefinitely.

        Raises:
            TimeoutError: if the timeout is reached.
        """

        start_time = time.monotonic()
        while True:
            if self.available_workers > 0:
                return
            if timeout is not None and time.monotonic() - start_time > timeout:
                raise TimeoutError
            time.sleep(0.1)

    def submit(self, fn, /, *args, **kwargs):
        f = super().submit(fn, *args, **kwargs)
        self._running_worker_futures.add(f)
        f.add_done_callback(self._running_worker_futures.remove)
        return f

It can be used like this:

with AvailableThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
    while True:
        executor.wait_for_available_worker()
        message = pull_from_queue()
        executor.submit(do_work_for_message, message)