The code below executes as expected: it reports a total finish time of almost zero because it doesn't wait for the threads to finish their jobs.
import concurrent.futures
import time

start = time.perf_counter()

def do_something(seconds):
    print(f'Sleeping {seconds} second(s)...')
    time.sleep(seconds)
    return f'Done Sleeping...{seconds}'

executor = concurrent.futures.ThreadPoolExecutor()
secs = [10, 4, 3, 2, 1]
fs = [executor.submit(do_something, sec) for sec in secs]

finish = time.perf_counter()
print(f'Finished in {round(finish-start, 2)} second(s)')
But with the with statement it does wait:
with concurrent.futures.ThreadPoolExecutor() as executor:
    secs = [10, 4, 3, 2, 1]
    fs = [executor.submit(do_something, sec) for sec in secs]
Why? What is the reason that with has this behavior with multithreading?
Using a concurrent.futures.Executor in a with statement is equivalent to calling Executor.shutdown after using it, causing the executor to wait for all pending tasks to complete. An Executor used in a with statement also guarantees a proper shutdown of concurrent tasks even if an error occurs inside the with block.
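Conceptually, the with block in the question behaves like the following explicit try/finally sketch; this is an illustration of the equivalence, not the exact library source:

import concurrent.futures
import time

def do_something(seconds):
    print(f'Sleeping {seconds} second(s)...')
    time.sleep(seconds)
    return f'Done Sleeping...{seconds}'

# Roughly what the with statement does behind the scenes:
executor = concurrent.futures.ThreadPoolExecutor()
try:
    secs = [10, 4, 3, 2, 1]
    fs = [executor.submit(do_something, sec) for sec in secs]
finally:
    # Leaving the with block triggers shutdown(wait=True), which blocks
    # until every submitted future has finished, even if an exception
    # was raised inside the block.
    executor.shutdown(wait=True)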
Executor.shutdown(wait=True)

Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError. [...]

You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True): [...]
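So the first snippet can be made to wait without a with block by calling shutdown explicitly. A minimal sketch, reusing the do_something function from the question:

import concurrent.futures
import time

def do_something(seconds):
    print(f'Sleeping {seconds} second(s)...')
    time.sleep(seconds)
    return f'Done Sleeping...{seconds}'

start = time.perf_counter()

executor = concurrent.futures.ThreadPoolExecutor()
secs = [10, 4, 3, 2, 1]
fs = [executor.submit(do_something, sec) for sec in secs]

# Block until all pending futures are done, then release the workers.
executor.shutdown(wait=True)

finish = time.perf_counter()
print(f'Finished in {round(finish - start, 2)} second(s)')  # roughly 10 seconds now

# Submitting after shutdown raises RuntimeError, as the docs state.
try:
    executor.submit(do_something, 1)
except RuntimeError as exc:
    print(f'submit after shutdown failed: {exc}')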