In the example code, I would like to run 4 functions in parallel and return list values for each. Is the multiprocessing package appropriate for this task? If so, how do I implement it?
Example Code:
from multiprocessing import Pool
def func_a(num):
    return [1 + num, 2 + num, 3 + num]
def func_b(num):
    return [10 + num, 11 + num, 12 + num]
def func_c(num):
    return [20 + num, 21 + num, 22 + num]
def func_d(num):
    return [30 + num, 31 + num, 32 + num]
if __name__ == '__main__':
    pool = Pool(processes=2)
    list_a = ???
    list_b = ???
    list_c = ???
    list_d = ???
    full_list = []
    for item in list_a:
        full_list.append(item)
    for item in list_b:
        full_list.append(item)
    for item in list_c:
        full_list.append(item)
    for item in list_d:
        full_list.append(item)
Any information much appreciated. Thanks in advance.
As explained in Process Pools, you need to submit all of the jobs to the pool, and then wait for all of the results.
I'm not sure what arguments you want to pass to these functions, since they aren't shown in your question or your code, so I'll just make up something arbitrary.
import fractions
import math

if __name__ == '__main__':
    pool = Pool(processes=2)
    result_a = pool.apply_async(func_a, (23,))
    result_b = pool.apply_async(func_b, (42,))
    result_c = pool.apply_async(func_c, (fractions.Fraction(1, 2),))
    result_d = pool.apply_async(func_d, (1j * math.pi,))
    full_list = []
    for item in result_a.get():
        full_list.append(item)
    for item in result_b.get():
        full_list.append(item)
    for item in result_c.get():
        full_list.append(item)
    for item in result_d.get():
        full_list.append(item)
You can dramatically simplify this in multiple ways (e.g., each of those for loops can be replaced by a single call to extend, or you can just write full_list = result_a.get() + result_b.get() + result_c.get() + result_d.get()), but this is the smallest change to your existing code that works. (And if you really want to simplify this code, I think you'd be happier with concurrent.futures.ProcessPoolExecutor in the first place.)
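For comparison, here's a rough sketch of the same task with ProcessPoolExecutor. The argument values are made up, just like in the example above, and I've folded in the extend simplification:

```python
from concurrent.futures import ProcessPoolExecutor

def func_a(num):
    return [1 + num, 2 + num, 3 + num]

def func_b(num):
    return [10 + num, 11 + num, 12 + num]

def func_c(num):
    return [20 + num, 21 + num, 22 + num]

def func_d(num):
    return [30 + num, 31 + num, 32 + num]

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as executor:
        # Submit all four jobs first, so they can run in parallel...
        futures = [executor.submit(f, n)
                   for f, n in [(func_a, 23), (func_b, 42),
                                (func_c, 1), (func_d, 2)]]
        # ...then collect the results in order. result() blocks until
        # that particular job has finished.
        full_list = []
        for future in futures:
            full_list.extend(future.result())
        print(full_list)
```

The with statement handles shutting down the pool for you, which is one less thing to get wrong than with multiprocessing.Pool.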