I have Python code that uses the subprocess package to run a command in a shell:

subprocess.call('mycode.py', shell=inshell)
When I run the top command, I see that I am only using ~30% or less of the CPU. I realize some commands may be disk-bound rather than CPU-bound, so I timed the runs as well: the code seems to run slower on this Linux system than on a 2-core Mac.
How do I parallelize this with the threading or multiprocessing package so that I can use multiple CPU cores on said Linux system?
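Roughly, I imagine something like the sketch below, where each pool worker launches and waits on one subprocess (the command list and worker count here are made up for illustration):

from multiprocessing import Pool
import subprocess

# Hypothetical commands; in reality these would be my actual script invocations.
commands = ['python mycode.py input1', 'python mycode.py input2']

def run_command(cmd):
    # Each pool worker launches one subprocess and waits for its exit code.
    return subprocess.call(cmd, shell=True)

if __name__ == '__main__':
    pool = Pool(processes=4)  # placeholder core count
    exit_codes = pool.map(run_command, commands)
    pool.close()
    pool.join()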
A little change to FMc's answer:

from multiprocessing import Pool
import time

work_items = [(1, 'A', True), (2, 'X', False), (3, 'B', False)]

def worker(tup):
    # Dummy workload: print work_items 5,000,000 times.
    for i in range(5000000):
        print(work_items)
    return

pool = Pool(processes=8)
start = time.time()
work_results = pool.map(worker, work_items)
end = time.time()
print(end - start)
pool.close()
pool.join()
The code above takes 53.60 seconds. The trick below, however, takes 27.34 seconds.
from multiprocessing import Pool
from functools import partial
import time

work_items = [(1, 'A', True), (2, 'X', False), (3, 'B', False)]

def worker(tup):
    for i in range(5000000):
        print(work_items)
    return

def parallel_attribute(worker):
    def easy_parallelize(worker, work_items):
        pool = Pool(processes=8)
        work_results = pool.map(worker, work_items)
        pool.close()
        pool.join()
        return work_results
    # Bind the worker so the parallel version can be called with just the items.
    return partial(easy_parallelize, worker)

start = time.time()
worker.parallel = parallel_attribute(worker)
worker.parallel(work_items)
end = time.time()
print(end - start)
Two comments: 1) I didn't see much of a difference when using multiprocessing.dummy. 2) Using Python's partial function (via a nested-scope wrapper) works like a wonderful wrapper that cuts the computation time in half. Reference: https://www.binpress.com/tutorial/simple-python-parallelism/121
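For reference, multiprocessing.dummy exposes the same Pool interface backed by threads rather than processes, so trying it is a one-line change (a minimal sketch reusing the worker above; the pool size of 8 just mirrors the earlier code):

from multiprocessing.dummy import Pool  # same API as multiprocessing.Pool, but threads

work_items = [(1, 'A', True), (2, 'X', False), (3, 'B', False)]

def worker(tup):
    for i in range(5000000):
        print(work_items)
    return

pool = Pool(processes=8)  # 8 threads instead of 8 processes
work_results = pool.map(worker, work_items)
pool.close()
pool.join()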
Also, thank you, FMc!