I am using multiprocessing.Process
to run a function in multiple processes at a time, and I ran into a problem with excessive CPU and RAM usage: it keeps growing the longer the script runs.
I found a suggested fix, which is to pass maxtasksperchild=1
to multiprocessing.Pool.
So I am curious whether it is possible to do the same with multiprocessing.Process,
or whether I should instead switch to multiprocessing.Pool
throughout the code.
Right now I am starting the function like this:
import multiprocessing

def main(kish, proxys, proxystr):
    pass  # some code

if __name__ == "__main__":
    alg = []
    for kish in range(50):
        gf = multiprocessing.Process(target=main, args=(kish, proxys, proxystr))
        gf.start()
        alg.append(gf)
    for i in alg:
        i.join()
Although I am not certain that switching to Pool
with maxtasksperchild=1
will necessarily provide the performance improvement you expect... Using your current example code, you can switch to multiprocessing.Pool
with some very minor adjustments.
For example:
import multiprocessing

def main(kish, proxys, proxystr):
    pass  # some code

if __name__ == "__main__":
    with multiprocessing.Pool(processes=50, maxtasksperchild=1) as pool:
        results = [pool.apply_async(main, (kish, proxys, proxystr)) for kish in range(50)]
        for r in results:
            r.get()  # wait for each task; re-raises any worker exception
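To see why maxtasksperchild=1 helps with a memory leak, here is a minimal, self-contained sketch (using a placeholder work function and made-up task counts, not your actual main): each worker process is retired after completing a single task and a fresh one is spawned, so any memory a task leaked is returned to the OS when its process exits. Returning os.getpid() from each task makes the respawning visible.

```python
import multiprocessing
import os

def work(task_id):
    # Return the worker's PID to show which process ran this task.
    return os.getpid()

def run_pool():
    # maxtasksperchild=1 replaces each worker after one completed task,
    # so 8 tasks on 4 workers run in 8 different processes.
    with multiprocessing.Pool(processes=4, maxtasksperchild=1) as pool:
        return pool.map(work, range(8))

if __name__ == "__main__":
    pids = run_pool()
    print(len(set(pids)))  # 8 distinct PIDs, one fresh process per task
```

Without maxtasksperchild you would see at most 4 distinct PIDs here, since the same long-lived workers handle every task; the trade-off is the extra cost of spawning a new process per task.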