I'm trying to understand how to use multiprocessing in my case. I have two functions: def all_url() and def new_file(). The first returns a list containing a lot of elements. The second iterates over that list in a 'for' loop. I want to use multiprocessing for the second function, new_file(), because the list returned by all_url() is too big.
Example of code:
def all_url():
    # Here I append elements to urllist
    return urllist

def new_file():
    for each in all_url():
        # There's a lot of code. Each iteration of the loop creates a new html file.

new_file()
You would need to do something like this:
from multiprocessing.pool import Pool

def all_url():
    # Here I append elements to urllist
    return urllist

def new_file():
    # Pool() defaults to one worker process per CPU core;
    # map() splits the list across the workers and blocks
    # until every element has been processed.
    with Pool() as pool:
        pool.map(new_file_process_url, all_url())

def new_file_process_url(each):
    # Creates one html file for the given url.
    ...

if __name__ == '__main__':
    new_file()
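To show the shape end to end, here is a runnable sketch. The URL list and the per-URL work are hypothetical stand-ins (the original function bodies weren't shown); each worker just computes the name of the html file it would create:

```python
from multiprocessing.pool import Pool

def all_url():
    # Stand-in for the real function: returns a list of URLs.
    return [f"https://example.com/page{i}" for i in range(8)]

def new_file_process_url(each):
    # Stand-in for the per-URL work: derive the name of the
    # html file that would be created for this URL.
    return each.rsplit("/", 1)[-1] + ".html"

def new_file():
    # Pool() starts one worker process per CPU core by default;
    # map() distributes the list across them and collects the
    # results in the original order.
    with Pool() as pool:
        return pool.map(new_file_process_url, all_url())

if __name__ == '__main__':
    print(new_file())
```

Note that the `if __name__ == '__main__':` guard is required: on platforms that use the spawn start method (e.g. Windows), each worker process re-imports the module, and without the guard every worker would try to start a pool of its own.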