Tags: python, python-3.x, concurrent.futures

Python concurrent.futures - TypeError: zip argument #1 must support iteration


I want to process MongoDB documents in batches of 1000 using multiprocessing. However, the code snippet below raises TypeError: zip argument #1 must support iteration

Code:

import os
import traceback
import concurrent.futures


def documents_processing(skip):
    conn = get_connection()
    db = conn["db_name"]

    print("Process::{} -- db.Transactions.find(no_cursor_timeout=True).skip({}).limit(10000)".format(os.getpid(), skip))
    documents = db.Transactions.find(no_cursor_timeout=True).skip(skip).limit(10000)
    # Do some processing in mongodb


max_workers = 20


def skip_list():
    for i in range(0, 100000, 10000):
        yield [j for j in range(i, i + 10000, 1000)]


def main_f():
    try:
        with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
            executor.map(documents_processing, skip_list)
    except Exception:
        print("exception:", traceback.format_exc())

main_f()

Error traceback:

(rpc_venv) [user@localhost ver2_mt]$ python main_mongo_v3.py 
exception: Traceback (most recent call last):
  File "main_mongo_v3.py", line 113, in main_f
    executor.map(documents_processing, skip_list)
  File "/usr/lib64/python3.6/concurrent/futures/process.py", line 496, in map
    timeout=timeout)
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 575, in map
    fs = [self.submit(fn, *args) for args in zip(*iterables)]
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 575, in <listcomp>
    fs = [self.submit(fn, *args) for args in zip(*iterables)]
  File "/usr/lib64/python3.6/concurrent/futures/process.py", line 137, in _get_chunks
    it = zip(*iterables)
TypeError: zip argument #1 must support iteration

How to fix this error? Thanks.


Solution

  • Invoke the skip_list function so that a generator is passed to map.

    Currently, you're passing the function object itself as the second argument, not an iterable.

    executor.map(documents_processing, skip_list())
    

    Note also that your original skip_list yields lists of ten offsets, while documents_processing expects a single integer. Since each process retrieves 10,000 documents starting at a given offset, you can declare skip_list as:

    def skip_list():
        for i in range(0, 100000, 10000):
            yield i
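
    Putting it together, here is a minimal runnable sketch of the corrected pattern. The MongoDB work is replaced with a stub that just echoes the offset, and max_workers is an arbitrary small value:

    ```python
    import concurrent.futures


    def skip_list():
        # Yield one starting offset per 10k-document batch.
        for i in range(0, 100000, 10000):
            yield i


    def documents_processing(skip):
        # Stand-in for the real MongoDB processing; returns the offset
        # so the result order can be observed.
        return skip


    if __name__ == "__main__":
        with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
            # Note the call: skip_list() produces a generator (an iterable).
            # Passing bare skip_list hands map() a function object, which
            # is what triggers the TypeError.
            results = list(executor.map(documents_processing, skip_list()))
        print(results)  # [0, 10000, 20000, ..., 90000]
    ```

    Since range objects are already iterable, you could even drop skip_list entirely and pass range(0, 100000, 10000) straight to executor.map.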