
Python: Iterate over many large files simultaneously, get every k-th line


As in the title - I have many very large text files (>10GB) that share the same repetitive structure. I want to filter some information out of them, so I would like to yield every k-th line from each file while iterating over them all at the same time. I have tried itertools.islice and itertools.izip, but I cannot put them together...


Solution

  • Given that you talk about using itertools.izip(), I'm going to assume you are using Python 2 here.

    Use itertools.islice() to skip lines within each file, and the itertools.izip_longest() function to lazily combine reading them in parallel while also handling files that are shorter than the others:

    from itertools import islice, izip_longest

    filenames = [fname1, fname2, fname3]
    open_files = [open(fname) for fname in filenames]
    kth_slice_files = (islice(f, None, None, k) for f in open_files)
    try:
        for kth_lines in izip_longest(*kth_slice_files, fillvalue=''):
            # do something with those combined lines
            pass
    finally:
        # make sure every file object is closed again
        for f in open_files:
            f.close()


    islice(fileobj, None, None, k) starts at the first line, then skips k - 1 lines each time, giving you lines 1, 1 + k, 1 + 2*k, and so on. If you need to start at a later line, replace the first None with that starting value (a zero-based index, so 2 starts at the third line).
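
    A minimal sketch of that stepping behaviour, using an in-memory list as a stand-in for a file object (the names lines and k here are purely illustrative):

    from itertools import islice

    lines = ['line %d\n' % i for i in range(1, 11)]  # stand-in for a 10-line file
    k = 3

    # every k-th line, starting at the first: lines 1, 4, 7, 10
    print(list(islice(iter(lines), None, None, k)))

    # start at the third line instead (zero-based index 2): lines 3, 6, 9
    print(list(islice(iter(lines), 2, None, k)))

    If you are on Python 3 instead, the same approach works with itertools.zip_longest() (izip_longest() was renamed), and contextlib.ExitStack gives a tidier way to close all the files; a sketch, with placeholder filenames and an illustrative k:

    from contextlib import ExitStack
    from itertools import islice, zip_longest

    filenames = ['file1.txt', 'file2.txt', 'file3.txt']  # placeholder names
    k = 5  # illustrative step

    with ExitStack() as stack:
        # ExitStack closes every file on exit, even on an exception
        open_files = [stack.enter_context(open(fname)) for fname in filenames]
        kth_slice_files = (islice(f, None, None, k) for f in open_files)
        for kth_lines in zip_longest(*kth_slice_files, fillvalue=''):
            # do something with those combined lines
            pass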