Tags: python, file-io, io-buffering

What is the difference between the buffering argument to open() and the hardcoded readahead buffer size used when iterating through a file?


Inspired by this question, I'm wondering exactly what the optional buffering argument to Python's open() function does. From looking at the source, I see that buffering is passed into setvbuf to set the buffer size for the stream (and that it does nothing on a system without setvbuf, which the docs confirm).

However, when you iterate over a file, there is a constant called READAHEAD_BUFSIZE that appears to define how much data is read at a time (this constant is defined in Objects/fileobject.c in the CPython 2 source).

My question is exactly how the buffering argument relates to READAHEAD_BUFSIZE. When I iterate through a file, which one defines how much data is being read off disk at a time? And is there a place in the C source that makes this clear?


Solution

  • READAHEAD_BUFSIZE is only used when you use the file as an iterator:

    for line in fileobj:
        print line
    

    It is a separate buffer from the normal stdio buffer configured by the buffering argument; that buffer is managed by the C library and consulted by the fread() C API calls. Both buffers are in play when iterating.

    From the file.next() documentation:

    In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the next() method uses a hidden read-ahead buffer. As a consequence of using a read-ahead buffer, combining next() with other file methods (like readline()) does not work right. However, using seek() to reposition the file to an absolute position will flush the read-ahead buffer.
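    That caveat is easy to observe. Here is a minimal Python 2 sketch (it assumes the same test.txt used in the demo further down, containing several newline-terminated lines); in current CPython 2 releases the mismatch shows up as a ValueError rather than silently lost data:

    with open('test.txt') as f:
        first = next(f)           # served from the hidden read-ahead buffer
        print f.tell()            # already 8192 (or EOF), not len(first)
        try:
            f.readline()          # CPython refuses to read while the
        except ValueError as e:   # read-ahead buffer still holds data
            print e               # "Mixing iteration and read methods ..."
        f.seek(0)                 # seek() flushes the read-ahead buffer
        print repr(f.readline())  # works again: the first line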

    The stdio buffer size is not changed: the setvbuf() call is made when the file is opened and is never touched by the file-iteration code. Instead, calls to Py_UniversalNewlineFread() (which uses fread()) fill the read-ahead buffer, creating a second buffer internal to Python. Python otherwise leaves the regular buffering up to the C library; fread() calls are buffered, with the userspace stdio buffer consulted by fread() to satisfy each request, so Python doesn't have to do anything about that itself.
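    To make the split concrete, here is a hedged sketch (exact stdio behaviour is platform-dependent): the buffering argument only sizes the stdio buffer handed to setvbuf(), while iteration still pulls data out of stdio in READAHEAD_BUFSIZE chunks:

    f = open('test.txt', 'r', 65536)   # 64 KiB stdio buffer (via setvbuf())
    for line in f:                     # read-ahead still fread()s 8192 bytes
        pass                           # at a time, each satisfied from the
    f.close()                          # big stdio buffer

    So, to answer the question directly: READAHEAD_BUFSIZE governs how much Python pulls out of stdio per call, while the buffering argument governs (on typical stdio implementations) how much each underlying read() fetches from the OS.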

    readahead_get_line_skip() then serves lines (newline-terminated) from this buffer. If the buffer no longer contains a newline, it refills the buffer by calling itself recursively, requesting a buffer 1.25 times the previous size. This means that file iteration can read the whole rest of the file into memory if there are no more newline characters left in it!
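    The growth sequence is easy to model; this rough sketch mirrors the recursion in fileobject.c, where each retry asks for bufsize + (bufsize >> 2) bytes:

    bufsize = 8192                 # READAHEAD_BUFSIZE
    for _ in range(5):
        print bufsize              # 8192, 10240, 12800, 16000, 20000
        bufsize += bufsize >> 2    # grow 25% each time no newline is found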

    To see how much is read into the buffer at a time, print the file position (using fileobj.tell()) while looping:

    >>> with open('test.txt') as f:
    ...     for line in f:
    ...         print f.tell()
    ... 
    8192   # 1 times the buffer size
    8192
    8192
    ~ lines elided
    18432  # + 1.25 times the buffer size
    18432
    18432
    ~ lines elided
    26624  # + 1 times the buffer size; the last newline must've aligned on the buffer boundary
    26624
    26624
    ~ lines elided
    36864  # + 1.25 times the buffer size
    36864
    36864
    

    etc.
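    You can also trigger the no-more-newlines case directly. This small experiment (it assumes you can write a scratch file, here called nonewline.txt) shows a single next() call slurping an entire newline-free file into the read-ahead buffer:

    with open('nonewline.txt', 'wb') as f:
        f.write('x' * 100000)       # ~100 kB without a single newline

    with open('nonewline.txt') as f:
        line = next(f)              # read-ahead keeps growing until EOF
        print len(line), f.tell()   # 100000 100000: the whole file at once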

    Which bytes are actually read from the disk (provided fileobj is an actual physical file on your disk) depends not only on the interplay between the fread() buffer and the internal read-ahead buffer, but also on whether the OS itself is buffering. It could well be that even when the stdio buffer is exhausted, the OS serves the read from its own cache instead of going to the physical disk.