In Python, I can open a file with `f = open(<filename>, <mode>)`. This returns an object `f` which I can write to using `f.write(<some data>)`.
If, at this point, I access the original file (e.g. with `cat` from a terminal), it appears empty: Python stored the data I wrote in the object `f`, not in the actual on-disk file. If I then call `f.close()`, the data in `f` is persisted to the on-disk file (and I can access it from other programs).
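A minimal sketch of the behaviour I am describing (the file name is just an example):

```python
import os

path = "demo.txt"  # example file name

f = open(path, "w")
f.write("hello, buffering")  # 16 characters

# Nothing has been flushed to the OS yet, so other programs
# (and os.path.getsize) see an empty file.
print(os.path.getsize(path))  # 0

f.close()  # flushes the buffer and closes the file
print(os.path.getsize(path))  # 16

os.remove(path)
```

(Calling `f.flush()` instead of `f.close()` also makes the data visible to other programs, without closing the file.)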
I assume the data is buffered to reduce I/O latency. But what happens if the buffered data grows a lot? Will Python initiate a write? If so, details on the internals would be much appreciated: what influences the buffer size? Is the disk I/O handled within Python or by another program/thread? Is there a chance Python will just hang during the write?
The general subject of I/O buffering has been treated many times (including in questions linked from the comments). But to answer your specific questions:
In Python 2, file objects were built on C's `stdio`, so it chooses its own buffer sizes. (A few kB is typical.) In Python 3, the `io` module does its own buffering: binary files default to `io.DEFAULT_BUFFER_SIZE` (usually 8 KiB), or the block size the OS reports for the underlying device.
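As a rough illustration (the exact numbers are platform-dependent, and the file name is again just an example), you can watch the overflow behaviour by requesting a small buffer via `open()`'s `buffering` argument:

```python
import io
import os

print(io.DEFAULT_BUFFER_SIZE)  # typically 8192 bytes in CPython

path = "demo.bin"  # example file name
f = open(path, "wb", buffering=1024)  # request a 1 KiB userspace buffer

f.write(b"x" * 1023)          # fits in the buffer: nothing reaches the OS
print(os.path.getsize(path))  # 0

f.write(b"yy")                # would overflow, so Python flushes first
print(os.path.getsize(path))  # 1023 (the previously buffered bytes)

f.close()
os.remove(path)
```

So yes: when a write would overflow the buffer, Python flushes it. The flush is an ordinary `write()` system call made synchronously on the calling thread, so `f.write(...)` can block if the OS cannot accept the data immediately; in practice the kernel usually absorbs it into its page cache and performs the physical disk write asynchronously.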