
How does buffered I/O reduce the overhead that would occur if unbuffered I/O were used?


From this tutorial,

Most of the examples we've seen so far use unbuffered I/O. This means each read or write request is handled directly by the underlying OS. This can make a program much less efficient, since each such request often triggers disk access, network activity, or some other operation that is relatively expensive.

To reduce this kind of overhead, the Java platform implements buffered I/O streams. Buffered input streams read data from a memory area known as a buffer; the native input API is called only when the buffer is empty. Similarly, buffered output streams write data to a buffer, and the native output API is called only when the buffer is full.
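For context, the tutorial's pattern is to wrap an unbuffered stream in a buffered one. Here is a minimal, self-contained sketch of that wrapping (the file name and contents are made up for illustration; the example writes its own temporary file so it can actually run):

```java
import java.io.*;
import java.nio.file.*;

public class BufferedReadDemo {
    public static void main(String[] args) throws IOException {
        // Create a small sample file so the example is self-contained.
        Path path = Files.createTempFile("demo", ".txt");
        Files.write(path, "first line\nsecond line\n".getBytes());

        // FileReader alone is unbuffered: each read goes to the OS.
        // Wrapping it in a BufferedReader makes it pull a whole chunk of the
        // file at once; readLine() is then served from the in-memory buffer.
        try (BufferedReader in = new BufferedReader(new FileReader(path.toFile()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        Files.delete(path);
    }
}
```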

I understand that operations like disk access and network activity incur execution-time overhead in the underlying OS.

But how does having the program read from / write to a certain memory area (the buffer) reduce this overhead?

If anything, this looks like a couple of extra steps: first the program asks the OS to, e.g., read data from a file into the buffer, and then the program reads it from the buffer.


Solution

  • As you probably know, I/O operations on a disk drive, network connection, or other device are much slower than memory access. By buffering I/O in memory, software reduces the number of operations performed on the device. The key point is that the cost of an I/O request is mostly a fixed per-call overhead (system call, device latency), not something proportional to the amount of data: one call that fetches 8 KB into a buffer is far cheaper than 8,192 calls that fetch one byte each. The "extra step" of reading from the buffer is just a memory access, which is orders of magnitude faster than a device access.
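The effect can be made visible by counting how many times the underlying stream is actually read. The `CountingInputStream` below is an illustrative helper (not part of the JDK), and a `ByteArrayInputStream` stands in for a real file so the sketch is self-contained:

```java
import java.io.*;

public class BufferCountDemo {
    // Counts how many times the underlying ("native") read is invoked.
    // Hypothetical wrapper for illustration only.
    static class CountingInputStream extends FilterInputStream {
        int nativeReads = 0;
        CountingInputStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            nativeReads++;
            return super.read();
        }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            nativeReads++;
            return super.read(b, off, len);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000]; // stand-in for a file's contents

        // Unbuffered: every single-byte read() hits the underlying stream.
        CountingInputStream raw =
                new CountingInputStream(new ByteArrayInputStream(data));
        while (raw.read() != -1) { }
        System.out.println("Unbuffered reads: " + raw.nativeReads); // 10001

        // Buffered: the underlying stream is read in 8 KB chunks.
        CountingInputStream counted =
                new CountingInputStream(new ByteArrayInputStream(data));
        BufferedInputStream buffered = new BufferedInputStream(counted, 8192);
        while (buffered.read() != -1) { }
        // 3 = two buffer fills (8192 + 1808 bytes) + one EOF check.
        System.out.println("Buffered reads: " + counted.nativeReads); // 3
    }
}
```

Ten thousand single-byte reads collapse into three calls on the underlying stream; with a real file, each of those calls would be a comparatively expensive OS request.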