Tags: c#, stream, io, network-protocols, buffered

Is Stream.Read buffered when doing network I/O?


So I was recently doing some work when somebody told me that calls to Stream.Read on a network stream, such as one obtained from calling .NET's GetResponseStream on a WebResponse, are buffered.

He was saying that if you were to put a breakpoint in the code where you're reading, you wouldn't stop the network traffic. I find that bizarre, but I'm also hoping that it's true. How does that work? Is it even accurate?

using (Stream webResponseStream = this.webResponse.GetResponseStream())
{
    byte[] readBuffer = new byte[bufferSize];
    int bytesRead = webResponseStream.Read(readBuffer, 0, bufferSize);
    while (bytesRead > 0)
    {
        bytesRead = webResponseStream.Read(readBuffer, 0, bufferSize);
        // If I put a breakpoint here, does network activity stop?
    }
}

Solution

  • No, the Stream object returned by GetResponseStream is not buffered.

     The short answer to your second part (about setting a breakpoint) is that your co-worker is incorrect: network traffic will stop, but only eventually. To see what "eventually" means, read on for the details.

     Bing for "SO_RCVBUF", "tcp receive window size", and "vista auto scaling" for more general information.

    Detailed Part

     Let's start with a textual view of the Windows networking stack:

    ++ .NET Network API's

    ++ --- Winsock DLL (user mode)

    ++ ------ afd.sys (kernel mode)

    ++ --------- tcpip.sys

    ++ ------------ ndis

    ++ --------------- network interface (hal)

     This is a rough stack that glosses over some details, but the general idea is that .NET calls into the Winsock user-mode DLL, which pushes most of the real work down to its cousin AFD (the Ancillary Function Driver), which in turn hands off to the TCP/IP subsystem, and so on.

     At the AFD level, there is a buffer, generally between 8K and 64K, but with Vista (and beyond) it can also scale up. This setting can also be controlled by a registry key (HKLM\SYSTEM\CurrentControlSet\services\AFD\Parameters).
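     As a side illustration (this is only a sketch; the value names such as DefaultReceiveWindow are an assumption, not something stated in this answer), that key can be inspected from C# with Microsoft.Win32.Registry:

     using System;
     using Microsoft.Win32;

     class AfdParametersDump
     {
         static void Main()
         {
             // Hypothetical sketch: peek at the AFD Parameters key mentioned above.
             // The value names are an assumption; on many machines the key exists
             // but holds no explicit values, in which case the OS defaults apply.
             const string keyPath =
                 @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters";

             object recv = Registry.GetValue(keyPath, "DefaultReceiveWindow", null);
             object send = Registry.GetValue(keyPath, "DefaultSendWindow", null);

             Console.WriteLine("DefaultReceiveWindow: " + (recv ?? "(not set, OS default)"));
             Console.WriteLine("DefaultSendWindow:    " + (send ?? "(not set, OS default)"));
         }
     }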

     In addition, tcpip.sys also has a buffer that is similar to AFD's buffer. I believe the *SO_RCVBUF* setting passed when opening the socket can change this too.
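     As a minimal sketch (not part of the original answer; the 64K value is just an example tied to the sizes discussed here), SO_RCVBUF is exposed in .NET through Socket.ReceiveBufferSize, or equivalently through SetSocketOption:

     using System.Net.Sockets;

     // Minimal sketch: setting the socket receive buffer (SO_RCVBUF) from .NET.
     var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

     // Property form...
     socket.ReceiveBufferSize = 64 * 1024;

     // ...or the explicit option form, which maps to SO_RCVBUF.
     socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, 64 * 1024);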

     Essentially, when you are receiving data, tcpip.sys keeps getting data on your behalf and keeps telling the sender that it got the data (ACKs), and it does so until its buffers are full. At the same time, afd.sys is clearing tcpip.sys's buffers by asking it for the data (which it then copies into its own buffer), so tcpip.sys can accept more data from the sender.

     And then there's you (the .NET API caller), doing the same thing: calling the Read() method and copying data into your own buffer.

     So, if you think about it: with a 256K message coming over the wire, 64K sitting in the tcpip.sys buffer, 64K sitting in the afd.sys buffer, and you hitting a breakpoint after asking for one 4K chunk (your bufferSize variable), we're looking at roughly 128K ACK'ed back to the sender as received. And since the tcpip.sys buffer is now full (assuming a 64K size) and you're blocked in your debugging session, tcpip.sys will have no option but to tell the sender to stop sending bytes over the wire, because it can't process them quickly enough.
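     If you want to watch this from the .NET side, here's a rough, hypothetical sketch (the URL and sizes are placeholders, not from the answer): sleeping inside the read loop stands in for the breakpoint, and once the OS buffers fill, later Read calls only complete as the TCP receive window reopens.

     using System;
     using System.IO;
     using System.Net;
     using System.Threading;

     class SlowReaderDemo
     {
         static void Main()
         {
             // Hypothetical sketch: URL and buffer size are placeholders.
             var request = (HttpWebRequest)WebRequest.Create("http://example.com/largefile");
             using (WebResponse response = request.GetResponse())
             using (Stream stream = response.GetResponseStream())
             {
                 byte[] buffer = new byte[4 * 1024];   // 4K reads, like the question's bufferSize
                 long total = 0;
                 int bytesRead;
                 while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                 {
                     total += bytesRead;
                     Console.WriteLine("Read {0} bytes ({1} total)", bytesRead, total);

                     // Stand-in for the breakpoint: while we sleep, afd.sys/tcpip.sys
                     // keep ACKing and buffering until their buffers fill, after which
                     // the TCP receive window closes and the sender is throttled.
                     Thread.Sleep(3000);
                 }
             }
         }
     }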

     Practically speaking (i.e. without somebody setting a breakpoint!), I've seen GC induce this behavior. I've seen a case where a 3-second garbage collection let all of the OS buffers fill up.