Tags: sockets, winsock2, iocp

How to properly close a socket (with IOCP) after sending?


I'm having a problem with Winsock2 using IOCP (overlapped I/O mode) when I need to close the connection after sending the requested data.

I've discovered that if I send some data and close the socket immediately afterwards, the data will not be sent (or will only be partially sent), because there is no time to push the packet out before closing.
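
To illustrate the failure mode, here is a minimal C# sketch (the method name and payload are mine, not from the original post): closing immediately after an asynchronous send can abort data that is still queued in the kernel.

    using System;
    using System.Net.Sockets;
    using System.Text;

    static class EarlyCloseExample
    {
        // The race described above: Close() runs while the asynchronous
        // send is still queued, so part or all of the response is lost.
        static void SendThenCloseTooEarly(Socket socket)
        {
            byte[] payload = Encoding.ASCII.GetBytes("HTTP/1.1 200 OK\r\n\r\n");
            _ = socket.SendAsync(new ArraySegment<byte>(payload), SocketFlags.None); // fire-and-forget
            socket.Close(); // BAD: may abort the send still in flight
        }
    }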

I've also discovered that there are multiple disconnection methods, such as WSASendDisconnect(), which is what I use for now.

The current implementation is something like this (a sketch of the completion handling follows the list):

  1. Send out the data.

  2. Set a flag that represents the intention to close the connection after sending.

  3. When the IOCP completion for the send arrives, if the flag is set and bytecount > 0, I send an empty buffer again, just to make sure all data has been sent.

  4. When the IOCP completion for the send arrives, if the flag is set and bytecount == 0, I clear the flag and call WSASendDisconnect() with an empty buffer, to send the disconnect message.

  5. When the IOCP completion for the send arrives, if the flag is not set and bytecount == 0, I call closesocket() and destroy the context.
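
In .NET terms (where SocketAsyncEventArgs completions are backed by IOCP on Windows), the scheme might look like the sketch below. All names (ConnectionContext, CloseAfterSend, OnSendCompleted) are my own, and Shutdown(SocketShutdown.Send) stands in for WSASendDisconnect(); note that Shutdown() itself posts no completion, so this mirrors the steps as described rather than a verified working flow:

    using System;
    using System.Net.Sockets;

    class ConnectionContext
    {
        public Socket Socket;
        public bool CloseAfterSend; // step 2: intent to close after sending
    }

    static class SendCompletion
    {
        // Invoked for each send completion (steps 3-5 above).
        public static void OnSendCompleted(ConnectionContext ctx, SocketAsyncEventArgs e)
        {
            int byteCount = e.BytesTransferred;

            if (ctx.CloseAfterSend && byteCount > 0)
            {
                // Step 3: queue an empty send; its completion signals that
                // everything queued before it has been handed to the stack.
                e.SetBuffer(Array.Empty<byte>(), 0, 0);
                if (!ctx.Socket.SendAsync(e))
                    OnSendCompleted(ctx, e); // completed synchronously
            }
            else if (ctx.CloseAfterSend && byteCount == 0)
            {
                // Step 4: clear the flag and start a graceful send-side
                // shutdown (the WSASendDisconnect() stand-in).
                ctx.CloseAfterSend = false;
                ctx.Socket.Shutdown(SocketShutdown.Send);
            }
            else if (byteCount == 0)
            {
                // Step 5: close the socket and tear down the context.
                ctx.Socket.Close();
            }
        }
    }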

With this procedure I can be almost certain the socket disconnects only after all data has been sent, and it actually works very well. But when I stress-test the program, in some very rare cases (10-15 times out of 1,000,000 tests) something goes wrong. (It would be very hard to determine the exact problem: the program works as a web server, and I'm stress-testing it with ApacheBench and Siege; after each test I get a summary where I sometimes see 10-15 failed requests.)

I was pretty sure the failure occurred because the socket could not send the entire packet before closing, so I placed a small delay before closesocket(), and since then I get zero failed requests. Of course this is not a solution, just a way to narrow down what may be causing the problem.

The method I've implemented obviously lacks a proper way to detect whether the socket has sent everything before closing, even though I start closing only after the last dummy write has completed.

What would be the best way to close the socket only after the entire send buffer has actually been written to the network?

PS: I've tried playing with NoDelay and setting LingerState; they did not help.
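
(These refer to the .NET Socket options. For reference, a minimal sketch of the settings that did not help; the 5-second linger timeout is an arbitrary example value, not from the post:)

    using System.Net.Sockets;

    static class SocketTuning
    {
        // The two settings mentioned above.
        static void Configure(Socket socket)
        {
            socket.NoDelay = true;                          // disable Nagle's algorithm
            socket.LingerState = new LingerOption(true, 5); // linger on Close for up to 5 s
        }
    }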


Solution

  • OK, after a lot of testing I can safely say this problem is solved; this solution has been working fine in my web server for months, without any issue, under all circumstances. The answers were helpful, but the final solution is the following:

            try
            {
                // Initiate a graceful shutdown in both directions; the
                // TCP FIN is queued behind any data still in the send
                // buffer, so pending data is flushed first.
                socket.Shutdown(SocketShutdown.Both);
            }
            catch (SocketException ex)
            {
                // The peer may already have closed or reset the
                // connection; handle/log the exception as appropriate.
            }
            finally
            {
                socket?.Close();
            }
    

    The "trick" is to initiate a disconnect with SocketShutdown.Both and finally close the socket if it still exists. I've noticed sometimes (I really don't know why) the shutdown method automatically disposed the socket, so the socket.Close() threw a NullReferenceException, but sometimes not. I worked this issue around with a ? after the socket object referece, so if it is null, then the Close() method just will not be called.

    It works fine on Windows and also on Linux (Debian) under .NET Core.

    Why does only this work? I don't know, but with this I'm not getting failed requests, packet loss, or improperly closed connections. It just works.