
How to Avoid DOS Attack using Berkeley Sockets in C++


I'm working my way through UNIX Network Programming, Volume 1 by Richard Stevens and attempting to write a TCP echo client that uses the Telnet protocol. I'm still in the early stages, attempting to write the read and write functions.

I'd like to write it using I/O multiplexing and the select() function, because it needs to handle multiple clients and I don't want to tackle learning C++ threads while I'm learning the Berkeley sockets library at the same time. At the end of the chapter on I/O multiplexing, Stevens has a small section on DoS attacks, where he says that the method I was planning to use is vulnerable to an attack that simply sends a single byte after connecting and then hangs. He mentions three possible solutions afterwards: nonblocking I/O, threading, and placing a timeout on the I/O operations.

My question is: are there any other ways of avoiding such an attack? And if not, which of these is the best? I glanced over the section on placing a timeout on the operations, but it doesn't look like something I want to do. The methods he suggests for doing it look pretty complex, and I'm not sure how to work them into what I already have. I've only glanced at the chapter on nonblocking I/O; it looks like the way to go right now, but I'd like to see if there are any other ways around this before I spend another couple of hours plowing through the chapter.

Any ideas?


Solution

  • ... are there any other ways of avoiding such an attack?

    Yes, asynchronous I/O is another general approach.

    If the problem is that a blocking read() may suspend your execution indefinitely, your general countermeasures are then:

    1. Have multiple threads of execution

      e.g., multi-threaded, multi-process, or both.

    2. Time-limit the blocking operation

      e.g., instantaneous (non-blocking I/O), or not (SO_RCVTIMEO, alarm(), etc.)

    3. Operate asynchronously

      e.g., aio_read
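    As a concrete illustration of option 2, SO_RCVTIMEO puts an upper bound on how long a blocking read() can stall. This is a minimal sketch, not code from Stevens; the helper name set_recv_timeout is my own:

    ```cpp
    #include <cstdio>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    // Give every blocking read() on fd an upper bound: after `seconds`
    // with no data, read() returns -1 with errno EAGAIN/EWOULDBLOCK,
    // so a silent client can no longer suspend us indefinitely.
    bool set_recv_timeout(int fd, int seconds) {
        struct timeval tv;
        tv.tv_sec  = seconds;
        tv.tv_usec = 0;
        return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == 0;
    }

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { std::perror("socket"); return 1; }
        if (set_recv_timeout(fd, 5))
            std::printf("receive timeout set\n");
        else
            std::perror("setsockopt");
        close(fd);
        return 0;
    }
    ```

    The caller still has to check for EAGAIN/EWOULDBLOCK after each read() and decide whether to retry or drop the connection.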

    ... which of these is the best?

    For the newcomer, I'd suggest non-blocking I/O combined with a time-limited select()/poll(). Your application can keep track of whether or not a connection has generated "enough data" (e.g., an entire line) in a "short enough time."

    This is a powerful, mostly portable and common technique.
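    A minimal sketch of that combination, assuming one hypothetical per-connection deadline; a socketpair stands in for a connected client that goes silent, and wait_for_data is an illustrative name, not an API:

    ```cpp
    #include <cstdio>
    #include <ctime>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Wait until `fd` produces some data, but give up after `limit` seconds
    // of silence. Returns true if data arrived, false on timeout/EOF/error.
    // A real server would keep one deadline per connection inside a single
    // select() loop; this single-fd helper just shows the mechanics.
    bool wait_for_data(int fd, int limit) {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK); // read() can never block
        time_t deadline = std::time(nullptr) + limit;
        char buf[512];
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            struct timeval tv = {1, 0};            // wake at least once per second
            int n = select(fd + 1, &rfds, nullptr, nullptr, &tv);
            if (n > 0 && FD_ISSET(fd, &rfds)) {
                ssize_t got = read(fd, buf, sizeof(buf));
                if (got > 0) return true;          // progress was made
                return false;                      // EOF or error
            }
            if (std::time(nullptr) >= deadline)
                return false;                      // silent too long: drop it
        }
    }

    int main() {
        // Simulate a client that connects and then hangs without sending.
        int sp[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sp) < 0) {
            std::perror("socketpair");
            return 1;
        }
        if (!wait_for_data(sp[0], 2))
            std::printf("slow client dropped\n");
        close(sp[0]);
        close(sp[1]);
        return 0;
    }
    ```

    The key point is that the one-byte-then-hang attacker can no longer park your process inside read(): select() wakes up periodically, and the application-level deadline decides when a connection has been silent too long.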

    However, the better answer is, "it depends." Platform support and, more importantly, design ramifications from these choices have to be assessed on a case-by-case basis.