Tags: c++, boost, protocol-buffers, zeromq, nanomsg

NanoMsg (NNG) & FlatBuffers the correct fit for this project?


Shout out if there is something better we should consider:

I am looking for a very quick and simple way to get several programs (e.g. 5), each running on a separate node of a private OpenStack cloud, to talk to each other.

  • Packets will be short C++ structs (less than 100 bytes)
  • Traffic will be light (probably fewer than 100 messages/second)
  • Latency is really not an issue (what is a few ms between friends?) - we have plenty of cycles and memory
  • Messaging should follow a pub/sub client/server paradigm
  • Library should be C++-friendly, and work on both Windows and Linux
  • We might need additional language bindings later on
  • We would prefer not to lose messages

Here is the first idea I have. But if you have something else to offer, yell out.

Friendly Wrapper for UDP socket layer:

Encoder/Decoder for C++ struct data:
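As a sketch of what such an encoder/decoder could look like without any library at all: the struct and field names below are invented for illustration, and the wire format is explicit little-endian so it does not depend on the host architecture.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sub-100-byte packet; field names are made up for illustration.
struct Telemetry {
    std::uint32_t node_id;
    std::uint32_t sequence;
    double        value;
};

// Encode in explicit little-endian byte order, field by field.
std::vector<std::uint8_t> encode(const Telemetry& t) {
    std::vector<std::uint8_t> buf;
    auto put32 = [&](std::uint32_t v) {
        for (int i = 0; i < 4; ++i) buf.push_back((v >> (8 * i)) & 0xFF);
    };
    put32(t.node_id);
    put32(t.sequence);
    std::uint64_t bits;
    std::memcpy(&bits, &t.value, sizeof bits);  // type-pun via memcpy (well-defined)
    for (int i = 0; i < 8; ++i) buf.push_back((bits >> (8 * i)) & 0xFF);
    return buf;
}

Telemetry decode(const std::vector<std::uint8_t>& buf) {
    Telemetry t{};
    auto get32 = [&](std::size_t off) {
        std::uint32_t v = 0;
        for (int i = 0; i < 4; ++i) v |= std::uint32_t(buf[off + i]) << (8 * i);
        return v;
    };
    t.node_id  = get32(0);
    t.sequence = get32(4);
    std::uint64_t bits = 0;
    for (int i = 0; i < 8; ++i) bits |= std::uint64_t(buf[8 + i]) << (8 * i);
    std::memcpy(&t.value, &bits, sizeof t.value);
    return t;
}
```

Hand-rolled encoding like this works for a handful of fixed structs, but every new field means touching both ends, which is one argument for a schema-driven serialiser instead.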


Solution

  • For serialisation, almost anything with the right language bindings will do. Google Protocol Buffers is language-agnostic, with lots of bindings available. The only thing to avoid is serialisation that is baked into your source code (as Boost's serialisation is / was), because then you cannot readily port it to another language.
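    As a sketch, a sub-100-byte struct maps naturally onto a small `.proto` definition (the message and field names here are invented):

    ```protobuf
    syntax = "proto3";

    package sensors;

    // Hypothetical packet; protoc generates C++, Python, Java, ... bindings.
    message Telemetry {
      uint32 node_id  = 1;
      uint32 sequence = 2;
      double value    = 3;
    }
    ```

    On the C++ side the generated class is then serialised with `SerializeToString()` and parsed with `ParseFromString()`, and any other language binding can read the same bytes.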

    For message transport, ZeroMQ, NanoMsg are good choices. However, I think it really comes down to

    1. How badly you don't want to lose messages,
    2. Exactly what you mean by "lost message" in the first place.

    The thing about ZeroMQ (and NanoMsg) is that (AFAIK) there is no real way of knowing the fate of a message when a fault occurs. For instance, in ZeroMQ, if you send a message and the recipient happens to be up and connected, the message gets transferred over the connection. The sending end now thinks the job is done and the message has been delivered. However, unless and until the receiving end actually calls zmq_recv() and fully processes what it is given, the message can still be lost if the receiving process crashes, or there is a power failure, etc. This is because, until it is consumed, the message is stored in RAM inside the ZeroMQ I/O thread (inside the respective Context instance's domain of control).

    You can account for this by having some sort of ack message heading back the other way, plus timeouts, retries, etc. But that starts getting fiddly, and at that point you'd be better off with something like RabbitMQ.
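    The ack-and-retry pattern alluded to above can be sketched in plain C++ without any messaging library. The "channel" below is a toy stand-in that silently drops messages, and all names are invented; a real implementation would also have to cope with lost acks and duplicate deliveries.

    ```cpp
    #include <cassert>
    #include <queue>
    #include <string>

    // Toy lossy channel: drops the first `drops` messages, then delivers.
    struct LossyChannel {
        int drops;
        std::queue<std::string> inbox;
        void send(const std::string& msg) {
            if (drops > 0) { --drops; return; }  // message silently lost
            inbox.push(msg);
        }
    };

    // At-least-once delivery: keep resending until the receiver acknowledges.
    // Returns the number of attempts used, or -1 if max_retries is exhausted.
    int send_with_ack(LossyChannel& ch, const std::string& msg, int max_retries) {
        for (int attempt = 1; attempt <= max_retries; ++attempt) {
            ch.send(msg);
            // Receiver side: consume and "ack". Here the ack path is assumed
            // perfectly reliable, which real networks do not guarantee.
            if (!ch.inbox.empty()) {
                ch.inbox.pop();    // message processed
                return attempt;    // ack received
            }
            // timeout elapsed with no ack -> retry
        }
        return -1;
    }
    ```

    Even this simplified version shows why it gets fiddly: the sender must buffer unacknowledged messages, pick timeout values, and decide what to do when retries run out, which is exactly the machinery a broker like RabbitMQ provides for you.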