Tags: c++, tcp, protocol-buffers, reliability

google protocol buffers - probability of bit errors and ways to reduce them


I transmit a fairly large number of Google Protocol Buffer messages over a VPN, over wireless, over the internet via TCP, and I seem to get a relatively high error rate (e.g. a boolean field flipping from false to true, or something similar): somewhere between 1 in 10,000 and 1 in 50,000 messages.

Is that possible? Wikipedia states that TCP has a weak checksum, but that this is typically fixed in underlying protocols:

The TCP checksum is a weak check by modern standards. Data Link Layers with high bit error rates may require additional link error correction/detection capabilities. The weak checksum is partially compensated for by the common use of a CRC or better integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame.

Does anyone have experience with what error rates should be expected? If the above rate is possible, what would be the recommended / easiest way of fixing it? Duplicating fields? Sending each message twice? Or is there something else that can be done to increase reliability?

Thanks


Solution

  • No, it is not (reasonably) possible, assuming you are not suffering a hardware failure (of memory, your network card, etc.). That should be easy to check: does it happen on more than one computer?

    Much more likely, you have an invalid memory access or a similar bug in your application code, or the data you are sending is simply not what you intended. Try running your code under Valgrind or a similar tool.

    The idea of duplicating fields as part of normal operation seems absurd: practically nobody does that in the wild, and you shouldn't need to either. There are already multiple layers of protection against silent data corruption in your stack, so this is most likely a simple (or maybe not-so-simple) application error.
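    That said, if you do want an end-to-end integrity check at the application layer, the usual approach is not duplicating fields but appending a strong checksum (e.g. CRC-32) to each serialized message and verifying it on receipt. A minimal sketch, assuming you checksum the raw serialized bytes before writing them to the socket (the `frame`/`unframe` names are illustrative, not part of any library):

    ```cpp
    #include <cstdint>
    #include <string>

    // Plain CRC-32 (the Ethernet/zlib polynomial, reflected form).
    uint32_t crc32(const std::string& data) {
        uint32_t crc = 0xFFFFFFFFu;
        for (unsigned char c : data) {
            crc ^= c;
            for (int k = 0; k < 8; ++k)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    // Sender side: append the CRC of the serialized message (little-endian).
    std::string frame(const std::string& payload) {
        uint32_t crc = crc32(payload);
        std::string out = payload;
        for (int i = 0; i < 4; ++i)
            out.push_back(static_cast<char>((crc >> (8 * i)) & 0xFF));
        return out;
    }

    // Receiver side: verify and strip the CRC; returns false on corruption.
    bool unframe(const std::string& framed, std::string* payload) {
        if (framed.size() < 4) return false;
        std::string body = framed.substr(0, framed.size() - 4);
        uint32_t stored = 0;
        for (int i = 0; i < 4; ++i)
            stored |= static_cast<uint32_t>(static_cast<unsigned char>(
                          framed[framed.size() - 4 + i])) << (8 * i);
        if (crc32(body) != stored) return false;
        *payload = body;
        return true;
    }
    ```

    On the receiver you would only call `ParseFromString` on payloads that pass `unframe`; a single flipped bit changes a CRC-32 with certainty, and larger corruptions are missed with probability only about 2^-32. But again: this will most likely confirm the network is fine and point you back at the application.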