Tags: c++, sockets, bit-fields

Bitfield value changes when sent over socket C++


I have a bitfield that looks like the following:

typedef struct __attribute__((__packed__)) MyStruct {
  unsigned int val1:14;
  unsigned int val2:1;
  unsigned int val3:1;
  unsigned int val4:1;
  unsigned int val5:1;
  unsigned short aFullShort;
  unsigned int aFullInt;
} MyStruct;

I am sending these values over the network and have noticed that sometimes the sender thinks that val1 is set (verifiable by printing the value prior to the send) but the receiver does not see val1 as set. The code for transmission is as follows:

MyStruct* myStruct = new MyStruct(); //initialize fields here
sendto(sock, myStruct, sizeof(MyStruct), 0, ...);

The code for reading is as follows:

unsigned char theBuffer[sizeof(MyStruct)];
recvfrom(aSocket, &theBuffer, sizeof(theBuffer), 0, ...);

After reading the bytes from the socket, I reinterpret_cast them to a MyStruct and perform endian conversion for aFullShort and aFullInt. The corruption occurs such that the receiver sees val1 as 0 when the sender set it to 1. Why might this happen? Might the compiler be inserting different padding for the sender and the receiver? Do I need to worry about the endianness of the single-bit values?
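
For reference, the receive-side decode described above would look roughly like this (a sketch reconstructed from the description, reusing theBuffer from the snippet above; the byte-order helpers are the standard ntohs/ntohl):

#include <arpa/inet.h> // ntohs, ntohl

// Reinterpret the raw bytes as a MyStruct, then fix the byte order
// of the multi-byte fields (the bit-fields are left untouched).
MyStruct* received = reinterpret_cast<MyStruct*>(theBuffer);
received->aFullShort = ntohs(received->aFullShort);
received->aFullInt = ntohl(received->aFullInt);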


Solution

  • The compiler can lay out bit-fields however it wants: their order, padding, and packing within the allocation unit are implementation-defined, so two compilers (or two versions, or two target ABIs) are free to disagree. It could even randomize them on every execution if it wanted to; there is absolutely no rule that prohibits this. If you want to serialize data in a predictable binary format that you can rely on, write code that does that (see the sketch after this answer).

    The sole exception would be if your compiler documents some specific guarantee for packed structs and you are willing to confine yourself to that one compiler. You don't say which compiler you're using, but I doubt it makes such a guarantee.

    There is really no reason to write code like this. You want code whose behavior is guaranteed by the relevant standard(s), not code that merely happens to work until something breaks the assumptions it makes.
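
    As an illustration, here is a minimal sketch of such explicit serialization for the MyStruct shown in the question. The wire layout chosen here is an assumption made up for the example (val1 in the low 14 bits of a 32-bit word, the four flags in bits 14-17, everything big-endian, 10 bytes total); the point is that both sides agree on the layout in code instead of trusting the compiler:

    #include <cstdint>

    // Hypothetical wire format for this example: 10 bytes, big-endian.
    //   bytes 0-3: val1 in bits 0-13, val2..val5 in bits 14-17
    //   bytes 4-5: aFullShort
    //   bytes 6-9: aFullInt
    void serialize(const MyStruct& s, unsigned char out[10]) {
        uint32_t bits = (uint32_t(s.val1) & 0x3FFF)
                      | (uint32_t(s.val2) << 14)
                      | (uint32_t(s.val3) << 15)
                      | (uint32_t(s.val4) << 16)
                      | (uint32_t(s.val5) << 17);
        out[0] = bits >> 24; out[1] = bits >> 16;
        out[2] = bits >> 8;  out[3] = bits;
        out[4] = s.aFullShort >> 8; out[5] = s.aFullShort;
        out[6] = s.aFullInt >> 24;  out[7] = s.aFullInt >> 16;
        out[8] = s.aFullInt >> 8;   out[9] = s.aFullInt;
    }

    void deserialize(const unsigned char in[10], MyStruct& s) {
        uint32_t bits = (uint32_t(in[0]) << 24) | (uint32_t(in[1]) << 16)
                      | (uint32_t(in[2]) << 8)  |  uint32_t(in[3]);
        s.val1 = bits & 0x3FFF;
        s.val2 = (bits >> 14) & 1;
        s.val3 = (bits >> 15) & 1;
        s.val4 = (bits >> 16) & 1;
        s.val5 = (bits >> 17) & 1;
        s.aFullShort = uint16_t((uint16_t(in[4]) << 8) | in[5]);
        s.aFullInt   = (uint32_t(in[6]) << 24) | (uint32_t(in[7]) << 16)
                     | (uint32_t(in[8]) << 8)  |  uint32_t(in[9]);
    }

    The sender calls serialize and passes the 10-byte buffer to sendto; the receiver calls recvfrom into a 10-byte buffer and runs deserialize. The size and bit positions are now fixed in one place, so any sender/receiver mismatch is a visible bug in this code rather than a silent disagreement between two compilers.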