The endianness of bitfields is implementation-defined. Is there a way to check at compile time, via some macro or other compiler flag, what gcc's bitfield endianness actually is?
In other words, given something like:
struct X {
    uint32_t a : 8;
    uint32_t b : 24;
};
Is there a way for me to know at compile time whether a is the first or the last byte in X?
On Linux systems, you can check the __BYTE_ORDER macro (defined in <endian.h> on glibc) to see whether it is __LITTLE_ENDIAN or __BIG_ENDIAN. While this is not authoritative, in practice it should work.
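For example, here is a minimal sketch of how you might apply this to the struct from the question (assuming a glibc system, where <endian.h> provides these macros), keeping a in the least significant 8 bits of the 32-bit unit on either kind of target:

#include <endian.h>
#include <stdint.h>

struct X {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    /* Bitfields are allocated from the least significant bit upward,
       so declaring a first places it in bits 0-7. */
    uint32_t a : 8;
    uint32_t b : 24;
#elif __BYTE_ORDER == __BIG_ENDIAN
    /* Bitfields are allocated from the most significant bit downward,
       so a must be declared last to land in bits 0-7. */
    uint32_t b : 24;
    uint32_t a : 8;
#else
# error "Unknown byte order"
#endif
};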
A hint that this is the right way to do it is the definition of struct iphdr in netinet/ip.h, which describes an IP header. The first byte holds two 4-bit fields implemented as bitfields, so their order matters:
struct iphdr
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    u_int8_t tos;
    u_int16_t tot_len;
    u_int16_t id;
    u_int16_t frag_off;
    u_int8_t ttl;
    u_int8_t protocol;
    u_int16_t check;
    u_int32_t saddr;
    u_int32_t daddr;
    /* The options start here. */
};
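If you want to confirm at runtime what the compiler actually did, a small hypothetical self-test (the struct and field names below are made up for illustration) is to pack two 4-bit fields into a single byte, iphdr-style, and inspect the raw value; on a glibc target the result should agree with whichever __BYTE_ORDER branch was compiled in:

#include <stdio.h>

/* unsigned char bitfields are a common extension that GCC accepts. */
struct nibbles {
    unsigned char first  : 4;
    unsigned char second : 4;
};

int main(void) {
    union {
        struct nibbles n;
        unsigned char raw;
    } u = { .n = { .first = 0xA, .second = 0x5 } };

    /* Little-endian allocation: first lands in the low nibble, raw == 0x5A.
       Big-endian allocation: first lands in the high nibble, raw == 0xA5. */
    printf("raw byte = 0x%02X (%s-endian bitfield layout)\n",
           u.raw, (u.raw == 0x5A) ? "little" : "big");
    return 0;
}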