I'm trying to use a struct to pack multiple smaller integers into a uint32_t.
struct PackedData {
    PackedData & operator=(uint32_t x) {
        a = (x >> 24) & 0xFF;
        b = (x >> 8) & 0xFFFF;
        c = x & 0xFF;
        return *this;
    }
    uint32_t operator=(PackedData x) {
        uint32_t temp;
        temp = (x.a << 24) & 0xFF000000;
        temp |= (x.b << 8) & 0xFFFF00;
        temp |= x.c & 0xFF;
        return temp;
    }
    uint32_t a : 8;
    uint32_t b : 16;
    uint32_t c : 8;
};
Originally, I had hoped to inherit from a uint32_t, similar to what you can do with an enum class.
struct PackedData : uint32_t {
Unfortunately that does not compile.
The weird thing is that the struct I've pasted does compile, and doesn't even raise warnings with -Wall -Wextra --pedantic.
On the bright side, I can see that my struct is 4 bytes, and I am able to assign a 32-bit integer to the struct. However, I cannot assign the struct to a 32-bit integer.
I'm sure the syntax for the following method is completely wrong, but I don't know how and the compiler isn't helping me.
uint32_t operator=(PackedData x) {
    uint32_t temp;
    temp = (x.a << 24) & 0xFF000000;
    temp |= (x.b << 8) & 0xFFFF00;
    temp |= x.c & 0xFF;
    return temp;
}
Can this idea even work, or is there a better way?
You can define an implicit conversion operator as follows:
struct PackedData {
    // ...
    operator uint32_t () const {
        // Cast before shifting: the bit-fields promote to int, and
        // shifting a's top bit into bit 31 of an int would be signed
        // overflow (undefined behavior before C++20).
        return (uint32_t{a} << 24) | (uint32_t{b} << 8) | c;
    }
};
Now you can do:
PackedData p { 0xef, 0x1234, 0xab };
uint32_t x = p;
The value of x is 0xef1234ab.
Note that no masking is needed when reading the bit-fields: each field can only hold its declared number of bits, so a, b, and c are already within range.
If it's super important that your struct is always the correct size due to assumptions elsewhere in a larger project, it's generally a good idea to add a static assertion. The most appropriate place to do that is typically following the class definition:
struct PackedData {
// ...
};
static_assert(sizeof(PackedData) == sizeof(uint32_t));