I have a structure that is intended to use 32 bits of storage:

    struct foo_t {
        color_t color : 10;
        some_type_t some_field : 22;
    };
Here color_t is an enum defined as:

    typedef enum {
        RED = 0,
        // other values...
        BLUE = 255
    } color_t;

Note that color_t values currently fit in 8 bits, although in the future we might add more values (hence the 10 bits reserved for color).
In C99, is there any guarantee that the declared width of color will be respected by the compiler? As discussed in this question, the compiler might choose to represent color_t as a char, in which case the specified width appears to violate the C99 spec:
The expression that specifies the width of a bit-field shall be an integer constant expression with a nonnegative value that does not exceed the width of an object of the type that would be specified were the colon and expression omitted.
How can I enforce that the color field uses 10 bits? Note that the problem goes away if the compiler uses a regular integer to represent color_t, but this behavior cannot be assumed.
Add a final tag to your enum definition:

    typedef enum {
        //...
        DUMMY_FORCE_WIDTH = 0xffffffffu, // or INT_MAX
    } color_t;
That has the added benefit of forcing the compiler / ABI to give your enum
enough space for growth everywhere.
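To illustrate, here is a minimal sketch combining the dummy tag with a C99-compatible compile-time check that the enum ended up at least 32 bits wide (the names COLOR_FORCE_WIDTH and color_t_is_wide_enough are hypothetical; the negative-array-size trick predates C11's _Static_assert):

```c
typedef enum {
    RED  = 0,
    // other values...
    BLUE = 255,
    /* Dummy tag: forces the implementation to pick a type wide
       enough to hold a full 32-bit value. Note that a value above
       INT_MAX is a common extension rather than strict C99, which
       requires enumerator values to fit in an int. */
    COLOR_FORCE_WIDTH = 0xffffffffu
} color_t;

/* C99 compile-time assertion: the array size becomes -1 (an error)
   if color_t is narrower than 32 bits. */
typedef char color_t_is_wide_enough[(sizeof(color_t) >= 4) ? 1 : -1];
```

If the compiler chose a narrower representation, the typedef would fail to compile, turning a silent layout change into a build error.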
Of course, that presupposes that your compiler allows enums as bit-field types. It need not do so, though it then has to diagnose the use as a constraint violation:
6.7.2.1 Structure and union specifiers, §5 (Constraints)
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type. It is implementation-defined whether atomic types are permitted.
If you want to be strictly conforming, declare the bit-field with type signed int or unsigned int rather than your enum type. Even then, unsigned int is only guaranteed to be at least 16 bits wide, so only bit-field widths up to 16 are strictly portable.
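A strictly conforming variant could carry the value in a plain unsigned int bit-field and convert to the enum type only at the access points; the accessor names foo_get_color and foo_set_color below are hypothetical:

```c
typedef enum {
    RED  = 0,
    // other values...
    BLUE = 255
} color_t;

struct foo_t {
    /* unsigned int is a portable bit-field type; the field holds a
       color_t value but is not declared as one. */
    unsigned int color      : 10;
    /* Caution: a width of 22 assumes unsigned int is at least 22 bits
       wide, which C99 does not guarantee (16 bits is the minimum). */
    unsigned int some_field : 22;
};

/* Convert at the boundary so callers still see the enum type. */
static color_t foo_get_color(const struct foo_t *f) {
    return (color_t)f->color;
}

static void foo_set_color(struct foo_t *f, color_t c) {
    f->color = (unsigned int)c;
}
```

The conversions are well defined as long as the stored values stay within both the enum's range and the 10-bit field.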