
Why is CFBit defined as a UInt32?


Apple documents CFBit as being a UInt32, but I'm confused as to why. Doesn't that defeat the purpose of using a bit vector if each bit is defined with 32 bits? Am I missing something?


Solution

  • No, it doesn't, because CFBit is not the storage type of e.g. a CFBitVector. It is only the value type used when reading or writing an individual bit, i.e. to express whether the bit at a specific position in a bit vector is 0 or 1. There is no built-in type in the compiler (clang) for storing individual bits (as there is in some compilers for embedded systems), so this kind of workaround is needed. Why exactly UInt32 was chosen for that purpose, I can't say.

    Again: CFBitVector internally is NOT a vector of CFBit instances.
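    For illustration, here is a minimal Swift sketch (assuming you call Core Foundation from Swift): the bit vector packs its bits internally, and CFBit only shows up as the value type when you query a single bit.

        import CoreFoundation

        // 8 bits of raw data; CFBitVector stores them packed, not as UInt32s.
        let bytes: [UInt8] = [0b1010_0000]
        let vector = CFBitVectorCreate(kCFAllocatorDefault, bytes, 8)

        // CFBit (a UInt32) only appears as the value type when reading one bit.
        let firstBit: CFBit = CFBitVectorGetBitAtIndex(vector, 0)   // 1
        let secondBit: CFBit = CFBitVectorGetBitAtIndex(vector, 1)  // 0
        print(firstBit, secondBit)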