Why has something as fundamental as the number of bits in a byte been kept implementation-defined by the C standard? Are there examples where this could be useful?
From C99, 3.6:
3.6 byte
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment
NOTE 1 It is possible to express the address of each individual byte of an object uniquely.
NOTE 2 A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
EDIT: I was asking something more fundamental: why the C standard allows flexibility in the number of bits in a byte. I am not asking about sizeof(char); more specifically, what is the benefit of having CHAR_BIT != 8? If the question still seems like a duplicate, please down-vote it and I will close the question.
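For reference, the byte width a given implementation uses is exposed as CHAR_BIT in <limits.h> (the standard guarantees it is at least 8), and sizeof is always measured in bytes of that width, so sizeof(char) is 1 regardless. A minimal sketch, assuming an ordinary hosted C99 environment:

    #include <limits.h>  /* CHAR_BIT: number of bits in a byte, at least 8 */
    #include <stdio.h>

    int main(void)
    {
        /* sizeof counts bytes, so a type's width in bits is sizeof * CHAR_BIT */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(char) = %zu (always 1, by definition)\n", sizeof(char));
        printf("bits in int  = %zu\n", sizeof(int) * CHAR_BIT);
        printf("bits in long = %zu\n", sizeof(long) * CHAR_BIT);
        return 0;
    }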
Many older machines and current-day DSPs have larger bytes (as in: they can only address memory in multiples of, say, 16 bits). Surely you don't want to leave out an important segment of the embedded world.
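Code that genuinely requires 8-bit bytes (for example, code that serializes octets for a file format or network protocol) can make that assumption explicit instead of silently misbehaving on such a target. A sketch of one common approach, assuming C99 (C11 offers _Static_assert for the same purpose):

    #include <limits.h>

    /* Refuse to compile on targets where a byte is not an octet.
       (The negative array size makes the typedef ill-formed when
       CHAR_BIT != 8; C11's _Static_assert is the cleaner equivalent.) */
    typedef char assert_octet_byte[(CHAR_BIT == 8) ? 1 : -1];

    /* Portable alternative: derive bit widths from CHAR_BIT instead of
       hard-coding 8, so the same code also builds on a 16-bit-byte DSP. */
    #define BITS_IN(type) (sizeof(type) * CHAR_BIT)

The macro name BITS_IN is just an illustrative choice; the point is that writing widths in terms of CHAR_BIT keeps the code correct whatever the byte size happens to be.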