I need to know the correct way to work out the minimum number of bits required to store an unsigned int. Say I have 403: its binary representation as an unsigned int is 00000000000000000000000110010011, which adds up to 32 bits. Now, I know that an unsigned integer takes 32 bits to store, but why do we have all those zeros in front when the number can be expressed in only 9 bits, 110010011? Moreover, how come an unsigned int takes 32 bits to store while decimal takes only 8 bits? Please explain in detail. Thanks.
This has nothing to do with how many bits are needed, and everything to do with how many bits your computer is wired for (32). Though 9 bits are enough, your computer has data paths that are 32 bits wide - it's physically wired to move data efficiently in chunks of 32, 64, 128, etc. And your compiler presumably chose 32 bits for unsigned int on your platform.
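Here is a minimal sketch in C that makes the distinction concrete: the value 403 is the one from the question, and the bit-counting loop is just one straightforward way to find how many bits the value itself needs, compared with how many bits the type occupies on a typical platform.

```c
#include <limits.h>
#include <stdio.h>

/* Count how many bits are actually needed to represent a value:
   keep shifting right until nothing is left. */
static unsigned min_bits(unsigned n)
{
    unsigned bits = 0;
    while (n) {
        n >>= 1;
        bits++;
    }
    return bits ? bits : 1;   /* zero still needs one bit to write down */
}

int main(void)
{
    unsigned value = 403;

    printf("bits needed for %u : %u\n", value, min_bits(value));        /* 9 */
    printf("bits the type uses : %zu\n", sizeof(unsigned) * CHAR_BIT);  /* typically 32 */
    return 0;
}
```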
The decimal representation of "403" is three digits, and representing each digit in binary requires at least four bits (2^4 is 16, so you have 6 spare codes beyond the 10 digits); so the minimum "decimal" representation of "403" requires 12 bits, not eight. This encoding is known as binary-coded decimal (BCD).
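As a rough illustration, assuming packed BCD (one decimal digit per 4-bit nibble), 403 packs into three nibbles, i.e. 12 bits:

```c
#include <stdio.h>

/* Pack the decimal digits of a number into 4-bit groups (packed BCD).
   Each digit 0-9 fits in one nibble, so "403" needs 3 * 4 = 12 bits. */
static unsigned to_packed_bcd(unsigned n)
{
    unsigned bcd = 0, shift = 0;
    while (n) {
        bcd |= (n % 10) << shift;   /* one decimal digit per nibble */
        n /= 10;
        shift += 4;
    }
    return bcd;
}

int main(void)
{
    printf("403 as packed BCD: 0x%X\n", to_packed_bcd(403));  /* prints 0x403 */
    return 0;
}
```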
However, to represent a normal character (including the decimal digits as well as letters, punctuation, etc.) it's common to use 8 bits, which allows up to 2^8 or 256 possible characters. Represented this way, the string "403" takes 3 x 8, or 24, binary bits.
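A small sketch of that character-based view, assuming the usual ASCII encoding where each character occupies one 8-bit byte:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *text = "403";           /* the number stored as characters */
    size_t bits = strlen(text) * 8;     /* 8 bits per character */

    printf("\"%s\" stored as text uses %zu bits:\n", text, bits);  /* 24 */
    for (const char *p = text; *p; p++)
        printf("  '%c' -> ASCII code %d\n", *p, *p);  /* '4'=52, '0'=48, '3'=51 */
    return 0;
}
```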