I have read many explanations of the fastest and minimum-width integer types, but I couldn't understand when to use these data types.
My understanding: on a 32-bit machine, uint_least16_t could be a typedef for an unsigned short.
1. uint_least16_t small = 38;
2. An unsigned short is 16 bits, so the value 38 will be stored using 16 bits, and the variable will take up 16 bits of memory.
3. The range for this data type will be 0 to (2^N)-1, where N = 16.
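For reference, the actual size and range can be checked with a small probe like this (using the standard limit macros from <stdint.h>; the output depends on the implementation):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint_least16_t small = 38;
    // sizeof reports the storage the type actually uses, and
    // UINT_LEAST16_MAX is the real upper bound on this implementation.
    printf("sizeof(uint_least16_t) = %zu bytes\n", sizeof small);
    printf("UINT_LEAST16_MAX = %ju\n", (uintmax_t)UINT_LEAST16_MAX);
    return 0;
}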
uint_fast16_t could be a typedef for an unsigned int.
1. uint_fast16_t fast = 38;
2. An unsigned int is 32 bits, so the value 38 will be stored using 32 bits, and the variable will take up 32 bits of memory.
3. What will be the range for this data type? uint_fast16_t is uint_fastN_t with N = 16, but the value can be stored in 32 bits, so is the range 0 to (2^16)-1 or 0 to (2^32)-1?
How can we make sure that it isn't overflowing? Since it is 32 bits, can we assign a value greater than 65535 to it?
If it is a signed integer, how is signedness maintained? For example, int_fast16_t fast = 32768; since the value falls within the signed int range, it'll be a positive value.
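The same kind of probe (again using only the standard <stdint.h> limit macros) would answer the range question for the fast types on a given implementation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    // UINT_FAST16_MAX reflects whichever underlying type was chosen:
    // it is (2^16)-1 if uint_fast16_t is 16 bits, (2^32)-1 if 32 bits.
    printf("sizeof(uint_fast16_t) = %zu bytes\n", sizeof(uint_fast16_t));
    printf("UINT_FAST16_MAX = %ju\n", (uintmax_t)UINT_FAST16_MAX);
    // For the signed variant, the bounds come from the chosen type too.
    printf("INT_FAST16_MIN = %jd, INT_FAST16_MAX = %jd\n",
           (intmax_t)INT_FAST16_MIN, (intmax_t)INT_FAST16_MAX);
    return 0;
}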
A uint_fast16_t is just the fastest unsigned data type that has at least 16 bits. On some machines it will be 16 bits, and on others it could be more. If you use it, you should be careful, because arithmetic operations that give results above 0xFFFF could have different results on different machines.
On some machines, yes, you will be able to store numbers larger than 0xFFFF in it, but you should not rely on that being true in your design because on other machines it won't be possible.
Generally, the uint_fast16_t type will be an alias for uint16_t, uint32_t, or uint64_t, and you should make sure the behavior of your code doesn't depend on which of those types is used.
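If some piece of code genuinely needs a particular width, one defensive option (just a sketch, assuming you can use C11's _Static_assert along with the standard UINT_FAST16_MAX macro) is to fail the build when the assumption doesn't hold:

#include <stdint.h>

// Refuse to compile if uint_fast16_t is wider than 16 bits, so code that
// relies on 16-bit wraparound cannot silently misbehave on this platform.
_Static_assert(UINT_FAST16_MAX == 0xFFFF,
               "this code assumes a 16-bit uint_fast16_t");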
I would say you should only use uint_fast16_t if you need to write code that is both fast and cross-platform. Most people should stick to uint16_t, uint32_t, and uint64_t so that there are fewer potential issues to worry about when porting code to another platform.
Here is an example of how you might get into trouble:
#include <stdbool.h>
#include <stdint.h>

bool bad_foo(uint_fast16_t a, uint_fast16_t b)
{
    // If uint_fast16_t is 16 bits wide, the sum wraps around at 0x10000;
    // if it is wider, it does not, so the comparison below is not portable.
    uint_fast16_t sum = a + b;
    return sum > 0x8000;
}
If you call the function above with a as 0x8000 and b as 0x8000, then on some machines sum will be 0 and on others it will be 0x10000, so the function could return true or false. Now, if you can prove that a and b will never sum to a number larger than 0xFFFF, or if you can prove that the result of bad_foo is ignored in those cases, then this code would be OK.
A safer implementation of the same code, which (I think) should behave the same way on all machines, would be:
bool good_foo(uint_fast16_t a, uint_fast16_t b)
{
    uint_fast16_t sum = a + b;
    // Masking with 0xFFFF discards any bits above the low 16, emulating
    // 16-bit wraparound even when uint_fast16_t is wider than 16 bits.
    return (sum & 0xFFFF) > 0x8000;
}
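To see the difference concretely, a minimal test harness (this main is just illustrative) could call both versions with the problem inputs:

#include <stdio.h>

int main(void)
{
    // good_foo masks to 16 bits, so (0x8000 + 0x8000) & 0xFFFF is 0 and
    // this prints 0 on every machine.
    printf("good_foo: %d\n", good_foo(0x8000, 0x8000));
    // bad_foo may print 0 or 1 depending on the width of uint_fast16_t,
    // which is exactly the portability trap described above.
    printf("bad_foo: %d\n", bad_foo(0x8000, 0x8000));
    return 0;
}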