I was trying to convert an integer into its equivalent binary representation, using the following algorithm:
void decimal_to_binary(uint32_t number)
{
    char bitset[32];
    for (uint32_t i = 0; i < 32; ++i)
    {
        if ((number & (1 << i)) != 0)
        {
            bitset[31 - i] = '1';
        }
        else
        {
            bitset[31 - i] = '0';
        }
    }
    for (uint32_t i = 0; i < 32; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}
When I run this function against, say, '5' declared as uint32_t, I get the right result:
decimal_to_binary(5)
00000000000000000000000000000101
But when I declare the number as uint64_t and change the size of bitset to 64, the results are quite different. Here is the code that does the same thing:
void decimal_to_binary(uint64_t number)
{
    char bitset[64];
    for (uint64_t i = 0; i < 64; ++i)
    {
        if ((number & (1 << i)) != 0)
        {
            bitset[63 - i] = '1';
        }
        else
        {
            bitset[63 - i] = '0';
        }
    }
    for (uint64_t i = 0; i < 64; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}
decimal_to_binary(5)
0000000000000000000000000000010100000000000000000000000000000101
I see the same result as the uint32_t case, but duplicated side by side.
This got me wondering: how is uint64_t implemented in a language like C++?
I tried to get more details by looking at the stdint header file, but it didn't help me much.
Thanks in advance for your time!
The (1 << i) in your 64-bit code uses a plain 32-bit int for the 1 (the default int size), so once i reaches 32 the shift count equals or exceeds the width of the type, which is undefined behavior in C++.
In practice, on x86 the hardware masks the shift count to its low 5 bits, so 1 << i for i in 32..63 behaves like 1 << (i - 32), and the low 32 bits of the number get written into both halves of the array. That is exactly why you see the 32-bit pattern printed twice.
Use 1ULL for the constant (unsigned long long) so the shift is performed in 64 bits.