I've recently decided to write a program that prints the exact bit pattern of an instance of any type in C++. I'm starting with the primitive built-in types, and I've run into an issue printing the binary representation of a double.
Here's my code:
#include <iostream>
using namespace std;

void toBinary(ostream& o, char a)
{
    const size_t size = sizeof(a) * 8;
    for (int i = size - 1; i >= 0; --i) {
        bool b = a & (1UL << i);
        o << b;
    }
}
void toBinary(ostream& o, double d)
{
    const size_t size = sizeof(d);
    for (size_t i = 0; i < size; ++i) {
        char* c = reinterpret_cast<char*>(&d) + i;
        toBinary(o, *c);
    }
}
int main()
{
    int a = 5;
    cout << a << " as binary: ";
    toBinary(cout, static_cast<char>(a));
    cout << "\n";

    double d = 5;
    cout << d << " as double binary: ";
    toBinary(cout, d);
    cout << "\n";
}
My output is the following: 5 as binary: 00000101
5 as double binary: 0000000000000000000000000000000000000000000000000001010001000000
However, I know that 5 as a floating point representation is: 01000000 00010100 00000000 00000000 00000000 00000000 00000000 00000000
Maybe I'm not understanding something here, but doesn't the reinterpret_cast<char*>(&d) + i line I've written allow me to treat a double* as a char*, so that adding i advances the pointer by sizeof(char) instead of sizeof(double) (which is what I want here)? What am I doing wrong?
If you interpret a numeric type as a byte sequence, you are exposed to the machine's endianness: some platforms store the most significant byte first, others do the reverse.
Just read your output in 8-bit groups, from the last group towards the first, and you get exactly what you expect.
Note that the same problem also happens with integers: 5 (as a 32-bit int) is stored as
00000101-00000000-00000000-00000000
and not
00000000-00000000-00000000-00000101
as you would expect.