c++ · floating-point · c++14 · ieee-754

Floating point inverse binary representation


I'm confused with the following pieces of code:

#include <bitset>
#include <iostream>

int main()
{
    std::cout << std::bitset<32>(10.0f) << std::endl;
    std::cout << std::bitset<32>(-10.0f) << std::endl;

    float x = 10.0f;
    std::bitset<32> bsx(x);
    float y = -10.0f;
    std::bitset<32> bsy(y);
    std::cout << bsx << std::endl;
    std::cout << bsy << std::endl;
}

The second one is just a truncated version of the first:

#include <bitset>
#include <iostream>

int main()
{
    std::cout << std::bitset<32>(10.0f) << std::endl;
    std::cout << std::bitset<32>(-10.0f) << std::endl;
}

I'm getting the following outputs. From the first program:

00000000000000000000000000001010
00000000000000000000000000000000
00000000000000000000000000001010
11111111111111111111111111110110

From the second:

00000000000000000000000000001010
00000000000000010010000001101000

The system is Clang 4.0.0 on macOS Sierra (10.12.3); as far as I know, floats are 32 bits wide here.

(Compiled with g++ -pedantic -std=c++14.)

The only representation I'm aware of for floating-point numbers is IEEE-754, which specifies a separate sign bit. The last output of the first program looks like two's complement instead...

And the other outputs are just totally confusing: all zeros in one case, and seemingly random output from the same code merely truncated.

What am I misunderstanding? Am I hitting undefined behaviour somewhere in the code? Am I misreading the syntax? Or can floats for some reason not be represented as bitsets?


Solution

  • This code does not display the binary representation of floating-point numbers, no, because there is no std::bitset constructor that takes a floating-point number.
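  • The std::bitset<32> constructor that gets selected here takes an unsigned long long, so the float argument is implicitly converted to an integer first. 10.0f converts to 10, which is the ...1010 you see. Converting -10.0f is undefined behaviour, because the truncated value -10 cannot be represented in an unsigned integer type; that is consistent with the all-zero, two's-complement-looking, and seemingly random outputs you get for the negative value.

  • If you want the IEEE-754 bit pattern of the float itself, copy its object representation into a 32-bit unsigned integer and construct the bitset from that. Below is a minimal sketch (the helper name float_bits is just illustrative; std::bit_cast would do the same in C++20, but memcpy works in C++14):

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>

// Copies the object representation of a 32-bit float into a uint32_t,
// so the bitset shows the actual IEEE-754 bits.
std::bitset<32> float_bits(float f)
{
    static_assert(sizeof(float) == sizeof(std::uint32_t),
                  "float is expected to be 32 bits wide");
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);  // well-defined way to reinterpret the bytes
    return std::bitset<32>(u);
}

int main()
{
    std::cout << float_bits(10.0f)  << '\n';   // 01000001001000000000000000000000
    std::cout << float_bits(-10.0f) << '\n';   // 11000001001000000000000000000000
}

With this, the two outputs differ only in the leading sign bit, exactly as IEEE-754 prescribes.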