Tags: c++, ieee-754

How can I convert a hexadecimal value into the IEEE 754 binary format?


I'm trying to understand how to convert a float (e.g. 5.1) into its binary form in C++ using the IEEE 754 standard.

From my understanding, 5.1 in IEEE 754 should be represented as 0 10000000001 0100011001100110011001100110011001100110011001100110.

I have tried to obtain that format using the following function:

#include <bitset>
#include <string>

std::string hexToBinaryIEEE754(const std::string& hexStr) {
    // Convert hex string to unsigned long long
    unsigned long long hexValue = std::stoull(hexStr, nullptr, 16);

    // Convert hex value to IEEE 754 binary representation
    std::bitset<64> bits(hexValue);
    return bits.to_string();
}

Unfortunately the value returned is 0100000000010100011001100110011001100000000000000000000000000000

Any idea how to return the binary format that could be used in CVC5 in a function such as:

Term v = slv->mkBitVector(32, <value>);

EDIT

I forgot to mention that the string passed as a parameter was the hex representation of the float/double number I wanted to convert.

e.g. for 5.1 the parameter was 4014666660000000


Solution

  • Assuming the usual common architecture: two's complement, big or little endian, and 8 bits to a byte. If you can also rely on the machine storing floats and doubles in IEEE 754 format, then you can likely get away with just enumerating the bits at the address of the value.

    Something like this:

    #include <arpa/inet.h>   // htons(); on Windows use <winsock2.h>
    #include <cstdint>
    #include <iostream>
    #include <string>
    
    bool isBigEndian() {
        return htons(1234) == 1234;
    }
    
    template<typename T>
    std::string numberToBitString(T value) {
        std::string s;
        const uint8_t len = sizeof(value);
        uint8_t tmp[len];
        uint8_t* ptr = reinterpret_cast<uint8_t*>(&value);
    
        // fix up byte order on little-endian machines (Intel, Apple Silicon, etc.)
        if (!isBigEndian()) {
            for (size_t i = 0; i < len; i++) {
                tmp[sizeof(value) - 1 - i] = ptr[i];
            }
            ptr = tmp;
        }
    
        // enumerate the bits from MSB to LSB on each byte
        for (size_t i = 0; i < len; i++) {
            for (size_t j = 0; j < 8; j++) { // 8 bits in a byte
                unsigned int mask = 0x80 >> j;
                s += (ptr[i] & mask) ? "1" : "0";
            }
        }
        return s;
    }
    
    int main() {
        const double value = 5.1;
        std::string s = numberToBitString(value);
        std::cout << value << " = " << s << "\n";
        return 0;
    }
    
    

    The above will print out:

    5.1 = 0100000000010100011001100110011001100110011001100110011001100110
    

    Change this line in main:

    const double value = 5.1;
    

    To be a float declaration:

    const float value = 5.1f;
    

    And it will print only 32 bits:

    5.1 = 01000000101000110011001100110011
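    If C++20 is available, std::bit_cast offers an alternative sketch that sidesteps the endianness fixup entirely: it reinterprets the value's object representation as an unsigned integer in value order, which std::bitset then prints MSB-first on any platform.

    ```cpp
    #include <bit>       // std::bit_cast (C++20)
    #include <bitset>
    #include <cstdint>
    #include <iostream>

    int main() {
        // Reinterpret the bytes of the double as a 64-bit integer;
        // std::bitset prints them MSB-first regardless of endianness.
        const double d = 5.1;
        std::cout << std::bitset<64>(std::bit_cast<std::uint64_t>(d)) << "\n";

        // Same for a 32-bit float:
        const float f = 5.1f;
        std::cout << std::bitset<32>(std::bit_cast<std::uint32_t>(f)) << "\n";
    }
    ```

    This prints the same two bit strings as the answer above, without any byte swapping.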