I'm having trouble getting the following code to work correctly. Using an online IEEE-754 converter, I wrote out (by hand) to testData.txt the bit string that should represent the floating-point number 75.5; the cout.write does show the bit string I expect. However, when I try to coerce the char* into a float using a union (which I have seen suggested as a typical way to accomplish this conversion), the resulting float is not the number I expect.
#include <climits>
#include <iostream>
#include <fstream>
#include <bitset>

int main( int, char** )
{
    std::ifstream inputFile( "testData.txt", std::ios_base::in | std::ios_base::binary );
    if( !inputFile ) std::cout << "Failed to open input file!" << std::endl;

    char buffer[ CHAR_BIT * sizeof(float) ];
    inputFile.read( buffer, CHAR_BIT * sizeof(float) );
    std::cout << "cout.write of input from file = ";
    std::cout.write( buffer, CHAR_BIT * sizeof(float) );
    std::cout << std::endl;

    union { float f; char* c; } fToCharStarUnion;
    fToCharStarUnion.c = buffer;
    std::bitset< sizeof(float) * CHAR_BIT > bits( std::string( fToCharStarUnion.c ) );
    std::cout << "fToCharStarUnion.f = " << fToCharStarUnion.f << " bits = " << bits << std::endl;

    inputFile.close();
    return 0;
}
The output from running this is:
cout.write of input from file = 01000010100101110000000000000000
fToCharStarUnion.f = -1.61821e+38 bits = 01000010100101110000000000000000
Is there something fundamental I'm missing that would make this work correctly?
You are translating the ASCII into bits using the constructor of bitset. That causes your decoded bits to end up in the bitset object rather than in the union. To get raw bits out of a bitset, use the to_ulong method:
#include <climits>
#include <iostream>
#include <fstream>
#include <bitset>

int main( int, char** )
{
    std::ifstream inputFile( "testData.txt",
                             std::ios_base::in | std::ios_base::binary );
    if( !inputFile ) std::cout << "Failed to open input file!" << std::endl;

    char buffer[ CHAR_BIT * sizeof(float) ];
    inputFile.read( buffer, CHAR_BIT * sizeof(float) );
    std::cout << "cout.write of input from file = ";
    std::cout.write( buffer, CHAR_BIT * sizeof(float) );
    std::cout << std::endl;

    union {
        float f[ sizeof(unsigned long) / sizeof(float) ];
        unsigned long l;
    } funion;

    // Pass the length explicitly: buffer is not null-terminated, so
    // std::string( buffer ) alone would read past the end of the array.
    funion.l = std::bitset<32>(
        std::string( buffer, CHAR_BIT * sizeof(float) ) ).to_ulong();
    std::cout << "funion.f = " << funion.f[0]
              << " bits = " << std::hex << funion.l << std::endl;

    inputFile.close();
    return 0;
}
This generally assumes that your FPU operates with the same endianness as the integer part of your CPU, and that sizeof(long) >= sizeof(float). Those guarantees are weaker for double, and the trick is harder to make portable for 32-bit machines with 64-bit FPUs.
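For the double case, here is a minimal sketch, assuming a C++11 compiler (for std::bitset::to_ullong and static_assert) and a 64-bit IEEE-754 double; the helper name bitsToDouble is mine, not part of the code above. std::memcpy sidesteps the union entirely and is the standard-blessed way to reinterpret the bits:

#include <bitset>
#include <cstdint>
#include <cstring>
#include <string>

// Decode 64 ASCII '0'/'1' characters into a double.
double bitsToDouble( const char* buf )
{
    static_assert( sizeof(double) == sizeof(std::uint64_t),
                   "double must be 64 bits" );
    // to_ullong (C++11) gives us all 64 bits, unlike to_ulong.
    std::uint64_t u = std::bitset<64>( std::string( buf, 64 ) ).to_ullong();
    double d;
    std::memcpy( &d, &u, sizeof d );  // reinterpret the bits, no aliasing issues
    return d;
}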
Edit: now that I've made the members of the union equal sized, I see that this code is sensitive to endianness: the decoded float will be in the last element of the array on a big-endian machine and in the first element on a little-endian one. :v( Maybe the best approach is to give the integer member of the union exactly as many bits as the FP member and perform a narrowing cast after getting to_ulong. It's difficult to maintain the standard of portability you seemed to be shooting for in the original code.
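A minimal sketch of that idea, assuming <cstdint> provides std::uint32_t and that float is 32 bits; the union and function names are mine. Note that reading the inactive member of a union is technically undefined behavior in C++, though most compilers support it, and std::memcpy (as above) is the fully portable alternative:

#include <bitset>
#include <climits>
#include <cstdint>
#include <string>

// Integer member sized exactly like the FP member, so there is no
// array and no endianness-dependent element index.
union FloatBits {
    float         f;
    std::uint32_t u;
};

float decodeFloat( const char* buffer )
{
    static_assert( sizeof(float) == sizeof(std::uint32_t),
                   "float must be 32 bits" );
    FloatBits fb;
    // to_ulong may return a 64-bit unsigned long; narrow it to 32 bits.
    fb.u = static_cast<std::uint32_t>(
        std::bitset< CHAR_BIT * sizeof(float) >(
            std::string( buffer, CHAR_BIT * sizeof(float) ) ).to_ulong() );
    return fb.f;
}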