When I try to convert a float to an unsigned char array and then back to a float, I'm not getting the original value. Even when I look at the bits of the byte array, the bit pattern is different from what the original float should have.
Here is an example that I made in a Qt Console application project.
Edit: My original code below contains some mistakes that were pointed out in the comments, but I wanted to make my intent clear so that it doesn't confuse future visitors to this question.
I was basically trying to shift the bytes and OR them back into a single value, but I forgot the shifting part. Plus, I now realize you can't do bitwise operations on floats, and that approach is kind of hacky anyway. I also thought the std::bitset constructor accepted more types in C++11, but that isn't true, so my float was being implicitly converted to an integer. Finally, static_cast was the wrong tool for getting the bits into my new float; it converts the value instead of reinterpreting the bytes.
#include <QCoreApplication>
#include <iostream>
#include <bitset>
#include <cstring> // for memcpy
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    const float f = 3.2;
    unsigned char b[sizeof(float)];
    memcpy(b, &f, sizeof(f));

    const float newF = static_cast<float>(b[0] | b[1] | b[2] | b[3]);

    std::cout << "Original float: " << f << std::endl;
    // I expect "newF" to have the same value as "f"
    std::cout << "New float: " << newF << std::endl;
    std::cout << "bitset of original float: " << std::bitset<32>(f) << std::endl;
    std::cout << "bitset of combined new float: " << std::bitset<32>(newF) << std::endl;
    std::cout << "bitset of each float bit: " << std::endl;
    std::cout << " b[0]: " << std::bitset<8>(b[0]) << std::endl;
    std::cout << " b[1]: " << std::bitset<8>(b[1]) << std::endl;
    std::cout << " b[2]: " << std::bitset<8>(b[2]) << std::endl;
    std::cout << " b[3]: " << std::bitset<8>(b[3]) << std::endl;

    return a.exec();
}
Here is the output from the code above:
Original float: 3.2
New float: 205
bitset of original float: 00000000000000000000000000000011
bitset of combined new float: 00000000000000000000000011001101
bitset of each float bit:
 b[0]: 11001101
 b[1]: 11001100
 b[2]: 01001100
 b[3]: 01000000
A previous answer and a comment, both since deleted (not sure why), led me to use memcpy, which works:
const float f = 3.2;
unsigned char b[sizeof(float)];
memcpy(b, &f, sizeof(f));        // copy the float's bytes into the array
float newF = 0.0;
memcpy(&newF, b, sizeof(float)); // copy them back; newF now equals f