So I have the following functions:
void SomeClass::Read(__in uint32_t& Res)
{
    ReadBinary<uint32_t>(Res);
}

template < typename T >
void SomeClass::ReadBinary(T& Res)
{
    size_t sBytesToRead = sizeof(T);
    Res = 0;
    std::vector<char> vcBuffer(sBytesToRead);
    m_Fstream.read(&vcBuffer[0], sBytesToRead);
    // little endian
    if (m_Endian == LITTLE_ENDIAN)
        for (int n = sBytesToRead - 1; n >= 0; n--)
            Res = (Res << 8) + vcBuffer[n];
    // big endian
    else
        for (unsigned n = 0; n < sBytesToRead; n++)
            Res = (Res << 8) + vcBuffer[n];
}

void SomeClass::DetectEndian()
{
    int num = 1;
    if (*(char *)&num == 1)
        m_Endian = LITTLE_ENDIAN;
    else
        m_Endian = BIG_ENDIAN;
}
These functions are designed to detect the system's endianness and read binary integers from a file.
For some reason I don't get the expected values. How do I know? I've written a simple Python script:
import struct

mode = 'rb'
with open(filename, mode) as f:
    print struct.unpack('i', f.read(4))[0]
    print struct.unpack('i', f.read(4))[0]
It seems that when the integer is small enough to fit in a single byte, both programs print the same value. However, once the integer spans multiple bytes I get different values.
This leads me to think that the problem is in these lines:
// little endian
if (m_Endian == LITTLE_ENDIAN)
    for (int n = sBytesToRead - 1; n >= 0; n--)
        Res = (Res << 8) + vcBuffer[n];
Any ideas?
If char is signed in your compiler (and it most likely is), then a byte with its high bit set comes back as a negative value, so you'll subtract a value instead of adding it, as the snippet below demonstrates.
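Here is a minimal, self-contained sketch of the effect (the byte values are made up for illustration, not taken from your file):

#include <cstdint>
#include <iostream>

int main()
{
    // The value 500 (0x000001F4) as four little-endian bytes on disk.
    char vcBuffer[4] = { char(0xF4), char(0x01), char(0x00), char(0x00) };

    uint32_t Res = 0;
    for (int n = 3; n >= 0; n--)
        Res = (Res << 8) + vcBuffer[n]; // vcBuffer[0] is -12 when char is signed

    // Prints 244, not 500: the last step computes (1 << 8) + (-12)
    // instead of (1 << 8) + 0xF4.
    std::cout << Res << '\n';
}

Note that a small value such as 100 still reads back correctly, because no byte has its high bit set, which matches what you observed.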
Use unsigned char or uint8_t instead of char.
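Applied to your function, a sketch of the fix (assuming the rest of SomeClass, i.e. m_Fstream and m_Endian, stays as posted) would be:

template < typename T >
void SomeClass::ReadBinary(T& Res)
{
    size_t sBytesToRead = sizeof(T);
    Res = 0;
    // unsigned char: every byte contributes a value in 0..255, never negative
    std::vector<unsigned char> vcBuffer(sBytesToRead);
    // istream::read takes a char*, so cast the buffer pointer
    m_Fstream.read(reinterpret_cast<char*>(&vcBuffer[0]), sBytesToRead);
    if (m_Endian == LITTLE_ENDIAN)
        for (int n = sBytesToRead - 1; n >= 0; n--)
            Res = (Res << 8) + vcBuffer[n];
    else
        for (unsigned n = 0; n < sBytesToRead; n++)
            Res = (Res << 8) + vcBuffer[n];
}

Alternatively, keep the char buffer and cast each byte at the point of use: Res = (Res << 8) + static_cast<unsigned char>(vcBuffer[n]);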