How does a C program determine, at RUN time (not compile time), whether it's running on a Little-Endian or a Big-Endian CPU?
The reason it must be a run-time check, not a compile-time one, is that I'm building the program in Mac OS X's Universal Binary format, on a Mac with an Intel CPU, and the program is expected to run on both Intel and PowerPC CPUs. That is, through the Universal Binary format on the Mac, I want to build the program on an Intel CPU and run it on a PPC CPU.
The logic in my program that needs the CPU check is the host-to-network byte-order conversion function for 64-bit integers. Right now I have it blindly swap the byte order, which works fine on an Intel CPU but breaks on PPC. Here's the C function:
unsigned long long
hton64b(const unsigned long long h64bits) {
    // Low-order 32 bits in front, followed by high-order 32 bits.
    // NOTE: this always swaps the two halves, which is only correct on a
    // little-endian host where htonl() also reverses the bytes within each
    // half; on big-endian PPC htonl() is a no-op and the result is wrong.
    return ((unsigned long long)htonl((unsigned long)(h64bits & 0xFFFFFFFF)) << 32)
         | htonl((unsigned long)((h64bits >> 32) & 0xFFFFFFFF));
} // hton64b()
Is there a better, cross-platform way of doing this?
Thanks
#ifdef LITTLE_ENDIAN do it the little-endian way #else do it the big-endian way #endif. (Be careful which macro you test: BSD-style headers define both LITTLE_ENDIAN and BIG_ENDIAN as constants for comparing against BYTE_ORDER, so a bare #ifdef LITTLE_ENDIAN can be true even on a big-endian machine. On OS X, the compiler-defined __LITTLE_ENDIAN__ / __BIG_ENDIAN__ macros are the reliable test.)
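For example, here is a sketch of hton64b written that way (assuming Apple's GCC, which defines __BIG_ENDIAN__ when compiling the PPC slice and __LITTLE_ENDIAN__ for the Intel slice):

#include <arpa/inet.h>  /* htonl() */

unsigned long long
hton64b(const unsigned long long h64bits) {
#if defined(__BIG_ENDIAN__)
    // Host order is already network (big-endian) order: nothing to do.
    return h64bits;
#else
    // Little-endian host: swap the 32-bit halves and let htonl()
    // reverse the bytes within each half.
    return ((unsigned long long)htonl((unsigned long)(h64bits & 0xFFFFFFFF)) << 32)
         | htonl((unsigned long)(h64bits >> 32));
#endif
}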
This is a compile-time check, but since the source for fat binaries gets compiled separately for each architecture, that's not a problem: each slice of the Universal Binary gets the branch that's right for its CPU.
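If you really do want the run-time check the title asks for, the usual trick is to look at how a known integer is laid out in memory. A minimal sketch:

#include <stdint.h>

// Returns 1 on a little-endian host, 0 on a big-endian one.
static int is_little_endian(void) {
    const uint32_t probe = 1;
    // A little-endian CPU stores the least significant byte first.
    return *(const uint8_t *)&probe == 1;
}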
Another approach is to simply unpack the individual bytes in a way that's not sensitive to the host endianness - you only need to know the byte order of the source data:
#include <stdint.h>

uint64_t unpack64(const uint8_t *src)
{
    // Assemble eight network-order (big-endian) bytes into a host
    // integer; the shifts work the same on any host byte order.
    uint64_t val;
    val  = (uint64_t)src[0] << 56;
    val |= (uint64_t)src[1] << 48;
    val |= (uint64_t)src[2] << 40;
    val |= (uint64_t)src[3] << 32;
    val |= (uint64_t)src[4] << 24;
    val |= (uint64_t)src[5] << 16;
    val |= (uint64_t)src[6] << 8;
    val |= (uint64_t)src[7];
    return val;
}
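The matching writer places the most significant byte first, which makes a separate hton64b unnecessary. A sketch, with pack64 as a hypothetical name mirroring unpack64:

#include <stdint.h>

void pack64(uint8_t *dst, uint64_t val)
{
    // Emit the bytes in network (big-endian) order, regardless of
    // the host's own byte order.
    dst[0] = (uint8_t)(val >> 56);
    dst[1] = (uint8_t)(val >> 48);
    dst[2] = (uint8_t)(val >> 40);
    dst[3] = (uint8_t)(val >> 32);
    dst[4] = (uint8_t)(val >> 24);
    dst[5] = (uint8_t)(val >> 16);
    dst[6] = (uint8_t)(val >> 8);
    dst[7] = (uint8_t)val;
}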