Tags: c++, image-processing, file-handling, demosaicing, raw-file

Reading little endian file 8 bits at a time and performing binary operations on it


I want to read a file 8 bits at a time; the data is stored in little-endian format. I think my CPU is also little endian, so I need not worry about endianness, right?

What I am reading are numbers: intensity values of an RGGB CFA from a RAW12 file.

Here is my code -

  uint8_t first8bits, second8bits, third8bits;
  file.read((char*)&first8bits, sizeof(uint8_t));
  file.read((char*)&second8bits, sizeof(uint8_t));
  file.read((char*)&third8bits, sizeof(uint8_t));
  Red_channel   = (first8bits << 4) | ((second8bits & 0xF0) >> 4);
  Green_channel = ((second8bits & 0x0F) << 8) | third8bits;  // low nibble holds bits 11:8 of the second sample

I have seen others read the bytes into a char array and then convert it to a number; how do I do that? Since the machine I am testing the code on is little endian, I think I don't need to byte-swap, but what if someone else runs this code on a big-endian machine? How do I find out at runtime whether the machine is little endian or big endian?

Any help would be much appreciated.


Solution

  • If you're on a POSIX platform, you likely have <endian.h>.

    Which, along with the byte-order conversion functions documented at http://man7.org/linux/man-pages/man3/htole32.3.html, offers the macro definitions __LITTLE_ENDIAN, __BIG_ENDIAN, and __BYTE_ORDER (likely included from glib/types.h or bits/types.h).

    So you can use preprocessor

    #if __BYTE_ORDER == __LITTLE_ENDIAN
    

    to define templates or types which result in one operation or the other if required. Endianness means that you must watch: a) the order of bytes, and b) the order of bit fields in a struct. Big endian matches "network" byte order, but for bit fields, on a little-endian platform the first defined field is the least significant, while on a big-endian platform it is the most significant. Technically that order is not defined by the standard, but it has been "canonized" by the kernel code of Linux and other OSes.

    If you're on Windows, you're on a little-endian platform. On ARM and MIPS things are more complex, as the CPU can actually switch its endianness to a degree. That's the reason those library functions exist.

    Bit-shift operations let you ignore host byte order entirely, and can be used IF you keep in mind that a bit-shift operation automatically promotes its operand to int.