Tags: c++, comparison, byte, bit

Directly Compare Two 8-Bit (1-Byte) Values in C++


I'm wondering, is there a way to compare 8-bit/1-byte values similarly to the way we can compare ints?

For example:

// Start with these as false
bool int_comp = false;
bool byte_comp = false;

// Set the ints
int a_int = 128;
int b_int = 128;

// Set the bytes
char a_byte = 0xC0; // 11000000
char b_byte = 0xC0; // 11000000

// This comparison works
if (a_int == b_int)
   int_comp = true;

// This comparison does not work, however
if (a_byte == b_byte)
   byte_comp = true;

In this case the byte comparison does not work. Is there a way to compare bytes similar to how we compare integers?


Edit: As it turns out, this does work; thank you for the replies. I was doing something extra in my code, not captured here, that was causing the issue. Essentially I was using a static_cast and thought that it would work, but it did not.

Here is what I was doing:

// Start with these as false
bool int_comp = false;
bool byte_comp = false;

// Set the ints
int a_int = 128;
int b_int = 128;

// Set the bytes
int8_t a_byte = 0xC0; // 11000000
char b_byte = 0xC0; // 11000000

// This comparison works
if (a_int == b_int)
   int_comp = true;

// This comparison does not work, however
//
// Turns out that using static_cast here did not do what I thought it would do,
// even though printing out the bits using std::bitset showed that the bits
// of the casted value were the same.
//
if (a_byte == static_cast<int8_t>(b_byte))
   byte_comp = true;

Solution

  • I'm wondering, is there a way to compare 8 bit values similarly to the way that we could compare an int?

    Yes, there is. Example:

    std::uint8_t a_octet = 0xC0; // 11000000
    std::uint8_t b_octet = 0xC0; // 11000000
    if (a_octet == b_octet)
    

    ... is there a way to compare 1 byte values ...

    Yes, there is. Example:

    unsigned char a_byte = 0xC0; // 11000000
    unsigned char b_byte = 0xC0; // 11000000
    if (a_byte == b_byte)
    

    char a_byte = 0xC0; // 11000000
    

This doesn't behave as you might expect on systems where a byte is 8 bits and char is a signed type, because in that case 0xC0 (192) is outside the range of representable values; the largest representable value is 0x7F (127). The initializer is converted, so the stored value is negative rather than 192.

    Otherwise your example works.