Tags: c, binary, char, unsigned-char

Why is 00000000 - 00000001 = 11111111 for C's unsigned char data type?


I observed that when an unsigned char variable stores the value 0 (00000000₂) and gets decremented by 1 (00000001₂), its value turns into 255 (11111111₂), which is the highest value an unsigned char variable can hold.

My question is: why does 00000000₂ - 00000001₂ turn into 11111111₂? (I want to see the arithmetic behind it.)

The C code in which I observed this was the following:

#include <stdio.h>

int main(void){

  unsigned char c = 0;

  unsigned char d = c - 1;

  printf("%d\n%d\n", c, d);

  return 0;
}

When it runs, the following output is shown:

0
255

Solution

  • See here:

    Unsigned integer arithmetic is always performed modulo 2^n, where n is the number of bits in that particular integer type. E.g. for unsigned int, adding one to UINT_MAX gives 0, and subtracting one from 0 gives UINT_MAX.

    So in your example, since unsigned char is usually 8 bits, the subtraction is performed modulo 2^8 = 256, and you get 0 - 1 ≡ 2^8 - 1 = 255.