Tags: c, binary, decimal, bitwise-operators, and-operator

Can't understand showbits() function in bitwise operators in C


/* Print binary equivalent of characters using showbits( ) function */

#include <stdio.h>

void showbits(unsigned char);
    
int main() {
    unsigned char num;
    
    for (num = 0; num <= 5; num++) {
        printf("\nDecimal %d is same as binary ", num);
        showbits(num);
    }

    return 0;
}

void showbits(unsigned char n) {
    int i;
    unsigned char j, k, andmask;
    
    for (i = 7; i >= 0; i--) {
        j = i;
        andmask = 1 << j;
        k = n & andmask;
        k == 0 ? printf("0") : printf("1");
    }
}

Sample numbers assigned to num: 0, 1, 2, 3, 4, ...

Can someone explain in detail what is going on in k = n & andmask? How can n, which is a number such as 2, be an operand of the & operator together with andmask, e.g. 10000000, when 2 is a one-digit value and 10000000 is a multi-digit value?

Also, why is char used for n and not int?


Solution

  • Let's walk through it.

    Assume n is 2. The binary representation of 2 is 00000010.

    The first time through the loop j is equal to 7. The statement

    andmask = 1 << j;
    

    takes the binary representation of 1, which is 00000001, and shifts it left seven places, giving us 10000000, which is assigned to andmask.
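
    If you want to see those masks for yourself, a small standalone program (not part of the original code) could print each value of 1 << j:

    #include <stdio.h>

    int main(void) {
        int j;

        /* Print the mask produced by 1 << j, for j = 7 down to 0. */
        for (j = 7; j >= 0; j--) {
            unsigned char mask = 1 << j;
            printf("j = %d  mask = %3d\n", j, mask);
        }

        return 0;
    }

    It prints 128, 64, 32, 16, 8, 4, 2, 1: the single set bit moving one place to the right each time.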

    The statement

    k = n & andmask;
    

    performs a bitwise AND operation on n and andmask:

      00000010
    & 10000000
      --------
      00000000
    

    and assigns the result to k. Then if k is 0 it prints a "0", otherwise it prints a "1".

    So, each time through the loop, it's basically doing

    j   andmask          n     result    output
    -  --------   --------   --------    ------
    7  10000000 & 00000010   00000000       "0"
    6  01000000 & 00000010   00000000       "0"
    5  00100000 & 00000010   00000000       "0"
    4  00010000 & 00000010   00000000       "0"
    3  00001000 & 00000010   00000000       "0"
    2  00000100 & 00000010   00000000       "0"
    1  00000010 & 00000010   00000010       "1"    
    0  00000001 & 00000010   00000000       "0"
    

    Thus, the output is "00000010".

    So the showbits function is printing out the binary representation of its input value. They're using unsigned char instead of int to keep the output easy to read (8 bits instead of 16 or 32).
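
    If you did want to print every bit of an int, a variant along these lines would work (a sketch; showbits_int is a made-up name here, and it uses sizeof together with CHAR_BIT to find the width):

    #include <limits.h>
    #include <stdio.h>

    /* Print every bit of an unsigned int, most significant bit first. */
    void showbits_int(unsigned int n) {
        int bits = sizeof(unsigned int) * CHAR_BIT;   /* typically 32 */
        int i;

        for (i = bits - 1; i >= 0; i--) {
            unsigned int mask = 1u << i;
            printf("%c", (n & mask) ? '1' : '0');
        }
    }

    Calling showbits_int(2) on a system with 32-bit ints prints 30 zeros followed by 10.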

    Some issues with this code (a revised version combining both fixes is sketched after this list):

    • It assumes unsigned char is always 8 bits wide; while this is usually the case, it can be (and historically has been) wider than this. To be safe, it should be using the CHAR_BIT macro defined in limits.h:
      #include <limits.h>
      ...
      for ( i = CHAR_BIT - 1; i >= 0; i-- )
      {
        ...
      }
    • ?: is not a control structure and should not be used to replace an if-else; that would be more properly written as
      printf( "%c", k ? '1' : '0' );
      
      That tells printf to output a '1' if k is non-zero, '0' otherwise.
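
    Putting both fixes together, a revised showbits() might look like this (a sketch that keeps the original interface):

    #include <limits.h>
    #include <stdio.h>

    void showbits(unsigned char n) {
        int i;
        unsigned char andmask;

        /* Walk from the most significant bit down to bit 0. */
        for (i = CHAR_BIT - 1; i >= 0; i--) {
            andmask = 1 << i;
            printf("%c", (n & andmask) ? '1' : '0');
        }
    }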