Tags: c, bit-manipulation, 64-bit

Can someone explain what maxBit is?


I am trying to understand what maxBit is in the following code and what it represents.

When I print min and max, I get numbers that make no sense to me.

Thank you.

#include <stdio.h>
#include <math.h>

int main() {
    union { double a; size_t b; } u;
    u.a = 12345;
    size_t max = u.b;
    u.a = 6;
    size_t min = u.b;
    int maxBit = floor(log(max - min) / log(2));
    printf("%d", maxBit);
    return 0;
}

Solution

  • This code appears to be using a horrible kludge. I am one of the more welcoming participants here regarding tolerating code that uses compiler extensions or other things beyond the C standard, but this code does unnecessary things for no apparent purpose. It relies on size_t being 64 bits. size_t may be 64 bits in the specific C implementation this was written for, but that is not portable, and implementations where size_t is 64 bits are generally modern, and modern implementations ought to support the uint64_t of <stdint.h>, which would be an appropriate type for this. So better code would have used uint64_t (a sketch of that appears after this answer).

    Unless there is some quite surprising motivation for this and the other issues in the code, it is low-quality, bad code. Do not use it, and regard any other code from the same source with skepticism.

    That said, the code likely assumes the IEEE-754 binary64 format is used for double, so max-min gives the difference between the bit representations of 12345 and 6, reinterpreted as integers. log(max-min) / log(2) computes the base-two logarithm of max-min, and the integer portion of that is the index of the highest bit that changed between the two representations.

    For 12345, the exponent field is 1036. For 6, the exponent field is 1025. The difference is 11 (binary 1011), whose highest set bit is bit 3 of the exponent field. The exponent field occupies bits 62 down to 52 of the binary64 format, so bit 3 of the exponent field is bit 55 (52+3) in the whole 64-bit representation. So maxBit will be 55; the second sketch after this answer checks that arithmetic.

    However, there is no apparent significance to this. There is no great value in knowing that bit 55 is the highest bit set in the difference between the representations of 12345 and 6. I am familiar with a variety of IEEE-754 bit-twiddling hacks, and I do not recognize this one. I expect nobody can tell you much more about it without context, such as where the code came from or how it is used.
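
    As a minimal sketch of the uint64_t point above (not the original author's code, and assuming double is the 64-bit IEEE-754 binary64 type), the representation of a double can be copied into a fixed-width integer with memcpy instead of relying on size_t happening to be 64 bits:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Copy the object representation of a double into a uint64_t.
       Assumes double is 64 bits wide (IEEE-754 binary64). */
    static uint64_t double_bits(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        return bits;
    }

    int main(void) {
        printf("bits of 12345: 0x%016" PRIx64 "\n", double_bits(12345));
        printf("bits of 6:     0x%016" PRIx64 "\n", double_bits(6));
        return 0;
    }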
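
    As a second sketch, verifying the exponent-field arithmetic above under the same binary64 assumption (exponent field in bits 62 down to 52, biased by 1023): extracting the exponent fields of 12345 and 6 should print 1036 and 1025, and the highest set bit of the difference of the two representations should be bit 55, matching maxBit from the question:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t double_bits(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        return bits;
    }

    int main(void) {
        uint64_t max = double_bits(12345);
        uint64_t min = double_bits(6);

        /* The exponent field is the 11 bits from bit 62 down to bit 52. */
        printf("exponent field of 12345: %" PRIu64 "\n", (max >> 52) & 0x7FF);  /* 1036 */
        printf("exponent field of 6:     %" PRIu64 "\n", (min >> 52) & 0x7FF);  /* 1025 */

        /* Index of the highest set bit of the difference, found without floating point. */
        uint64_t diff = max - min;
        int maxBit = -1;
        while (diff != 0) {
            diff >>= 1;
            maxBit++;
        }
        printf("maxBit = %d\n", maxBit);  /* 55 */
        return 0;
    }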