#include <stdio.h>
#include <string.h>
#include <math.h>

double to_decimalIEEE754(char *str)
{
    int n = strlen(str);
    double exp = 0;
    double significand = 0;
    /* str[0] is the sign bit */
    double sign = (str[0] - '0') ? -1 : 1;
    for (int i = 1, j = 11, k = 12, counter1 = 0, counter2 = 1; i < n; i++)
    {
        if (i >= 1 && i <= 11)
        {
            /* bits 1..11: biased exponent, read from str[11] (the LSB) backwards */
            exp += (str[j] - '0') * pow(2, counter1);
            counter1++;
            j--;
        }
        else
        {
            /* bits 12..63: fraction, weighted 2^-1, 2^-2, ... */
            significand += (str[k] - '0') * pow(2, -counter2);
            counter2++, k++;
        }
    }
    significand += 1; /* implicit leading 1 of a normal number */
    return significand * sign * pow(2, exp - 1023);
}
...
// In the real code this is in main, obviously
printf("Insert the number you want to convert: ");
scanf("%64s", str); /* bounded read; assumes str has room for 64 bits + '\0' */
double res = to_decimalIEEE754(str);
printf("%lf\n", res);
I use the 64-bit IEEE 754 conversion. The result I get when I enter the binary string is correct, but it only has float precision when I print it, and I don't know why.
printf("%lf\n", res);

prints with a fixed precision of 6 digits after the decimal point. This occurs whether the argument passed is a float, a double, or any other finite value. The l is optional.
A typical double needs up to 17 significant decimal digits *1 to distinguish it from another double.
Use (DBL_DECIMAL_DIG comes from <float.h>):

printf("%.17g\n", res);                     // Assumes a common double
printf("%.*g\n", DBL_DECIMAL_DIG, res);     // Portable; uses exponential notation for large and tiny values
printf("%.*e\n", DBL_DECIMAL_DIG - 1, res); // Portable; always uses exponential notation
printf("%a\n", res);                        // Portable, yet in hex
See "Printf width specifier to maintain precision of floating-point value".
*1 Significant digits are the leading decimal digits starting from the first non-zero one, regardless of the location of the decimal point. 12345678901.234567 and 0.00012345678901234567 both have 17 significant decimal digits.