I am getting precision loss when converting a big (17+ significant digit) double to an integer.
#include <stdio.h>

int main() {
    int n = 20;
    double acum = 1;
    while (n--) acum *= 9;                        /* acum = 9^20, computed in a double */
    printf("%.0f\n", acum);                       /* printed directly as a double */
    printf("%llu\n", (unsigned long long)acum);   /* converted to an integer first */
    return 0;
}
The output of this code is:
12157665459056929000
12157665459056928768
I can't use unsigned long long for the calculation itself, because this is just simplified code; in the real code I need floating-point precision, since divisions are involved.
If I increase the number of decimal places, the first output just gains zeros, e.g. 12157665459056929000.0000000000. I've tried round(acum) and trunc(acum), and in both cases the result was the same as the second output. Shouldn't they be equal to the first?
I know a float has only about 6-7 significant decimal digits of precision and a double about 15-17. But what's wrong with the digits?
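For reference, here is a small side check I put together (assuming an IEEE 754 double and a C99 compiler for the %a format) that shows how much precision the double actually carries and what value it really stores:

#include <stdio.h>
#include <float.h>

int main(void) {
    double acum = 1;
    for (int n = 0; n < 20; n++) acum *= 9;   /* same 9^20 as above */

    /* DBL_MANT_DIG: bits in the significand (53 for an IEEE 754 double);
       DBL_DIG: decimal digits guaranteed to survive a round trip (15). */
    printf("significand bits: %d, guaranteed decimal digits: %d\n",
           DBL_MANT_DIG, DBL_DIG);

    /* %a prints the exact stored value in hexadecimal floating point,
       so no decimal rounding hides anything. */
    printf("stored value: %a\n", acum);
    printf("%.0f\n", acum);
    return 0;
}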
Actually, when I change acum's type to unsigned long long, like this:
unsigned long long acum = 1;
the result is:
12157665459056928801
When I use Python to calculate the exact answer:
>>> 9**20
12157665459056928801L
You see? 12157665459056929000 is not an exact answer at all; it is only an approximation of the exact value.
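To make the same comparison in C itself, here is a quick sketch (assuming unsigned long long is 64 bits; 9^20 still fits, since it is below 2^64) that computes the power both ways and prints the difference:

#include <stdio.h>

int main(void) {
    unsigned long long exact = 1;   /* exact integer arithmetic */
    double approx = 1;              /* rounded to the nearest double as it grows */
    for (int n = 0; n < 20; n++) {
        exact  *= 9;                /* 9^20 < 2^64, so this never overflows */
        approx *= 9;
    }
    unsigned long long back = (unsigned long long)approx;
    printf("exact : %llu\n", exact);
    printf("double: %llu\n", back);
    if (exact >= back)
        printf("diff  : %llu\n", exact - back);
    else
        printf("diff  : -%llu\n", back - exact);
    return 0;
}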
Then I change the code like this:
printf("%llu\n", (unsigned long long)1.2157665459056929e+019);
printf("%llu\n", (unsigned long long)1.2157665459056928e+019);
printf("%llu\n", (unsigned long long)1.2157665459056927e+019);
printf("%llu\n", (unsigned long long)1.2157665459056926e+019);
And the result is:
12157665459056928768
12157665459056928768
12157665459056926720
12157665459056926720
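If I understand the representation right, values of this magnitude sit between 2^63 and 2^64, so consecutive representable doubles are 2^(63-52) = 2048 apart, and that is why several different 17-digit literals collapse onto the same stored value. A quick check with nextafter (again assuming IEEE 754 doubles):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Near 1.2e19 a double's binary exponent is 63, so the gap between
       consecutive representable values is 2^(63-52) = 2048. */
    double x = 1.2157665459056929e+19;
    printf("x    = %.0f\n", x);
    printf("next = %.0f\n", nextafter(x,  INFINITY));
    printf("prev = %.0f\n", nextafter(x, -INFINITY));
    return 0;
}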
In fact, 19 significant digits exceed what a double can hold: an IEEE 754 double has a 53-bit significand, good for only about 15-17 significant decimal digits, so the result of converting such a big number to or from double loses the low-order digits and should not be relied on.
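For completeness, here is a tiny sketch of where exact integer representation in a double ends, namely at 2^53 (assuming IEEE 754): from there on, distinct integers start mapping to the same double.

#include <stdio.h>

int main(void) {
    unsigned long long limit = 1ULL << 53;   /* 2^53 = 9007199254740992 */
    /* 2^53 is representable exactly, but 2^53 + 1 rounds back to 2^53:
       from here on a double can no longer hold every integer. */
    printf("%llu -> %.0f\n", limit,     (double)limit);
    printf("%llu -> %.0f\n", limit + 1, (double)(limit + 1));
    return 0;
}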