
Precision problem when dividing by 100


I'm working on something and I've got a problem which I do not understand.

double d = 95.24 / (double)100;
Console.Write(d); //Break point here

The console output is 0.9524 (as expected), but if I inspect 'd' in the debugger after stopping the program, it shows 0.95239999999999991.

I have tried every possible cast and the result is the same. The problem is that I use 'd' elsewhere, and this precision issue makes my program fail.

So why does it do that? How can I fix it?


Solution

  • http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

    The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation: a binary significand, understood to have one digit in front of the binary point, multiplied by two raised to an integer exponent. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base 10 to base 2 and back can introduce error. Values like 95.24 and 0.9524 have no finite base-2 representation, so the nearest representable double is stored instead.
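
    As a rough illustration (a sketch based on the values in the question; the exact strings printed can vary between .NET Framework and newer runtimes), the "G17" format specifier asks for a round-trippable representation, which exposes the double actually stored, while the default formatting rounds it away:

    double d = 95.24 / 100;
    Console.WriteLine(d);                 // 0.9524 on .NET Framework; newer runtimes may already print the longer form
    Console.WriteLine(d.ToString("G17")); // 0.95239999999999991 -- the value actually stored in d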

    To mitigate this, whenever high precision at low magnitudes is required, use decimal instead of double; decimal is a 128-bit floating-point type that works in base 10 and is designed for very high precision, at the cost of reduced range (it can only represent numbers up to about ±7.9E28, versus double's ±1.8E308; still plenty for most non-astronomical, non-physics programs) and double the memory footprint of a double.
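
    For the computation in the question, a minimal sketch of the decimal fix (note the m suffix, which makes the literals decimal rather than double):

    decimal d = 95.24m / 100m;       // decimal arithmetic works in base 10, so this division is exact
    Console.WriteLine(d);            // 0.9524
    Console.WriteLine(d == 0.9524m); // True -- no base-2 rounding, so equality comparisons behave as expected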