
Dealing with floating point errors in .NET


I'm working on a scientific computation & visualization project in C#/.NET, and we use doubles to represent all the physical quantities. Since floating-point arithmetic always involves some rounding error, we have simple methods for equality comparison, such as:

const double EPSILON = 1e-6;

static bool ApproxEquals(double d1, double d2) {
    return Math.Abs(d1 - d2) < EPSILON;
}

Pretty standard.
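(Aside: a scale-relative variant is the usual first suggestion here; as I explain in the edit below, it doesn't fix my underlying problem, but for reference — the helper name and default tolerance are made up:)

```csharp
using System;

static class Compare
{
    // The tolerance scales with the operands, so the comparison
    // behaves the same near 1e-9 as it does near 1e+18.
    // 'relTol' is an illustrative default, not a recommendation.
    public static bool ApproxEqualsRelative(
        double d1, double d2, double relTol = 1e-9)
    {
        double scale = Math.Max(Math.Abs(d1), Math.Abs(d2));
        return Math.Abs(d1 - d2) <= relTol * scale;
    }
}
```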

However, we constantly find ourselves adjusting the magnitude of EPSILON as we hit situations where the error between "equal" quantities is larger than we anticipated. For example, if you multiply 5 large doubles together and then divide 5 times, you lose a lot of accuracy. It's gotten to the point where we can't make EPSILON much larger without getting false positives, yet we still get false negatives.
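To make the tuning problem concrete, here's a contrived illustration (not from our codebase) of a fixed tolerance failing at both ends of the magnitude scale:

```csharp
using System;

class EpsilonDemo
{
    const double EPSILON = 1e-6;

    static bool ApproxEquals(double d1, double d2) =>
        Math.Abs(d1 - d2) < EPSILON;

    static void Main()
    {
        // Near 1e18 one ulp is 128, so a single rounding step
        // already exceeds EPSILON -> false negative:
        Console.WriteLine(ApproxEquals(1e18, 1e18 + 128)); // False

        // Near 1e-9 genuinely different values slip under the
        // tolerance -> false positive:
        Console.WriteLine(ApproxEquals(1e-9, 2e-9)); // True
    }
}
```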

In general our approach has been to look for more numerically-stable algorithms to work with, but the program is very computational and there's only so much we've been able to do.
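As one example of the kind of reformulation I mean: for long running sums, compensated (Kahan) summation recovers most of the low-order bits that plain addition discards, at the cost of a few extra operations per element. A sketch, not our actual code:

```csharp
static class StableSum
{
    // Kahan (compensated) summation: 'c' carries the low-order
    // bits that each plain addition would otherwise discard.
    public static double KahanSum(double[] values)
    {
        double sum = 0.0, c = 0.0;
        foreach (double v in values)
        {
            double y = v - c;
            double t = sum + y;
            c = (t - sum) - y;  // algebraically 0; numerically the lost part
            sum = t;
        }
        return sum;
    }
}
```

Summing a million copies of 0.1 this way lands within a few ulps of the true total, whereas a naive loop typically drifts measurably.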

Does anyone have any good strategies for dealing with this problem? I've looked into the Decimal type a bit, but am concerned about performance and I don't really know enough about it to know if it would solve the problem or only obscure it. I would be willing to accept a moderate performance hit (say, 2x) by going to Decimal if it would fix these problems, but performance is definitely a concern and since the code is mostly limited by floating-point arithmetic, I don't think it's an unreasonable concern. I've seen people quoting a 100x difference, which would definitely be unacceptable.

Also, switching to Decimal has other complications, such as general lack of support in the Math library, so we would have to write our own square root function, for example.
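(For what it's worth, those gaps are fillable: a decimal square root via Newton's method is only a few lines. A sketch — the `DecimalMath` class and `Sqrt` helper are names I made up, seeded from the double version:)

```csharp
using System;

static class DecimalMath
{
    // Newton's method: seed from Math.Sqrt on a double (decimal's
    // range fits comfortably in double), then refine in decimal.
    // Each iteration roughly doubles the number of correct digits,
    // so a handful of iterations reaches full decimal precision.
    public static decimal Sqrt(decimal x)
    {
        if (x < 0) throw new ArgumentOutOfRangeException(nameof(x));
        if (x == 0) return 0;
        decimal guess = (decimal)Math.Sqrt((double)x);
        for (int i = 0; i < 5; i++)
            guess = (guess + x / guess) / 2;
        return guess;
    }
}
```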

Any advice?

EDIT: by the way, the fact that I'm using a constant epsilon (instead of a relative comparison) is not the point of my question. I just put that there as an example; it's not actually a snippet of my code. Changing to a relative comparison wouldn't make a difference to the question, because the problem arises from losing precision when numbers get very big and then small again. For example, I might have a value 1000 and then do a series of calculations on it that should result in exactly the same number, but due to loss of precision I actually get 1001. If I then go to compare those numbers, it doesn't matter much whether I use a relative or an absolute comparison (as long as I've defined the comparison in a way that is meaningful to the problem and its scale).

Anyway, as Mitch Wheat suggested, reordering of the algorithms did help with the problems.


Solution

  • This is not a problem unique to .NET. The strategy for reducing loss of precision is to re-order calculations so that you multiply large quantities by small quantities and add/subtract similarly sized quantities (without changing the nature of the calculation, obviously).

    In your example, rather than multiply 5 large quantities together and then divide by 5 large quantities, re-order to divide each large quantity by one of the divisors, and then multiply these 5 partial results.

    Of interest, if you haven't already read it: What Every Computer Scientist Should Know About Floating-Point Arithmetic
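    A contrived sketch of that reordering (my own values, not the asker's, chosen so the naive order actually blows up):

    ```csharp
    static class RatioDemo
    {
        // Naive order: build the full numerator and denominator
        // first. The running products can grow past double's range
        // (~1.8e308), collapsing to Infinity/Infinity = NaN.
        public static double NaiveRatio(double[] a, double[] b)
        {
            double num = 1.0, den = 1.0;
            for (int i = 0; i < a.Length; i++) { num *= a[i]; den *= b[i]; }
            return num / den;
        }

        // Reordered: divide first, so each partial quotient stays
        // near 1 and nothing overflows.
        public static double PairwiseRatio(double[] a, double[] b)
        {
            double r = 1.0;
            for (int i = 0; i < a.Length; i++) r *= a[i] / b[i];
            return r;
        }
    }
    ```

    With `a = { 1e62, 2e62, 3e62, 4e62, 5e62 }` and `b` the same values reversed, the naive order returns NaN while the pairwise order returns a result within a few ulps of the exact answer, 1.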