Tags: .net, math, mathematical-optimization

Single Precision Math Operations in .NET?


The .NET Framework's Math functions mostly operate on double-precision floats; there are no single-precision (float) overloads. When working with single-precision data in a high-performance scenario, this forces unnecessary casting and computes results to more precision than is required, so performance suffers to some degree.

Is there any way of avoiding some of this additional CPU overhead? For example, is there an open-source math library with float overloads that calls the underlying FPU instructions directly? (My understanding is that this would require support in the CLR.) And, actually, I'm not sure whether modern CPUs even have single-precision instructions.

This question has been partly inspired by this question about optimizing a sigmoid function:

Math optimization in C#


Solution

  • To my knowledge, the .NET Framework does not include an API with direct access to math intrinsics. The Mono libraries do include working support for intrinsics, but I'm not sure of their current state.

    [Edit: This paragraph is commentary on why you don't see overloads for float parameters.] One difficulty is that the CLI evaluation stack (per ECMA-335) uses a single floating-point type, F, and so does not distinguish between float and double. A conforming implementation could treat every floating-point operation as double-precision, though I imagine the CLR (Microsoft's implementation of the CLI) performs single-precision arithmetic where it can.

    I think it's somewhat unfortunate that the issue of intrinsics (in particular SIMD extensions) hasn't been addressed [in a released product] yet. My outsider's-guess is support for intrinsics would require significant alterations to the VM that pose unacceptable risks at this point in the .NET Framework release cycle. The garbage collector (and I think the exception handling mechanisms) is tightly coupled with the register allocator, and supporting intrinsics adds a radical new variable to that area.