We're working on an RTS game engine using C# and .NET Core. Unlike most other real-time multiplayer games, RTS games tend to work by synchronizing player inputs between clients and running the game simulation in lockstep on all of them at the same time. This requires game logic to be deterministic so that games don't get out of sync.
One potential source of non-determinism is floating point operations. From what I've gathered, the primary issue is with the old x87 FPU instructions: they use internal 80-bit registers, while IEEE 754 floating point values are 32-bit or 64-bit, so values are truncated when moved from registers to memory. Small changes to the code and/or the compiler can cause truncation to happen at different times, producing slightly different results. Non-determinism can also be caused by accidentally using different FP rounding modes, though if I understood correctly this is mostly a solved issue.
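To make the truncation point concrete: ECMA-335 allows the runtime to keep intermediates at higher precision, and the classic mitigation is an explicit cast, which forces a narrowing to the declared size at that exact point. A minimal illustration (the values are arbitrary, and on a modern SSE-based JIT both forms behave the same):

```csharp
static class TruncationDemo
{
    static void Main()
    {
        float a = 0.1f, b = 0.2f, c = 0.3f;

        // On an x87-based JIT this whole expression may be evaluated in
        // 80-bit registers; where the narrowing to 32 bits happens depends
        // on register allocation and can change between builds.
        float unconstrained = a * b + c;

        // Explicit casts pin each intermediate to 32 bits (ECMA-335
        // guarantees that the cast narrows), so the sequence of roundings
        // is the same everywhere.
        float pinned = (float)((float)(a * b) + c);

        System.Console.WriteLine($"{unconstrained:R} {pinned:R}");
    }
}
```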
I've also gotten the impression that SSE(2) instructions do not suffer from the truncation issue, as they perform all floating point arithmetic at 32- or 64-bit precision without a higher-precision register.
Finally, as far as I know, the CLR uses x87 FPU instructions on x86 (or at least that was the case before RyuJIT) and SSE instructions on x86-64. I'm not sure whether that applies to all operations or only most of them.
Support for accurate single precision math has recently been added to .NET Core, if that matters.
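(If that refers to the System.MathF APIs introduced in .NET Core 2.0, the point is that single-precision math no longer has to round-trip through double; a quick illustration:)

```csharp
using System;

static class SinglePrecisionDemo
{
    static void Main()
    {
        float x = 0.5f;
        float viaDouble = (float)Math.Sqrt(x); // computed in double, then rounded to float
        float direct    = MathF.Sqrt(x);       // computed in single precision throughout
        Console.WriteLine($"{viaDouble:R} {direct:R}");
    }
}
```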
But when researching whether floating point can be used deterministically in .NET, a lot of answers say no, although they mostly concern older versions of the runtime.
So, if CoreCLR uses SSE FP instructions on x86-64, does that mean that it doesn't suffer from the truncation issue or any other FP-related non-determinism? We are shipping .NET Core with the engine, so every client would use the same runtime, and we would require players to use exactly the same version of the game client. Limiting the engine to x86-64 only (on PC) is also an acceptable limitation.
If the runtime still uses x87 instructions with unreliable results, would it make sense to use a software float implementation (like the one linked in an answer above) for computations on scalar values, and accelerate vector operations with SSE using the new hardware intrinsics? I've prototyped this and it seems to work, but is it unnecessary?
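For context, a minimal sketch of what the SSE half of such a prototype can look like with the System.Runtime.Intrinsics.X86 APIs from .NET Core 3.0 (the type and method names below are my own, not from an actual engine):

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class DeterministicVec
{
    // Adds two 4-float vectors with a single SSE instruction; the
    // fallback would route through the software float implementation.
    public static Vector128<float> Add(Vector128<float> a, Vector128<float> b)
    {
        if (Sse.IsSupported)
            return Sse.Add(a, b);
        throw new PlatformNotSupportedException("soft-float fallback goes here");
    }

    static void Main()
    {
        var a = Vector128.Create(1f, 2f, 3f, 4f);
        var b = Vector128.Create(10f, 20f, 30f, 40f);
        Console.WriteLine(Add(a, b)); // <11, 22, 33, 44>
    }
}
```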
If we can just use normal floating point operations, is there anything we should avoid, like trigonometric functions?
Finally, if everything is OK so far, how would this work when different clients use different operating systems or even different CPU architectures? Do modern ARM CPUs suffer from the 80-bit truncation issue, or would the same code run identically to x86 (if we exclude trickier stuff like trigonometry), assuming the implementation has no bugs?
So, if CoreCLR uses SSE FP instructions on x86-64, does that mean that it doesn't suffer from the truncation issue or any other FP-related non-determinism?
If you stay on x86-64 and you use the exact same version of CoreCLR everywhere, it should be deterministic.
If the runtime still uses x87 instructions with unreliable results, would it make sense to use a software float implementation [...] I've prototyped this and it seems to work, but is it unnecessary?
It could be a way to work around the JIT issue, but you would likely have to develop a Roslyn analyzer to make sure that you never use raw floating point operations without going through these wrappers, or write an IL rewriter that performs the substitution for you (though that would make your .NET assemblies architecture-dependent, which may be acceptable depending on your requirements).
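For illustration, a minimal sketch of such an analyzer (the diagnostic ID and message are invented, and a real one would also have to catch unary operators, conversions, method arguments, and so on):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.CodeAnalysis.Operations;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class NoRawFloatAnalyzer : DiagnosticAnalyzer
{
    // Invented diagnostic; only binary operators are covered here.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "DET001",
        "Raw floating point arithmetic",
        "Use the deterministic soft-float wrapper instead of built-in float/double arithmetic",
        "Determinism",
        DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterOperationAction(AnalyzeBinary, OperationKind.Binary);
    }

    private static void AnalyzeBinary(OperationAnalysisContext context)
    {
        var op = (IBinaryOperation)context.Operation;
        var type = op.Type?.SpecialType;
        if (type == SpecialType.System_Single || type == SpecialType.System_Double)
            context.ReportDiagnostic(Diagnostic.Create(Rule, op.Syntax.GetLocation()));
    }
}
```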
If we can just use normal floating point operations, is there anything we should avoid, like trigonometric functions?
As far as I know, CoreCLR redirects math functions to the C runtime library it was compiled against, so as long as you stay on the same runtime version and the same platform, it should be fine.
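One cheap way to verify this across your target machines is to compare exact bit patterns instead of printed values; a sketch (the input set is arbitrary):

```csharp
using System;

static class LibmProbe
{
    // Prints the exact bit pattern of Math.Sin for a few inputs; run
    // this on each OS/runtime you target and diff the output. Any
    // difference means the underlying libm implementations disagree.
    static void Main()
    {
        foreach (double x in new[] { 0.1, 1.0, 2.5, Math.PI / 3 })
        {
            long bits = BitConverter.DoubleToInt64Bits(Math.Sin(x));
            Console.WriteLine($"sin({x:R}) = 0x{bits:X16}");
        }
    }
}
```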
Finally, if everything is OK so far, how would this work when different clients use different operating systems or even different CPU architectures? Do modern ARM CPUs suffer from the 80-bit truncation issue, or would the same code run identically to x86 (if we exclude trickier stuff like trigonometry), assuming the implementation has no bugs?
You can also have issues that are not related to extra precision. For example, on ARMv7, subnormal floats are flushed to zero, while ARMv8 in AArch64 mode keeps them.
So, assuming you stay on ARMv8, I don't know how the CoreCLR JIT for ARMv8 behaves in that regard; you should probably ask on GitHub directly. There is also the behavior of the libc math functions, which would likely break deterministic results across platforms.
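To make the subnormal difference concrete, here is a small illustration; float.Epsilon is the smallest positive subnormal float, so on hardware that flushes subnormals to zero it would effectively behave as zero:

```csharp
using System;

static class SubnormalDemo
{
    static void Main()
    {
        float subnormal = float.Epsilon;  // smallest positive subnormal, ~1.4e-45

        // With IEEE subnormal support this prints True; on flush-to-zero
        // hardware (e.g. ARMv7 NEON) the value would be treated as zero,
        // and a lockstep simulation would diverge from an x86-64 client.
        Console.WriteLine(subnormal > 0f);
        Console.WriteLine(subnormal * 0.5f == 0f); // halving the minimum rounds to zero
    }
}
```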
We are working on exactly this problem at Unity with our "burst" compiler, which translates .NET IL to native code. We use LLVM codegen across all machines and disable a few optimizations that could break determinism (so, overall, we can try to guarantee the compiler's behavior across platforms), and we use the SLEEF library to provide deterministic computation of mathematical functions (see for example https://github.com/shibatch/sleef/issues/187). So it is possible to do it.
In your position, I would first try to verify whether CoreCLR is really deterministic for plain floating point operations between x64 and ARMv8. If that looks okay, you could call the SLEEF functions instead of System.Math and it could work out of the box, or propose that CoreCLR switch from libc to SLEEF.
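For reference, calling SLEEF from C# is a plain P/Invoke away; a sketch, assuming you ship a native SLEEF build named libsleef.so / sleef.dll next to the game (Sleef_sin_u10 is SLEEF's scalar double-precision sine with a 1.0 ULP error bound):

```csharp
using System;
using System.Runtime.InteropServices;

static class DeterministicMath
{
    // Scalar double-precision sine from the native SLEEF library.
    [DllImport("sleef", CallingConvention = CallingConvention.Cdecl)]
    public static extern double Sleef_sin_u10(double x);

    static void Main()
    {
        // Same bits on every platform that ships the same SLEEF build.
        long bits = BitConverter.DoubleToInt64Bits(Sleef_sin_u10(1.0));
        Console.WriteLine(bits.ToString("X16"));
    }
}
```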