I've been developing a game in C# which currently uses floating-point numbers for some calculations and arithmetic. This game will feature networking functionality and a basic replay system that keeps track of inputs and basic player actions over time. I think these features require every important core mechanic to be deterministic. Because of the reputedly non-deterministic behavior of floating-point numbers, I have gone through some resources about fixed-point numbers in order to provide myself with an alternative to floating point.
I understand many of the core concepts of fixed-point thanks to a variety of well-documented online resources on the matter. However, I'm unsure whether I should use a 32-bit type (int) or a 64-bit type (long) for the raw value of the fixed-point class.
I would like to have the following basic features for my class:
My assumption is that it would be best to use a long, as it will give me more decimal accuracy, but I am worried about potential roadblocks that may come along the way. For example, would using a long cause issues when targeting 32-bit builds or running on 32-bit machines? Are ints ultimately more compatible than longs across potential hardware configurations? Because games are performance-heavy, is there a large performance loss when switching from float to long-based fixed-point numbers?
It seems like a silly question, but I guess I'm wondering whether I should choose types based on the lowest common denominator of CPU architecture that I expect my program to run on, or whether these concerns are typically handled by the compiler during compilation. Will Linux or Mac OS X handle long calculations differently than a Windows machine?
The type you use is irrelevant with regard to the platform, as types are types are types, in C#. In other words, a long is always 64 bits, no matter what platform you're on. That's a guarantee of C#.
However, the real problem is going to be precision and scale. When doing fixed-point math, you have to pick the precision you want to use. That's the easy problem. What's not easy is scale: if your numbers can exceed the maximum value of your chosen type (and don't forget that the fractional bits eat into that maximum), then you're broken right out of the gate.
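To make that precision/scale trade-off concrete, here is a minimal sketch of a Q32.32 fixed-point struct backed by a long. The name FixedPoint and the choice of 32 fractional bits are illustrative, not a recommendation — with 32 fractional bits you get roughly 9 decimal digits of fraction, but the integer part is limited to about ±2.1 billion, and multiplication needs care because the intermediate product can exceed 64 bits:

```csharp
// Minimal Q32.32 fixed-point sketch: a long raw value with 32 fractional bits.
// Illustrative only -- a real implementation also needs overflow checks,
// a rounding policy, division, comparisons, and so on.
public struct FixedPoint
{
    public const int FractionalBits = 32;
    public const long One = 1L << FractionalBits;

    public long Raw; // scaled integer: real value * 2^32

    public static FixedPoint FromInt(int value) =>
        new FixedPoint { Raw = (long)value << FractionalBits };

    public static FixedPoint operator +(FixedPoint a, FixedPoint b) =>
        new FixedPoint { Raw = a.Raw + b.Raw };

    public static FixedPoint operator *(FixedPoint a, FixedPoint b)
    {
        // The full product of two Q32.32 values needs up to 128 bits.
        // This naive version pre-shifts each operand by half the fractional
        // bits, which keeps the scale correct but discards low-order bits.
        long ha = a.Raw >> (FractionalBits / 2);
        long hb = b.Raw >> (FractionalBits / 2);
        return new FixedPoint { Raw = ha * hb };
    }

    // Conversion to double is for display/debugging only; keep the
    // deterministic math on Raw.
    public override string ToString() => ((double)Raw / One).ToString();
}
```

The same layout works with an int raw value and, say, 16 fractional bits, but then the integer range shrinks to about ±32,767 — that shrinking range, not platform compatibility, is the real argument for long.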
Have you looked into the decimal type? decimal is still a floating-point type, but it is floating-point decimal rather than IEEE 754 binary floating-point, and it is thus capable of representing any base-10 number you throw at it, so long as it fits within the scale and precision of the decimal type.
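To see the representability difference in action, compare the classic 0.1 + 0.2 case in double versus decimal:

```csharp
using System;

double d = 0.1 + 0.2;     // binary floating point cannot represent 0.1 exactly
decimal m = 0.1m + 0.2m;  // decimal stores the base-10 digits exactly

Console.WriteLine(d == 0.3);   // False
Console.WriteLine(m == 0.3m);  // True
```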
decimal comes with some performance considerations, however, and may not be the best choice for a game if performance is critical. For a simple 2D scroller you'd be fine, but it's probably not ideal for anything beyond that.