I was curious, so I ran a couple of tests to see how .NET handles overflow (I couldn't find it documented anywhere). I almost wish it spat out overflow errors instead of these results, because honestly they're just bizarre:
Int32.MaxValue + Int32.MaxValue = -2
I understand that it wraps around, but why do that instead of throwing an OverflowException? Isn't that what "unchecked" is for... to ignore overflows? I'm kind of baffled as to what unchecked is for now, especially since I've seen it used for creating hash values.
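Here's the kind of test I ran (a minimal sketch, assuming a default project, where arithmetic is unchecked unless the compiler's /checked option is on):

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int a = int.MaxValue;

        // Default context is unchecked: the addition silently wraps around.
        Console.WriteLine(a + a);                               // -2

        // An explicit checked context makes the same addition throw.
        try
        {
            int sum = checked(a + a);
            Console.WriteLine(sum);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException");
        }

        // Constant expressions are always evaluated as checked, so this
        // line would be a compile-time error without unchecked(...):
        Console.WriteLine(unchecked(int.MaxValue + int.MaxValue)); // -2
    }
}
```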
Double.PositiveInfinity + Double.NegativeInfinity = Double.NaN
Another oddity. 1 + -1 = 0. 100 + -100 = 0. So why is Infinity + -Infinity = NaN?
Double.PositiveInfinity / Double.PositiveInfinity = Double.NaN
Again, why the oddity? I'd have figured this should be 1, or possibly 0 (because the limit of x / Infinity is 0). In fact... Double.MaxValue / Double.PositiveInfinity = 0
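As far as I can tell, both results follow from IEEE 754, which .NET doubles implement: Infinity - Infinity and Infinity / Infinity are indeterminate forms (the answer depends on how each infinity was reached, e.g. x/x vs. x²/x as x grows), so they yield NaN, while a finite value over infinity really is 0. A quick sketch:

```csharp
using System;

class NanDemo
{
    static void Main()
    {
        // Indeterminate forms produce NaN rather than picking a winner:
        Console.WriteLine(double.PositiveInfinity + double.NegativeInfinity); // NaN
        Console.WriteLine(double.PositiveInfinity / double.PositiveInfinity); // NaN

        // A finite numerator over infinity is well-defined and equals 0:
        Console.WriteLine(double.MaxValue / double.PositiveInfinity);         // 0

        // NaN compares unequal to everything, including itself:
        Console.WriteLine(double.NaN == double.NaN);                          // False
        Console.WriteLine(double.IsNaN(0.0 / 0.0));                           // True
    }
}
```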
...
Double.PositiveInfinity / 0 = Infinity
What!? No DivideByZeroException!?
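From what I can tell, DivideByZeroException only applies to integer (and decimal) division; floating-point division by zero never throws because IEEE 754 defines the result instead: a nonzero value over zero gives a signed infinity, and 0/0 gives NaN. A sketch:

```csharp
using System;

class DivideByZeroDemo
{
    static void Main()
    {
        double zero = 0.0;

        // Floating-point division by zero is defined, not an error:
        Console.WriteLine(1.0 / zero);                      // Infinity
        Console.WriteLine(-1.0 / zero);                     // -Infinity
        Console.WriteLine(double.PositiveInfinity / zero);  // Infinity
        Console.WriteLine(zero / zero);                     // NaN

        // Integer division by zero is the case that actually throws.
        // (A variable is used because 1 / 0 with constants won't compile.)
        int izero = 0;
        try
        {
            Console.WriteLine(1 / izero);
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("DivideByZeroException");
        }
    }
}
```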
Double.MaxValue + Double.MaxValue = Infinity
Yeah, this one doesn't throw an OverflowException, but it also does NOT wrap around. So I guess not all primitive types behave like int does. Oddly enough, I can do things such as Double.MaxValue + 1 = 1.79769313486232E+308. So adding beyond the MaxValue of a double is possible (it probably loses precision?), but past some threshold (it can probably be figured out, or already has been) it loses its ability to represent a valid number and gives back Infinity?
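If I understand the rounding rules right, the threshold isn't mysterious: doubles round to the nearest representable value, and near Double.MaxValue the gap between adjacent doubles (one ULP) is roughly 2e292. Adding anything under half that gap just rounds back to Double.MaxValue unchanged; adding more overflows to Infinity. A sketch (the 1e291 and 1e293 constants are just values I picked safely on either side of that threshold):

```csharp
using System;

class SaturationDemo
{
    static void Main()
    {
        double max = double.MaxValue;

        // 1 is far below max's ULP (~2e292), so the sum rounds back to max:
        Console.WriteLine(max + 1 == max);                         // True

        // Still below half a ULP: no change.
        Console.WriteLine(max + 1e291 == max);                     // True

        // Past the rounding threshold, the result overflows to Infinity:
        Console.WriteLine(double.IsPositiveInfinity(max + 1e293)); // True
        Console.WriteLine(double.IsPositiveInfinity(max + max));   // True
    }
}
```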
Well, the rest is kind of repetitive. I'm just wondering why these operate the way they do, especially the Double operators. It was very unexpected for me to be able to add beyond the MaxValue of a double.
checked will fix that; unchecked is the default behaviour.
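On the hash-value point from the question: a project compiled with /checked flips the default, and hash combining deliberately relies on wrap-around, so such code wraps itself in unchecked to keep working either way. A hypothetical example (the Point type and the 17/31 constants are just illustration, not any particular library's implementation):

```csharp
using System;

class Point
{
    public int X, Y;

    public override int GetHashCode()
    {
        // Hash combining *wants* wrap-around; unchecked guarantees it
        // even if the assembly is compiled with the /checked option.
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + X;
            hash = hash * 31 + Y;
            return hash;
        }
    }
}

class Program
{
    static void Main()
    {
        // Overflows during combining, but never throws:
        Console.WriteLine(new Point { X = int.MaxValue, Y = 42 }.GetHashCode());
    }
}
```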