I wrote a simple divide function in C#:
private string divide(int a, int b)
{
    return string.Format("Result: {0}", a / b);
}
Calling MessageBox.Show(divide(3, 0)) results in, as you would expect, a DivideByZeroException.
So I decided to cast a to float (to get a non-whole-number return value), like so:
private string divide(int a, int b)
{
    return string.Format("Result: {0}", (float)a / b);
}
Oddly enough, this now shows me Result: Infinity. This seems like a bug to me, although I could be mistaken. Is it because the result is now a float, and is essentially treated as the return value of 3 / (1 x 10^-99999) or something similar? I'm quite flabbergasted by this result.
This is the expected behavior when you convert an int to a float: floating-point division follows IEEE 754 rules rather than throwing. The following is taken from the MSDN documentation:
Dividing a floating-point value by zero will result in either positive infinity, negative infinity, or Not-a-Number (NaN) according to the rules of IEEE 754 arithmetic. Floating-point operations never throw an exception. For more information, see Single and Double.
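A minimal sketch (the class and method names here are my own, not from your code) showing all three IEEE 754 outcomes the documentation describes, and that none of them throws:

```csharp
using System;

class FloatDivisionDemo
{
    static void Main()
    {
        // Positive value / zero => positive infinity
        Console.WriteLine(float.IsPositiveInfinity(3f / 0f));   // True

        // Negative value / zero => negative infinity
        Console.WriteLine(float.IsNegativeInfinity(-3f / 0f));  // True

        // Zero / zero => NaN (Not-a-Number)
        Console.WriteLine(float.IsNaN(0f / 0f));                // True

        // None of the lines above throws; only *integer* division
        // by zero raises DivideByZeroException.
    }
}
```

This is also why your cast changes the behavior: (float)a / b promotes b to float as well, so the whole division happens in floating point and yields Infinity instead of throwing. If you want a message instead of "Infinity", check float.IsInfinity (or test b == 0) before formatting the result.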