In VxSim, when I run strtod("1E1000000", 0) (declared in stdlib.h), it returns almost instantly (and returns 1), while strtod("1E1000000000000000000000000000", 0) takes about 20 seconds before returning 1. Sometimes both calls are quick and return non-zero values, and then pressing Ctrl-C and restarting the shell makes it slow again.
Why does this happen? On other operating systems both calls are near-instantaneous.
I was playing with this more, and also in VxSim, when you run:
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
strtod("10", 0)
The last one gives you:
value = 1615516944 = 0x604ad510
I tested this on hardware and it didn't occur, so it might be an error in VxSim.
Also, compiling this code doesn't produce the error either. It only happens when you type the calls manually into the VxWorks command line and run them there.
Both numbers are far above the largest finite IEEE 754 double-precision value, which is about 1.79769e+308. There is little practical reason to convert “1E1000000000000000000000000000” to double-precision, although in a security-sensitive context this makes for a nice denial-of-service attack: an attacker supplies a string that will be converted to floating-point later, and the conversion takes far more time than the programmer thought possible.
Since there is little reason to convert “1E1000000000000000000000000000” to double-precision, one of the implementations you tried does not implement the optimization of recognizing this case, and actually does a lot of computation(*) before realizing that the number is far larger than anything representable. Other, smarter implementations detect early that the decimal exponent is so large that the end result can only be +inf, and return early.
Another interesting input to look at is “0.000<10000 zeroes>1E10000”. Its exact value is 10^-10004 × 10^10000 = 10^-4, yet an incorrectly “optimized” implementation that looks only at the written exponent may wrongly return +inf in this case.
Yet another interesting corner case is “0E1000000000000000000”, which shouldn't return anything other than 0.0.
(*) converting decimal to binary floating-point is more subtle than most people realize. See this series of blog posts.