I was given the below problem and I want to make sure I fully understand before submitting it.
Explain briefly the problem that might arise if the following algorithm is implemented. (Hint: remember the problem of rounding error with floating point numbers)
assign n the value 0.1
while (n is not equal to 1.0)
print the value of n
assign n the value n + 0.00001
end while
I believe the answer is that it does not say what to do if the value of n is equal to 1.0. Am I looking at this right, or am I missing something obvious?
The problem is that neither 0.1 nor 0.00001 is exactly representable in binary floating point, so each addition introduces a tiny rounding error. As the errors accumulate, n lands on values like 0.999999999998 and then steps slightly past 1.0, without ever being exactly equal to 1.0.
Therefore the test `n is not equal to 1.0` never succeeds, and the loop never terminates.
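A small sketch (in Python, assuming IEEE 754 doubles) illustrates this. In exact arithmetic, 0.1 + 90000 × 0.00001 would be exactly 1.0, so the equality test would eventually fire; in floating point it does not. A bounded for-loop is used instead of the original while-loop so the demo actually finishes:

```python
# In exact arithmetic, 90,000 additions of 0.00001 starting from 0.1
# would land exactly on 1.0. With floating-point rounding they do not.
n = 0.1
for _ in range(90_000):  # capped iterations instead of `while n != 1.0`
    n += 0.00001

print(n)         # very close to 1.0, but not exactly 1.0
print(n == 1.0)  # False: the loop's equality test would never succeed
```

This is why the standard advice is to compare floats with a tolerance (e.g. `abs(n - 1.0) < 1e-9`) or, better, to loop on an integer counter and derive n from it.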