I am running a for loop like so:
for var i: Float = 1.000; i > 0; i -= 0.005 {
println(i)
}
and I have found that after i has decreased past a certain value, instead of decreasing by exactly 0.005, it decreases by ever so slightly less than 0.005, so that by the 201st iteration i is not 0 but rather something infinitesimally close to 0, and so the for loop runs again. The output is as follows:
1.0
0.995
0.99
0.985
...
0.48
0.475001
0.470001
...
0.0100008 // should be 0.01
0.00500081 // should be 0.005
8.12113e-07 // should be 0
My question is, first of all, why is this happening, and second of all, what can I do so that i always decreases by exactly 0.005, so that the loop does not run a 201st time?
Thanks a lot,
bigelerow
The Swift Floating-Point Number documentation states:
Note
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
In this case, it looks like the error is on the order of 4.060564999999999e-09 in each subtraction, based on the amount left over after 200 subtractions. Indeed, changing Float to Double shrinks the error such that the loop runs until i = 0.00499999999999918, when it should be 0.005.
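You can see the root of the problem directly: 0.005 has no exact binary representation, so both types store an approximation, just at different precisions. A quick sketch (using println as in the question, and Foundation's String(format:) for extra digits):

```swift
import Foundation

// Neither type can store 0.005 exactly; each keeps the nearest
// representable value at its own precision.
let f: Float = 0.005
let d: Double = 0.005

// Printing with extra digits exposes the rounding in each type.
println(String(format: "%.12f", Double(f)))  // roughly 0.004999999888
println(String(format: "%.20f", d))          // off only around the 19th digit
```

Each subtraction of the Float value therefore removes slightly less than 0.005, and the per-step rounding compounds over 200 iterations.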
That is all well and good, but we still have the problem of constructing a loop that runs until i reaches exactly zero. If the amount you reduce i by remains constant throughout the loop, one only slightly unfortunate workaround is:
var x: Double = 1
let reduction = 0.005
for var i = Int(x / reduction); i >= 0; i -= 1 {
    x = Double(i) * reduction
    println(x)
}
In this case your error won't compound, since we are using an integer to count how many reductions are needed to reach the current x; the error in each printed value is therefore independent of the length of the loop.
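The same idea can be written with stride over the integer indices, assuming (as above) that the step divides the range evenly; each printed value is derived from an exact integer, so again nothing compounds:

```swift
let reduction = 0.005
let steps = Int(1 / reduction)  // 200 steps from 1.0 down to 0.0

// Count down over exact integers and scale each index once;
// the rounding in each value does not depend on loop length.
for i in stride(from: steps, through: 0, by: -1) {
    println(Double(i) * reduction)
}
```

This also survives the later removal of the C-style for loop from Swift, since stride is the idiomatic replacement for counted loops with a non-unit step.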