I'm reading the ECMAScript abstract operation ToString. In Step 5 (m is the number we want to convert to a string):
- Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible. Note that k is the number of digits in the decimal representation of s, that s is not divisible by 10, and that the least significant digit of s is not necessarily uniquely determined by these criteria.
I can't figure out in which case the least significant digit of s would not be uniquely determined. Any example?
The answer is - as always with floating-point math - rounding at the edge of the available precision.
Let's take `s = 7011750883285835`, `k = 16` and some `n` (let's say `n = 0`). Now determining `m`, we'll get the floating point number `0x3FE67006BD248487` (somewhere around 0.70117508832858355…). However, we could also have chosen `s = 7011750883285836`, and `s × 10^(n−k)` would round to exactly the same `m`.
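
You can check this in any JavaScript console. A quick sketch (the two literals below are exactly the two candidate values of `s` from above, scaled by `10^(n−k)`):

```js
// Both 16-digit candidates parse to the very same double:
const a = 0.7011750883285835;
const b = 0.7011750883285836;
console.log(a === b); // true

// Inspect the bit pattern mentioned above:
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, a); // big-endian by default
console.log(view.getBigUint64(0).toString(16)); // "3fe67006bd248487"

// No 15-digit decimal survives the round trip, so k = 16 is minimal:
console.log(0.701175088328583 === a); // false
console.log(0.701175088328584 === a); // false
```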
The point is that if we had converted the double to its decimal representation exactly, we would've gotten 0.701175088328583551167128007364… That is much longer than necessary (and implies higher precision than is actually available), so the `ToString` algorithm specifies producing a decimal representation of `m` with the fewest significant digits ("`k` is as small as possible") that still parses back to the number `m` we want. Sometimes we can get there by rounding either up or down, and both ways are allowed; in those cases the least significant digit of `s` is not uniquely determined.