Consider the collection of floating-point numbers of the form 0.xx5 between 0.0 and 1.0:

[0.005, 0.015, 0.025, 0.035, ..., 0.985, 0.995]
I can make a list of all 100 such numbers easily in Python:
>>> values = [n/1000 for n in range(5, 1000, 10)]
Let's look at the first few and last few values to check we didn't make any mistakes:
>>> values[:8]
[0.005, 0.015, 0.025, 0.035, 0.045, 0.055, 0.065, 0.075]
>>> values[-8:]
[0.925, 0.935, 0.945, 0.955, 0.965, 0.975, 0.985, 0.995]
Now I want to round each of these numbers to two decimal places after the point. Some of the numbers will be rounded up; some will be rounded down. I'm interested in counting exactly how many round up. I can compute this easily in Python, too:
>>> sum(round(value, 2) > value for value in values)
50
So it turns out that exactly half of the 100 numbers were rounded up.
If you didn't know that Python was using binary floating-point under the hood, this result wouldn't be surprising. After all, Python's documentation states clearly that the round function uses round-ties-to-even (a.k.a. Banker's rounding) as its rounding mode, so you'd expect the values to round up and round down alternately.
But Python does use binary floating-point under the hood, and that means that with a handful of exceptions (namely 0.125, 0.375, 0.625 and 0.875), these values are not exact ties, but merely very good binary approximations to those ties. And not surprisingly, closer inspection of the rounding results shows that the values do not round up and down alternately. Instead, each value rounds up or down depending on which side of the decimal value the binary approximation happens to land. So there's no a priori reason to expect exactly half of the values to round up, and exactly half to round down. That makes it a little surprising that we got a result of exactly 50.
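We can check which side a given approximation lands on by comparing the stored float with the exact fraction it's approximating; comparisons between a float and a Fraction are performed using the exact values of both operands. A quick sketch, assuming the typical IEEE 754 setup described below:

>>> from fractions import Fraction
>>> float(Fraction(5, 1000)) > Fraction(5, 1000)   # 0.005 lands just above the true tie
True
>>> float(Fraction(15, 1000)) > Fraction(15, 1000)  # 0.015 lands just below
False

So round(0.005, 2) gives 0.01 (rounding up), while round(0.015, 2) also gives 0.01 (rounding down).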
But maybe we just got lucky? After all, if you toss a fair coin 100 times, getting exactly 50 heads isn't that unusual an outcome: it'll happen with around an 8% probability. But it turns out that the pattern persists with a higher number of decimal places. Here's the analogous example when rounding to 6 decimal places:
>>> values = [n/10**7 for n in range(5, 10**7, 10)]
>>> sum(round(value, 6) > value for value in values)
500000
And here it is again rounding apparent ties to 8 decimal places after the point:
>>> values = [n/10**9 for n in range(5, 10**9, 10)]
>>> sum(round(value, 8) > value for value in values)
50000000
So the question is: why do exactly half of the cases round up? Or put another way, why is it that out of all the binary approximations to these decimal ties, the number of approximations that are larger than the true value exactly matches the number of approximations that are smaller? (One can easily show that for the cases that are exact, we will have the same number of rounds up as down, so we can disregard those cases.)
An aside on assumptions: I'm assuming that Python's float type is based on the IEEE 754 binary64 format, and that the float division used to create the values and the round function will both be correctly rounded operations, using the round-ties-to-even rounding mode. While none of this is guaranteed by the language itself, the behaviour is overwhelmingly common, and we're assuming that such a typical machine is being used in this question.
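If you want to verify the binary64 assumption on your own machine, one way is to inspect sys.float_info; this is just a sanity check, not part of the argument:

>>> import sys
>>> sys.float_info.mant_dig  # 53 significand bits, as in IEEE 754 binary64
53
>>> sys.float_info.max_exp   # exponent range matching binary64
1024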
It turns out that one can prove something stronger, that has nothing particularly to do with decimal representations or decimal rounding. Here's that stronger statement:

Theorem. Choose a positive integer n <= 2^1021, and consider the sequence of length n consisting of the fractions 1/2n, 3/2n, 5/2n, ..., (2n-1)/2n. Convert each fraction to the nearest IEEE 754 binary64 floating-point value, using the IEEE 754 roundTiesToEven rounding direction. Then the number of fractions for which the converted value is larger than the original fraction will exactly equal the number of fractions for which the converted value is smaller than the original fraction.
The original observation involving the sequence [0.005, 0.015, ..., 0.995] of floats then follows from the case n = 100 of the above statement: in 96 of the 100 cases, the result of round(value, 2) depends on the sign of the error introduced when rounding to binary64 format, and by the above statement, 48 of those cases will have positive error, and 48 will have negative error, so 48 will round up and 48 will round down. The remaining 4 cases (0.125, 0.375, 0.625, 0.875) convert to binary64 format with no change in value, and then the Banker's Rounding rule for round kicks in to round 0.125 and 0.625 down, and 0.375 and 0.875 up.
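We can confirm that behaviour for the four exact cases directly in the REPL; assuming the usual correctly-rounded round, the ties go to the even last digit, giving two rounds up and two rounds down:

>>> [round(x, 2) for x in (0.125, 0.375, 0.625, 0.875)]
[0.12, 0.38, 0.62, 0.88]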
Notation. Here and below, I'm using pseudo-mathematical notation, not Python notation: ^ means exponentiation rather than bitwise exclusive or, and / means exact division, not floating-point division.
Suppose n = 11. Then we're considering the sequence 1/22, 3/22, ..., 21/22. The exact values, expressed in decimal, have a nice simple recurring form:
1/22 = 0.04545454545454545...
3/22 = 0.13636363636363636...
5/22 = 0.22727272727272727...
7/22 = 0.31818181818181818...
9/22 = 0.40909090909090909...
11/22 = 0.50000000000000000...
13/22 = 0.59090909090909090...
15/22 = 0.68181818181818181...
17/22 = 0.77272727272727272...
19/22 = 0.86363636363636363...
21/22 = 0.95454545454545454...
The nearest exactly representable IEEE 754 binary64 floating-point values are:
1/22 -> 0.04545454545454545580707161889222334139049053192138671875
3/22 -> 0.13636363636363635354342704886221326887607574462890625
5/22 -> 0.2272727272727272651575702866466599516570568084716796875
7/22 -> 0.318181818181818176771713524431106634438037872314453125
9/22 -> 0.409090909090909116141432377844466827809810638427734375
11/22 -> 0.5
13/22 -> 0.59090909090909093936971885341336019337177276611328125
15/22 -> 0.68181818181818176771713524431106634438037872314453125
17/22 -> 0.7727272727272727070868540977244265377521514892578125
19/22 -> 0.86363636363636364645657295113778673112392425537109375
21/22 -> 0.954545454545454585826291804551146924495697021484375
And we see by direct inspection that when converting to float, 1/22, 9/22, 13/22, 19/22 and 21/22 rounded upward, while 3/22, 5/22, 7/22, 15/22 and 17/22 rounded downward. (11/22 was already exactly representable, so no rounding occurred.) So 5 of the 11 values were rounded up, and 5 were rounded down. The claim is that this perfect balance occurs regardless of the value of n.
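If you'd like to regenerate the table of exact values above, one way (a sketch, relying on float(Fraction) being correctly rounded, as discussed below) is to pass each converted float to Decimal, which displays its exact value:

>>> from fractions import Fraction
>>> from decimal import Decimal
>>> for numerator in range(1, 22, 2):
...     f = Fraction(numerator, 22)
...     print(f, "->", Decimal(float(f)))

This prints exactly the eleven conversions listed above.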
For those who might be more convinced by numerical experiments than a formal proof, here's some code (in Python).
First, let's write a function to create the sequences we're interested in, using Python's fractions module:
from fractions import Fraction

def sequence(n):
    """ [1/2n, 3/2n, ..., (2n-1)/2n] """
    return [Fraction(2*i+1, 2*n) for i in range(n)]
Next, here's a function to compute the "rounding direction" of a given fraction f, which we'll define as 1 if the closest float to f is larger than f, -1 if it's smaller, and 0 if it's equal (i.e., if f turns out to be exactly representable in IEEE 754 binary64 format). Note that the conversion from Fraction to float is correctly rounded under roundTiesToEven on a typical IEEE 754-using machine, and that the order comparisons between a Fraction and a float are computed using the exact values of the numbers involved.
def rounding_direction(f):
    """ 1 if float(f) > f, -1 if float(f) < f, 0 otherwise """
    x = float(f)
    if x > f:
        return 1
    elif x < f:
        return -1
    else:
        return 0
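As a quick check against the n = 11 example above, 1/22 rounded up, 3/22 rounded down, and 11/22 was exactly representable:

>>> rounding_direction(Fraction(1, 22))
1
>>> rounding_direction(Fraction(3, 22))
-1
>>> rounding_direction(Fraction(11, 22))
0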
Now to count the various rounding directions for a given sequence, the simplest approach is to use collections.Counter:
from collections import Counter

def round_direction_counts(n):
    """ Count of rounding directions for sequence(n). """
    return Counter(rounding_direction(value)
                   for value in sequence(n))
Now we can put in any integer we like to observe that the count for 1 always matches the count for -1. Here's a handful of examples, starting with the n = 100 example that started this whole thing:
>>> round_direction_counts(100)
Counter({1: 48, -1: 48, 0: 4})
>>> round_direction_counts(237)
Counter({-1: 118, 1: 118, 0: 1})
>>> round_direction_counts(24)
Counter({-1: 8, 0: 8, 1: 8})
>>> round_direction_counts(11523)
Counter({1: 5761, -1: 5761, 0: 1})
The code above is unoptimised and fairly slow, but I used it to run tests up to n = 50000 and checked that the counts were balanced in each case.
As an extra, here's an easy way to visualise the roundings for small n: it produces a string containing + for cases that round up, - for cases that round down, and . for cases that are exactly representable. So our theorem says that each signature has the same number of + characters as - characters.
def signature(n):
    """ String visualising rounding directions for given n. """
    return "".join(".+-"[rounding_direction(value)]
                   for value in sequence(n))
And some examples, demonstrating that there's no immediately obvious pattern:
>>> signature(10)
'+-.-+++.--'
>>> signature(11)
'+---+.+--++'
>>> signature(23)
'---+++-+-+-.-++--++--++'
>>> signature(59)
'-+-+++--+--+-+++---++---+++--.-+-+--+-+--+-+-++-+-++-+-++-+'
>>> signature(50)
'+-++-++-++-+.+--+--+--+--+++---+++---.+++---+++---'
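The claimed balance can also be checked mechanically from the signatures themselves; here's a (slow but simple) check over the first hundred values of n:

>>> all(signature(n).count('+') == signature(n).count('-') for n in range(1, 101))
True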
The original proof I gave was unnecessarily complicated. Following a suggestion from Tim Peters, I realised that there's a much simpler one. You can find the old one in the edit history, if you're really interested.
The proof rests on three simple observations. Two of those are floating-point facts; the third is a number-theoretic observation.
Observation 1. For any (non-tiny, non-huge) positive fraction x, x rounds "the same way" as 2x.
If y is the closest binary64 float to x, then 2y is the closest binary64 float to 2x. So if x rounds up, so does 2x, and if x rounds down, so does 2x. If x is exactly representable, so is 2x.
Small print: "non-tiny, non-huge" should be interpreted to mean that we avoid the extremes of the IEEE 754 binary64 exponent range. Strictly, the above statement applies for all x in the interval [2^-1022, 2^1023). There's a corner case involving infinity to be careful of right at the top end of that range: if x rounds to 2^1023, then 2x rounds to inf, so the statement still holds in that corner case.
Observation 1 implies that (again provided that underflow and overflow are avoided) we can scale any fraction x by an arbitrary power of two without affecting the direction it rounds when converting to binary64.
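Using rounding_direction from earlier, we can spot-check this scaling invariance; for instance, 1/22 rounds up, and so should all of its power-of-two multiples:

>>> x = Fraction(1, 22)
>>> [rounding_direction(2**k * x) for k in range(5)]
[1, 1, 1, 1, 1]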
Observation 2. If x is a fraction in the closed interval [1, 2], then 3 - x rounds the opposite way to x.
This follows because if y is the closest float to x (which implies that y must also be in the interval [1.0, 2.0]), then thanks to the even spacing of floats within [1, 2], 3 - y is also exactly representable and is the closest float to 3 - x. This works even for ties under the roundTiesToEven definition of "closest", since the last bit of y is even if and only if the last bit of 3 - y is.
So if x rounds up (i.e., y is greater than x), then 3 - y is smaller than 3 - x and so 3 - x rounds down. Similarly, if x is exactly representable, so is 3 - x.
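Again, a quick spot-check with rounding_direction: 13/11 lies in [1, 2] and rounds up (it's 13/22 scaled by a power of two), so Observation 2 says 3 - 13/11 = 20/11 must round down:

>>> x = Fraction(13, 11)
>>> rounding_direction(x), rounding_direction(3 - x)
(1, -1)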
Observation 3. The sequence 1/2n, 3/2n, 5/2n, ..., (2n-1)/2n of fractions is equal to the sequence n/n, (n+1)/n, (n+2)/n, ..., (2n-1)/n, up to scaling by powers of two and reordering.
This is just a scaled version of a simpler statement: the sequence 1, 3, 5, ..., 2n-1 of integers is equal to the sequence n, n+1, ..., 2n-1, up to scaling by powers of two and reordering. That statement is perhaps easiest to see in the reverse direction: start out with the sequence n, n+1, n+2, ..., 2n-1, and then divide each integer by its largest power-of-two divisor. What you're left with must be, in each case, an odd integer smaller than 2n, and it's easy to see that no such odd integer can occur twice, so by counting we must get every odd integer in 1, 3, 5, ..., 2n-1, in some order.
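Here's that reverse direction in code for the small case n = 10, using a throwaway helper odd_part to strip out the largest power-of-two divisor:

>>> def odd_part(m):
...     while m % 2 == 0:
...         m //= 2
...     return m
...
>>> sorted(odd_part(m) for m in range(10, 20))
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

Each odd integer below 2n = 20 shows up exactly once, as claimed.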
With these three observations in place, we can complete the proof. Combining Observation 1 and Observation 3, we get that the cumulative rounding directions (i.e., the total counts of rounds-up, rounds-down, stays-the-same) of 1/2n, 3/2n, ..., (2n-1)/2n exactly match the cumulative rounding directions of n/n, (n+1)/n, ..., (2n-1)/n.
Now n/n is exactly one, so is exactly representable. In the case that n is even, 3/2 also occurs in this sequence, and is exactly representable. The rest of the values can be paired with each other in pairs that add up to 3: (n+1)/n pairs with (2n-1)/n, (n+2)/n pairs with (2n-2)/n, and so on. And now by Observation 2, within each pair either one value rounds up and one value rounds down, or both values are exactly representable.
So the sequence n/n, (n+1)/n, ..., (2n-1)/n has exactly as many rounds-down cases as rounds-up cases, and hence the original sequence 1/2n, 3/2n, ..., (2n-1)/2n has exactly as many rounds-down cases as rounds-up cases. That completes the proof.
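As a final numerical sanity check of the reduction used in the proof, we can count the rounding directions of the scaled sequence n/n, (n+1)/n, ..., (2n-1)/n directly and compare with round_direction_counts from earlier; by Observations 1 and 3 the two counts should agree exactly:

>>> n = 100
>>> scaled = [Fraction(m, n) for m in range(n, 2 * n)]
>>> Counter(rounding_direction(x) for x in scaled) == round_direction_counts(n)
True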
Note: the restriction on the size of n in the original statement is there to ensure that none of our sequence elements lie in the subnormal range, so that Observation 1 can be used. The smallest positive binary64 normal value is 2^-1022, so our proof works for all n <= 2^1021.