I came across some weird behavior of math.cos() (Python 3.11.0):
>>> import math
>>> math.cos(math.pi) # expected to get -1
-1.0
>>> math.cos(math.pi/2) # expected to get 0
6.123233995736766e-17
I suspect that floating-point math might play a role in this, but I'm not sure how. And if it did, I'd assume Python would just check whether the parameter equals math.pi/2 to begin with.
I found this answer by Jon Skeet, who said:
Basically, you shouldn't expect binary floating point operations to be exactly right when your inputs can't be expressed as exact binary values - which pi/2 can't, given that it's irrational.
But if this is true, then math.cos(math.pi) shouldn't work either, because it also uses the math.pi approximation. My question is: why does this issue only show up when math.pi/2 is used?
Any error in math.pi vs. π (and there always is some) makes very little difference in one case, math.cos(math.pi), yet is quite significant in the other, math.cos(math.pi/2).
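A rough way to see the size of that error is to compare math.pi against a higher-precision value of π. This is only an illustration, using Python's decimal module and a hard-coded 50-digit constant:

import math
from decimal import Decimal, getcontext

getcontext().prec = 60

# pi truncated to 50 decimal places -- hard-coded here purely for comparison
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

err = PI - Decimal(math.pi)   # Decimal(math.pi) is the exact value of the float
print(err)                    # roughly 1.2246e-16: math.pi is slightly smaller than pi
print(err / 2)                # roughly 6.123e-17
print(math.cos(math.pi / 2))  # 6.123233995736766e-17

Since math.pi/2 sits about err/2 below the true π/2, cos(math.pi/2) is essentially sin(err/2) ≈ err/2, which is the "weird" value from the question.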
The curve is flat
When math.cos(x) is very near -1.0, the curve is very flat: the slope is "close" to zero. About 47 million floating-point x values near π have a cos(x) that is mathematically greater than -1.0, yet closer to -1.0 than to the next encodable value, -0.99999999999999988897..., so they all report -1.0.
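You can poke at this flatness with math.nextafter and math.ulp (both available since Python 3.9); the million-ULP step below is an arbitrary choice for illustration:

import math

x = math.nextafter(math.pi, math.inf)        # the very next representable float after math.pi
print(math.cos(x))                           # -1.0

x = math.pi + 1_000_000 * math.ulp(math.pi)  # roughly a million ULPs above math.pi
print(math.cos(x))                           # still -1.0: the true cosine differs from -1.0
                                             # by only ~1e-19, far below the ~1.1e-16 spacing
                                             # of floats near 1.0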
The curve's slope is 1
With x near π, the argument x/2 is near π/2 and math.cos(x/2) is near 0.0; there, the cosine curve has a |slope| "close" to one. Both the next smaller and the next larger encodable x give a different cos(x/2).
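Here every neighbouring input produces a visibly different answer. A small sketch, again using math.nextafter, stepping around math.pi/2 directly (which is where the halved x values land, since dividing by 2 is exact):

import math

y = math.pi / 2
below = math.nextafter(y, 0.0)       # the neighbouring float just below math.pi/2
above = math.nextafter(y, math.inf)  # the neighbouring float just above it

print(math.cos(below))   # roughly  2.8e-16
print(math.cos(y))       #          6.123233995736766e-17
print(math.cos(above))   # roughly -1.6e-16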
Conclusion
When the |result| of sin(x) or cos(x) is near 1.0, many nearby x values will report exactly 1.0 (or -1.0). This would be true even if some x value were incredibly close to π.

For x near π (like math.pi) and y = |cos(x)|, we need about twice the precision in y to see an imprecision in x: since cos(π + ε) ≈ -1 + ε²/2, an error of ε in x only moves y by about ε²/2.
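A quick sketch of that quadratic effect (eps here is an arbitrary, deliberately huge error in x, chosen only for illustration):

import math

eps = 1e-7                      # an error in x roughly a billion times larger than math.pi's
print(math.cos(math.pi + eps))  # about -0.999999999999995
print(eps ** 2 / 2)             # roughly 5e-15 -- the dent that error makes in y = |cos(x)|

With the real error in math.pi (about 1.2e-16), the corresponding dent in y would be around 7.5e-33, which a 64-bit float near 1.0 simply cannot represent.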