Python computes the imaginary unit i = sqrt(-1) inaccurately:
>>> (-1) ** 0.5
(6.123233995736766e-17+1j)
Should be exactly 1j
(Python calls it j instead of i). Both -1 and 0.5 are represented exactly, and the result 1j can be represented exactly as well, so there is no hard reason (i.e., no floating-point limitation) why Python couldn't get it right. It could. And since i = sqrt(-1) is the definition, it is rather disappointing that Python gets it wrong. So why does it? How does it compute the inaccurate result?
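For comparison (this is an illustration, not part of the question's claim): the dedicated square-root routine in the cmath module does return exactly 1j for this input, while the ** operator goes through the general power machinery.

```python
import cmath

# The ** operator uses the general complex power routine:
print((-1) ** 0.5)     # (6.123233995736766e-17+1j)

# cmath.sqrt computes the square root directly and gets it exact:
print(cmath.sqrt(-1))  # 1j
```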
When complex arithmetic is required, your Python implementation likely calculates x^y as e^(y ln x), as might be done with the complex C functions cexp and clog. Those are in turn likely calculated with real functions including ln, sqrt, atan2, sin, cos, and pow, but the details need not concern us.
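We can check this explanation against Python itself. The sketch below (an assumption about the implementation, not a quote from its source code) computes x^y as e^(y ln x) with cmath and reproduces the same inaccurate result bit for bit:

```python
import cmath

x, y = -1, 0.5

# Compute x**y as exp(y * log(x)), the route described above.
via_exp_log = cmath.exp(y * cmath.log(x))

print(via_exp_log)                  # (6.123233995736766e-17+1j)
print(via_exp_log == (-1) ** 0.5)   # True: identical to the ** result
```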
ln(−1) is πi. However, π is not representable in a floating-point format. Your Python implementation likely uses the IEEE-754 “double precision” format, also called binary64. In that format, the closest representable value to π is 3.141592653589793115997963468544185161590576171875. So ln(−1) is likely calculated as 3.141592653589793115997963468544185161590576171875 i.
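You can see both facts from the REPL: converting a float to Decimal displays the float's exact value, which for math.pi is exactly the long decimal quoted above, and cmath.log(-1) is that value times 1j.

```python
import cmath
import math
from decimal import Decimal

# Decimal(float) shows the exact value of the binary64 number:
print(Decimal(math.pi))
# 3.141592653589793115997963468544185161590576171875

# ln(-1) is computed as (nearest double to pi) * 1j:
print(cmath.log(-1))   # 3.141592653589793j
```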
Then y ln x = 0.5 • 3.141592653589793115997963468544185161590576171875 i, which is exactly 1.5707963267948965579989817342720925807952880859375 i (multiplying by 0.5 only decrements the exponent, so this step introduces no rounding error).
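That the halving is exact can again be confirmed by inspecting the float's exact decimal value:

```python
import math
from decimal import Decimal

# 0.5 * pi is exact in binary64: only the exponent changes.
print(Decimal(0.5 * math.pi))
# 1.5707963267948965579989817342720925807952880859375
```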
e^(1.5707963267948965579989817342720925807952880859375 i) is also not exactly representable. Its true value is approximately 6.123233995736765886130329661375001464640•10^−17 + .9999999999999999999999999999999981253003 i.
The nearest representable value to 6.123233995736765886130329661375001464640•10^−17 is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17, and the nearest representable value to .9999999999999999999999999999999981253003 is 1, so the calculated result is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17 + 1 i.
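The final step comes down to e^(it) = cos(t) + i sin(t), evaluated at t = the double nearest π/2: cos rounds to the tiny nonzero value above, sin rounds to exactly 1.0, and assembling them gives the result the question complains about.

```python
import math

t = 0.5 * math.pi  # the double nearest pi/2, computed exactly

print(math.cos(t))  # 6.123233995736766e-17  (not 0: t isn't exactly pi/2)
print(math.sin(t))  # 1.0                    (rounds to exactly 1)

# Assembling cos + i*sin reproduces the value of (-1) ** 0.5:
print(complex(math.cos(t), math.sin(t)) == (-1) ** 0.5)  # True
```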