I created a simple function to compute e to the power of pi, as explained here:
import math

def e_to_power_pi(number):
    return (1 + (1 / number)) ** (number * math.pi)
From the look of it, it is clearly a simple piece of code. But look at the difference in output between these two calls:
Example one:
e_to_power_pi(1000000000000000)
output:
32.71613881872869
Example two:
e_to_power_pi(10000000000000000)
output:
1.0
Upon breaking the code down, I learnt that the 1.0 is coming from the 1 + (1/number) portion of the code above.
When I broke it down further, I learnt that 1/10000000000000000 returns the correct answer, 1e-16 (i.e. 0.0000000000000001). But when I add 1 to that result it returns 1.0 instead of 1.0000000000000001.
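The collapse is easy to reproduce on its own; sys.float_info.epsilon below is the spacing between 1.0 and the next representable double:

import sys

print(1 / 10**16)              # 1e-16
print(1 + 1e-16)               # 1.0 -- the small term is lost
print(sys.float_info.epsilon)  # 2.220446049250313e-16, the gap between 1.0 and the next double
print(1 + 1e-16 == 1.0)        # True, because 1e-16 is less than half that gap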
I presumed that some default rounding in Python might be changing the value. I decided to use round(<float>, 64) # where <float> is any computation taking place in the code above, to try to get 64 digits after the decimal point. But I was still stuck with the same result, 1.0, when the addition was performed.
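For what it is worth, rounding after the fact cannot bring the digit back, because the addition has already produced exactly 1.0 before round() ever sees it:

x = 1 + 1e-16        # already rounded to exactly 1.0 at this point
print(round(x, 64))  # 1.0 -- round() cannot restore digits that were never stored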
Can someone guide me, or point me in a direction where I can learn or read further about this?
You are using the double-precision binary floating-point format, with 53 bits of significand precision, which is not quite enough to represent your fraction:
10000000000000001/10000000000000000 = 1.0000000000000001
See IEEE 754 double-precision binary floating-point format: binary64
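You can see the same thing from Python itself with the standard-library fractions module, which stores the ratio exactly until it is converted to a 53-bit double:

from fractions import Fraction

exact = Fraction(10**16 + 1, 10**16)  # the ratio 1.0000000000000001, stored exactly
print(float(exact))                   # 1.0 -- the nearest representable double
print(float(exact) == 1.0)            # True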
Mathematica can operate in precisions higher than the architecturally imposed machine precision.
See Wolfram Language: MachinePrecision
The Mathematica screenshot below shows that you would need a significand precision higher than 53 bits to obtain a result other than 1. N numericises the fractional result to the requested precision; machine precision is the default, and higher-precision calculations are done in software.
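If you want to stay in Python, a comparable software-based higher precision is available through the standard-library decimal module. The sketch below uses an arbitrarily chosen 40-digit context and a hand-typed 36-digit value of pi:

from decimal import Decimal, getcontext

getcontext().prec = 40  # 40 significant digits, far more than a double's ~16

PI = Decimal("3.14159265358979323846264338327950288")  # pi, typed in to 36 significant digits

def e_to_power_pi(number):
    n = Decimal(number)
    return (1 + 1 / n) ** (n * PI)

print(e_to_power_pi(10**16))  # ~23.14069263..., no longer collapses to 1.0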