My way of calculating pi is:
import math
a = 2
n = int(input('number:'))
for i in range(n):
    a = 2 - math.sqrt(4-a)
    print((2**(i+2))*math.sqrt(a))
If this worked correctly, the printed values would get very close to pi. But when I tried it, the output seemed to converge toward pi at first, then grew larger and larger, reached 4, and finally dropped to 0.
How can I solve this problem?
The formula you are implementing is mathematically correct, but it assumes that all computations are carried out exactly. Computers, however, represent floating-point numbers with only finite precision, and every arithmetic operation introduces rounding error. Your recurrence is especially vulnerable: as a approaches 0, sqrt(4 - a) approaches 2, so the subtraction 2 - math.sqrt(4 - a) cancels almost all significant digits (catastrophic cancellation), and the multiplication by 2**(i+2) then amplifies whatever error remains. On a typical 64-bit float build, a eventually gets pinned to the smallest representable increments (its last nonzero value, 2**-52, happens to print as exactly 4), and on the next step 4 - a rounds to exactly 4, so a, and with it the printout, collapses to 0, which is precisely the behavior you observed.
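You can watch the cancellation happen with ordinary floats. Here is a minimal sketch (the sample values of a are arbitrary) comparing the computed value of 2 - math.sqrt(4 - a) with its true size, which is approximately a/4 for small a:
import math

# For small a, the exact value of 2 - sqrt(4 - a) is about a / 4, but in
# double precision sqrt(4 - a) rounds to a number almost indistinguishable
# from 2, so most significant digits of the difference are lost.
for a in (1e-8, 1e-12, 1e-15, 1e-16):
    computed = 2 - math.sqrt(4 - a)
    print(a, computed, a / 4)  # computed drifts away from a / 4, then hits 0.0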
To alleviate this issue, you can use Python's decimal module, which lets you perform computations with higher precision than standard Python floats provide. For example, you can try the following:
from decimal import Decimal, getcontext

# set precision to 30 significant digits
getcontext().prec = 30

n = 10
a = Decimal(2)
for i in range(n):
    # 4 - a stays a Decimal, so the square root is taken at full precision
    a = 2 - Decimal(4 - a).sqrt()
    print((Decimal(2)**Decimal(i + 2)) * a.sqrt())
This gives:
3.06146745892071817382767987224
3.12144515225805228557255789567
3.13654849054593926381425804437
3.14033115695475291231711852416
3.14127725093277286806201976961
3.14151380114430107632851506557
3.14157294036709138413580013914
3.14158772527715970062885414319
3.14159142151119997399797180376
3.14159234557011774234037803153
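As an aside, there is also a purely algebraic fix that works even with ordinary floats: multiplying 2 - sqrt(4 - a) by its conjugate 2 + sqrt(4 - a) gives the mathematically equivalent update a / (2 + sqrt(4 - a)), which adds two nearly equal numbers instead of subtracting them, so no cancellation occurs. A minimal sketch of this standard rearrangement (not part of your original code):
import math

# Equivalent recurrence with the dangerous subtraction removed:
# 2 - sqrt(4 - a) == a / (2 + sqrt(4 - a))
a = 2.0
for i in range(30):
    a = a / (2 + math.sqrt(4 - a))
    print((2**(i + 2)) * math.sqrt(a))
With this form the printed values settle near 3.141592653589793 instead of overshooting and collapsing to 0.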