I have Python code that solves a recursion, and I want it to return fractions.
The problem is that my code returns ridiculous fractions (which are numerically correct), but they are not the smallest fractions possible; I know this because I can solve the formula by hand.
Here's my code:
from __future__ import division
import sys
from fractions import Fraction

def t(n):
    if n == 0:
        return 0
    else:
        return 1/(4 - t(n-1))

print(Fraction(t(int(sys.argv[1]))))
If you run this code with python fraction.py 2, you should get 4/15, but here's what I get:

4803839602528529/18014398509481984

which is numerically correct, I think: when I evaluate it I get 0.26666666666666666 (maybe the two fractions only differ some digits after the decimal point).
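As a sanity check, converting that printed float straight to a fraction reproduces the same huge numbers, so the big fraction is exactly the float's value (a quick look in the interpreter):

>>> from fractions import Fraction
>>> Fraction(0.26666666666666666)
Fraction(4803839602528529, 18014398509481984)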
What is the problem here?
Convert to Fraction objects before you do the math, not after:
def t(n):
    n = Fraction(n)  # convert up front so all following arithmetic is exact fraction math
    if n == 0:
        return n
    else:
        return 1/(4 - t(n-1))
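With that change, the same script prints the reduced value you expect; a quick check in the interpreter (assuming Python 3):

>>> t(2)
Fraction(4, 15)
>>> print(Fraction(t(2)))
4/15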
If you do the math before converting to a Fraction, the math gets done as floating point math, giving you an imprecise result, which you then convert to an imprecise Fraction.
If you convert first, then the math gets done as fraction math and things stay precise.
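You can see the difference in a one-liner (a minimal sketch in the interpreter):

>>> from fractions import Fraction
>>> Fraction(1) / 3   # fraction math throughout: exact
Fraction(1, 3)
>>> Fraction(1 / 3)   # float math first, then an exact Fraction of the imprecise float
Fraction(6004799503160661, 18014398509481984)

As an aside, if you are already stuck with a float, Fraction.limit_denominator() can recover the nearest simple fraction: Fraction(1 / 3).limit_denominator() gives Fraction(1, 3). But converting first, as above, is the right fix here.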