I have tried to create a program that calculates the base-10 logarithm using the Taylor-series-based algorithm described in "The Mathematical-Function Computation Handbook" (I found an online copy via my University's library).
A similar algorithm is given in another question on StackOverflow, for which I cannot find the link right now.
10.3.2 Computing logarithms in a decimal base
For a decimal base, the base-10 logarithm is the natural choice, and the decomposition of the argument into an exponent and a fraction gives us a decimal representation: x = (−1)^s × f × 10^n, where either f = 0 exactly, or f is in [1/10, 1).
If f ≤ √(1/10), set f = 10 × f and n = n − 1, so that f is now in the interval (√(1/10), √10]. Then introduce a change of variable, a Taylor-series expansion, and a polynomial representation of that expansion:
z = (f − 1)/(f + 1),
f = (1 + z)/(1 − z),
D = 2 log10(e) = 2/log(10),
log10(f) = D × (z + z^3/3 + z^5/5 + z^7/7 + z^9/9 + z^11/11 + · · ·)
         ≈ D × z + z^3 Q(z^2), where the polynomial fit incorporates D in Q(z^2).
For f in (√(1/10), √10], we have z in roughly [−0.5195, +0.5195]. The wider range of z requires longer polynomials compared to the binary case, and also makes the correction term z^3 Q(z^2) relatively larger. Its magnitude does not exceed |0.35z|, so it provides barely one extra decimal digit of precision, instead of two. Accurate computation of z is easier than in the binary case: just set z = fl(fl(f − 1/2) − 1/2)/fl(f + 1).
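To see why this works: the series in parentheses is the Taylor expansion of atanh(z) = ½·log((1+z)/(1−z)) = ½·log(f), so multiplying by D = 2·log10(e) converts the natural log to base 10. Here is a minimal numeric check of the identity (f_test is my own example value in (√(1/10), √10], not from the book):

import math

# Check: log10(f) = D*(z + z^3/3 + z^5/5 + ...) with z = (f-1)/(f+1)
f_test = 2.5
z = (f_test - 1.0)/(f_test + 1.0)
D = 2.0*math.log10(math.e)                     # = 2/ln(10) ~ 0.868588963806504
series = sum(z**k/k for k in range(1, 41, 2))  # odd powers 1, 3, ..., 39
print(D*series, math.log10(f_test))            # both ~ 0.39794000867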
Based on that description, I wrote this program in Python:
def log10(x):
    n = 0.0  # Start exponent of base 10
    while (x >= 1.0):
        x = x/10.0
        n += 1
    # if x <= sqrt(1/10)
    if (x <= 0.316227766016838):
        x = x*10.0
        n = n-1
    # Produce a change of variable
    z = (x-1.0)/(x+1.0)
    D = 4.60517018598809  # 2*log10(e)
    sum = z
    for k in range(3, 111, 2):
        sum += (z**k)/k
    return D*n*sum
I compared the results to the math.log10 function, and the results are not as expected. My biggest issue when debugging is understanding the algorithm and why it works.
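For reference, this is the kind of side-by-side check I ran (the test loop and its values are my own scaffolding, not from the Handbook); with the version above the two columns disagree badly:

import math

for x in [0.001, 0.5, 2.0, 97.0, 12345.0]:
    print(x, log10(x), math.log10(x))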
Here is my source code after the suggested corrections: I changed the return statement to D*sum + n, fixed the value of D, and changed if (x <= 0.316227766016838) to while (x <= 0.316227766016838). I also added some if statements to handle exceptional cases.
The code below works well within my target precision of 6 digits (I tested it with very small and very large inputs).
def log10(x):
    # Handle exceptional cases
    if (x == 1):
        return 0
    if (x == 0):
        return float('-Inf')
    if (x < 0):
        return float('nan')
    n = 0  # Start exponent of base 10
    while (x >= 1.0):
        x = x/10.0
        n += 1
    # While x <= sqrt(1/10), scale x up into (sqrt(1/10), sqrt(10)]
    while (x <= 0.316227766016838):
        x = x*10.0
        n = n-1
    # Produce a change of variable
    z = (x-1.0)/(x+1.0)
    D = 0.868588964  # 2*log10(e)
    # Taylor series
    sum = z
    for k in range(3, 23, 2):
        sum += (z**k)/k
    return D*sum + n
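To sanity-check the 6-digit claim, one can compare against math.log10 over several magnitudes (the test values are my own choice):

import math

for x in [1e-7, 0.02, 1.0, 3.16227766, 12345.0, 1e12]:
    print(x, abs(log10(x) - math.log10(x)))  # differences should be < 1e-6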