I tried to compare multiple algorithms for finding the largest prime number below a given bound "i". But when I tested my implementations, "aks" was slower than my naive one. I thought AKS was a better primality-test algorithm; did I get this wrong?
def expand_x_1(n):
    c = 1
    for i in range(n // 2 + 1):
        c = c * (n - i) // (i + 1)
        yield c
def aks(p):
    if p == 2:
        return True
    for i in expand_x_1(p):
        if i % p:
            return False
    return True
def all_factors(x):  # naive version
    i = 2
    s = int(x ** 0.5)
    while i <= s:  # must include s itself, or perfect squares slip through
        if x % i == 0:
            return False
        i += 1
    return True
def find(i, func):
    while not func(i):
        i -= 1
    print(i)

%time find(2**16, aks)
%time find(2**16, all_factors)
I compared both and obtained:
65521
CPU times: user 1.7 s, sys: 3.24 ms, total: 1.7 s
Wall time: 1.7 s
65521
CPU times: user 81 µs, sys: 0 ns, total: 81 µs
Wall time: 83.9 µs
When the input is a prime, expand_x_1(n) eventually yields all n//2+1 of its results, but the loop in all_factors(n) goes around only about sqrt(n) times. That's a huge difference right there.
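To put numbers on that gap (a quick sketch; the variable names here are just for illustration):

p = 65521
aks_iterations = p // 2 + 1           # one binomial coefficient per pass
naive_iterations = int(p ** 0.5) - 1  # trial divisors 2 through int(sqrt(p))
print(aks_iterations)    # 32761
print(naive_iterations)  # 254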
But also very much in play: c = c*(n-i)//(i+1) grows without bound across iterations. Add print(c.bit_length()) before the yield, and you'll see that find(65521, aks) is doing arithmetic on integers over 65,000 bits wide before it's done. That too is far more expensive than the 16-bit arithmetic find(65521, all_factors) does.
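A quick way to see this for yourself (a sketch reusing the question's expand_x_1 generator; the helper name is my own):

def widest_coefficient(p):
    # Peak bit width among the binomial coefficients expand_x_1 yields.
    return max(c.bit_length() for c in expand_x_1(p))

print(widest_coefficient(65521))  # over 65,000 bits at the central coefficient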
Note: in practice nobody uses the AKS primality test, because even versions that have been massively optimized by world-class mathematicians are slower than the alternatives. See "AKS Primes algorithm in Python" for more on that.
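For instance, SymPy's isprime (assuming SymPy is installed; it uses strong probable-prime tests, not AKS) finds the same answer essentially instantly:

from sympy import isprime

def find_sympy(i):
    # Same search loop as find() above, but with SymPy's primality test.
    while not isprime(i):
        i -= 1
    print(i)

find_sympy(2**16)  # 65521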