We compute Fibonacci numbers in two ways, top-down with memoization and bottom-up with dynamic programming:
```python
def fibo_memo(i, memo={}):
    if i <= 0:
        return 0
    elif i == 1:
        return 1
    elif i in memo:
        return memo[i]
    else:
        memo[i] = fibo_memo(i - 2, memo) + fibo_memo(i - 1, memo)
        return memo[i]

def fibo_dp(i):
    if i <= 0:
        return 0
    elif i == 1:
        return 1
    dp = [0] * (i + 1)
    dp[1] = 1
    for j in range(2, i + 1):
        dp[j] = dp[j - 1] + dp[j - 2]
    return dp[i]

assert fibo_memo(100) == fibo_dp(100)
```
Now time it:
```python
i = 10
%timeit fibo_memo(i)  # 73 ns
%timeit fibo_dp(i)    # 309 ns

i = 100
%timeit fibo_memo(i)  # 73 ns
%timeit fibo_dp(i)    # 2.54 µs

i = 1000
%timeit fibo_memo(i)  # 73 ns
%timeit fibo_dp(i)    # 33 µs
```
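(As an aside, `%timeit` is an IPython magic and only works in a notebook or IPython shell. Outside one, the standard-library `timeit` module gives comparable numbers; here is a minimal sketch, re-defining `fibo_dp` so the snippet stands on its own:)

```python
import timeit

# standalone copy of fibo_dp so this snippet runs outside the notebook
def fibo_dp(i):
    if i <= 0:
        return 0
    dp = [0] * (i + 1)
    dp[1] = 1
    for j in range(2, i + 1):
        dp[j] = dp[j - 1] + dp[j - 2]
    return dp[i]

# time 10_000 calls and report the mean per call, roughly what %timeit reports
t = timeit.timeit(lambda: fibo_dp(100), number=10_000)
print(f"{t / 10_000 * 1e6:.2f} µs per call")
```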
Why does memoization result in almost constant time, unlike dynamic programming?
The mutable default argument of `fibo_memo`, which holds the memoized values, persists between iterations of `%timeit`, so all but the first iteration complete near-instantly. (In fact, for `i = 10` and `i = 100`, the first iteration is probably near-instant too, since the `assert` check already populated the memoization dict up to 100.)
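To see the pitfall in isolation, here is a minimal sketch (the function and names are made up for illustration):

```python
def append_item(x, bucket=[]):
    # `bucket` is created once, when the function is defined,
    # and the same list object is reused on every call
    bucket.append(x)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]: the default list persisted across calls
```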
For a like-for-like comparison, `fibo_memo` should be called with an empty memoization dict each time:
```python
i = 10
%timeit fibo_memo(i, {})
%timeit fibo_dp(i)

i = 100
%timeit fibo_memo(i, {})
%timeit fibo_dp(i)

i = 1000
%timeit fibo_memo(i, {})
%timeit fibo_dp(i)
```
In my quick run on Google Colab, this shows very similar scaling behaviour for both solutions (roughly the expected O(N), with the memoization solution taking about 3 times as long as the DP solution for each value of `i`).
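As an aside, the standard-library `functools.lru_cache` gives the same memoization without the mutable-default pitfall, and its `cache_clear()` method makes resetting the cache between timing runs easy. A sketch (`fibo_cached` is a made-up name for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibo_cached(i):
    # same recursion as fibo_memo, but the cache lives on the wrapper,
    # not in a default argument
    if i <= 0:
        return 0
    if i == 1:
        return 1
    return fibo_cached(i - 2) + fibo_cached(i - 1)

fibo_cached(100)
fibo_cached.cache_clear()  # reset between timing runs for a fair comparison
```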