I'm trying to understand, time-complexity-wise, what to expect when running Fibonacci with plain recursion vs. recursion with memoization.
I expect recursion alone - O(2^n) - to be slower than recursion with memoization - O(n). Memoization is one way to do dynamic programming, an optimization technique, so I'd expect it to be faster.
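To sanity-check that exponential expectation, here is a quick call-counting sketch (the countedFib name and the calls counter are just for illustration, not part of the LeetCode solution):

static long calls = 0;

static int countedFib(int n) {
    calls++;                              // count every invocation
    if (n < 2) {
        return n;
    }
    return countedFib(n - 1) + countedFib(n - 2);
}

Running countedFib(30) makes about 2.7 million recursive calls, and each increment of n multiplies the count by roughly 1.6 (the golden ratio), which is why the plain recursive version blows up exponentially.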
First, the run using recursion alone.
Time: 10ms, LeetCode: beats 23%
public int fib(int n) {
    if (n < 2) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
Second, the run with memoization and recursion.
Time: 29ms, LeetCode: beats 13%
public int fib(int n) {
    if (n < 2) {
        return n;
    }
    int[] dp = new int[n + 1];
    if (dp[n] != 0) {
        return dp[n];
    }
    int result = fib(n - 1) + fib(n - 2);
    dp[n] = result;
    return result;
}
You can see that using recursion alone is faster. Why?
You're not actually memoizing. The array isn't shared from one call to the next; each call creates its own fresh array, so dp[n] is always 0 at the point of the check and the cache lookup never hits. It's slow because you do extra work, allocating an array on every call, but never get any benefit: the function is still just as tree-recursive as before.
If you want to memoize, you need to share the data structure that stores your memoized results, so that a new call can access the results of previous calls.
There are different ways to do that; one very basic way is to pass the memoized data as another argument:
// initial call used by the outside world, sets up the array
public int fib(int n) {
    int[] dp = new int[n + 1];
    return fib(n, dp);
}

// overloaded version that takes the memo data, used internally
public int fib(int n, int[] dp) {
    if (n < 2) {
        return n;
    }
    if (dp[n] != 0) {
        return dp[n];
    }
    int result = fib(n - 1, dp) + fib(n - 2, dp);
    System.out.printf("call %d = %d\n", n, result);
    dp[n] = result;
    return result;
}
With the printf added to show that each value gets calculated only once, calling fib(10) prints:
call 2 = 1
call 3 = 2
call 4 = 3
call 5 = 5
call 6 = 8
call 7 = 13
call 8 = 21
call 9 = 34
call 10 = 55
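Another option, sketched here under the assumption that you can add a field to the class (the memo name is just illustrative), is to keep the memoized values in a shared field instead of threading them through as a parameter:

import java.util.HashMap;
import java.util.Map;

// field shared across all calls, so later calls see earlier results
private final Map<Integer, Integer> memo = new HashMap<>();

public int fib(int n) {
    if (n < 2) {
        return n;
    }
    Integer cached = memo.get(n);   // reuse a previously computed value if we have one
    if (cached != null) {
        return cached;
    }
    int result = fib(n - 1) + fib(n - 2);
    memo.put(n, result);            // store the result for future calls
    return result;
}

The trade-off is that the memo now persists between separate top-level calls to fib, which is usually fine for Fibonacci, whereas the two-argument version starts with a fresh array on each outside call.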