I've been playing with the Wolfram Language and noticed something: the same function written in different ways can have wildly different running times.
Consider these two functions:
NthFibonacci[num_] :=
  If[num == 0 || num == 1, Return[1],
    Return[NthFibonacci[num - 1] + NthFibonacci[num - 2]]
  ]
Fibn[num_] := {
  a = 1;
  b = 1;
  For[i = 0, i < num - 1, i++,
    c = a + b;
    a = b;
    b = c;
  ];
  Return[b];
}
NthFibonacci[30]
takes around 5 seconds to evaluate.
Fibn[900000]
also takes around 5 seconds to evaluate.
So does the built-in Fibonacci[50000000].
I simply can't understand why there are such differences in speed between the three. In theory, recursion should be more or less equivalent to a For loop. What is causing this?
It's because the recursive version you present does lots and lots of repeated calculations. Build a tree of the function calls to see what I mean. Even for an argument as small as 4, look at how many function calls are generated to get to a base case down each chain of the logic.
                f(4)
               /    \
           f(3)      f(2)
          /    \    /    \
      f(2)   f(1)  f(1)  f(0)
      /  \
   f(1)  f(0)
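If you want the system to build that tree for you, Trace can filter an evaluation down to just the recursive calls; this is only a quick sketch using the NthFibonacci definition above, and the nesting of the output mirrors the tree:

(* list every recursive call made while evaluating NthFibonacci[4] *)
Trace[NthFibonacci[4], NthFibonacci[_]]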
With your recursion, the number of function calls grows exponentially with the argument num. By contrast, your looped version grows only linearly in num. It doesn't take a very large value of num before num steps are a lot less work than roughly 2^num.