I have this algorithm:
function f(arr, i, n) {
  if (i == n - 1) {
    return arr[i];
  }
  if (i == 0) {
    return (arr[i] + f(arr, i + 1, n)) / n;
  } else {
    return arr[i] + f(arr, i + 1, n);
  }
}
It is a recursive algorithm, but it only makes one recursive call per invocation.
I thought it could be an algorithm with O(1^n) notation, but if that's the case,
1^1 = 1, 1^2 = 1, 1^3 = 1,
and so on.
So wouldn't it be O(1), i.e. a constant Big O notation?
Before even thinking about time complexity, ask yourself: what does this algorithm even do? This question helps you build intuition for evaluating your analysis. In this case, we're taking the average of an array of numbers.
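To make that concrete, here is a quick trace on a small sample input (the array values are just an illustration):

f([2, 4, 6], 0, 3)
= (2 + f([2, 4, 6], 1, 3)) / 3        // i == 0 branch: divide the whole sum by n
= (2 + (4 + f([2, 4, 6], 2, 3))) / 3  // middle element: just add and recurse
= (2 + (4 + 6)) / 3                   // i == n - 1: base case returns the last element
= 4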
Secondly, what is O(1)? It means that no matter how large the array is, the algorithm takes the same amount of time to run. Your analysis is effectively arguing that taking the average of an array with 15 elements takes the same amount of time as taking the average of an array with 20 million elements. So something is wrong here.
This is O(n) because the algorithm visits each element exactly once, giving the recurrence T(n) = T(n - 1) + 1. There is at most one recursive call per stack frame, and it passes i + 1 to make the next call, stepping through the array much like a loop would, with a constant amount of work per call frame.
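Unrolling that recurrence makes the linear bound explicit (a rough sketch, taking T(1) as the constant-time base case where i == n - 1):

T(n) = T(n - 1) + 1
     = T(n - 2) + 2
     = ...
     = T(1) + (n - 1)
     = O(n)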
As an aside, this is a rather silly algorithm to write recursively, because the problem space isn't broken down much per call. You can easily blow the stack if the array is large enough, and function calls carry a lot of overhead (at least in language implementations without tail call optimization, which effectively turns recursion with no computation after the recursive call into a loop). Not to mention, the code isn't any easier to understand than a loop (see the sketch below). Prefer recursion for problems with logarithmic stack depth rather than linear depth, unless the problem is very small or the recursion tree is wide (exponential, for example), in which case the width of the tree dominates the cost anyway.
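For comparison, here is a minimal iterative sketch of the same computation (the name averageIterative and the plain for-loop are just one possible rewrite, not the only way to do it):

function averageIterative(arr) {
  // Same O(n) time as the recursive version, but O(1) stack space.
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i]; // constant work per element, mirroring each call frame above
  }
  return sum / arr.length;
}

averageIterative([2, 4, 6]); // 4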