This is my recursive function:
```
function abc(n):
    if n == 0:
        return xyz(n)
    for i = 1 to n:
        print(xyz(n))
    return abc(n/2) + abc(n/2)
```
and `xyz()` is Θ(n^3). Will the master theorem be valid here? If yes, how would I write it?
The master theorem concerns recurrence relations of this form:
T(n) = a * T(n/b) + f(n)
Here `T` is the recursive procedure, `a` the number of subproblems into which we divide the input `n`, `n/b` the size of each subproblem, and `f(n)` the cost of dividing the input into subproblems and combining the results.
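As a small sketch of that shape (my own illustration, not code from the question; modeling the base-case cost as `f(1)` is an assumption), the recurrence can be evaluated directly:

```python
# Generic divide-and-conquer recurrence T(n) = a * T(n/b) + f(n).
# Base-case cost is modeled as f(1) -- an illustrative assumption.
def T(n, a, b, f):
    if n <= 1:
        return f(1)                      # constant work at the bottom
    return a * T(n // b, a, b, f) + f(n)

# Example: a = 2, b = 2, f(n) = n (merge-sort-like shape).
print(T(8, 2, 2, lambda n: n))  # -> 32
```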
If `n == 0`, then `n/b` becomes 0, and so does `a`. This leaves us with:

T(0) = 0 + f(0)

Since there's no more recursion, it basically comes down to `f(0)`. In your hypothetical case this has a complexity of Θ(n^3).
Since `f(n)` is the cost of dividing `n` into `a` subproblems and combining the results, `f(0)` would normally have a cost of 0 or a constant. Even if `f(n)` has a complexity of Θ(n^3), for `n == 0` this still leads to a cost of 0 with regard to input size.
The master theorem provides information on the asymptotic bound for `T(n)`, depending on the complexity of `f(n)`, `a`, and `b`. This depends on how the complexity of `f(n)` can be expressed in a form that employs log_b(a) (log with base b of a). The log of 0 is undefined for b > 0.
What it comes down to is that it makes no sense to ask whether the master theorem holds for some specific input. Moreover, the master theorem always holds; it just states that, depending on `f(n)`, `a`, and `b`, you may or may not be able to make claims about the complexity of `T`. Without that information it is senseless to ask. If your `f(n)` is O(n^3) outside of the base case as well (n > 0), then you could make claims about `T` depending on how that exponent 3 relates to log_b(a). For example, if 3 < log_b(a), you'd be sure that T is Θ(n^(log_b(a))).
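To make that concrete, here's a numeric check under hypothetical parameters of my own choosing (a = 16, b = 2, so log_b(a) = 4 > 3): the recursive part dominates, and T(n) grows like n^4, i.e. doubling n multiplies T by roughly 2^4 = 16.

```python
# Hypothetical recurrence T(n) = 16 * T(n/2) + n^3 (a = 16, b = 2, f(n) = n^3).
# Since 3 < log2(16) = 4, case 1 predicts T(n) = Theta(n^4).
def T(n):
    if n <= 1:
        return 1
    return 16 * T(n // 2) + n ** 3

for k in (4, 8, 10):
    n = 2 ** k
    print(n, T(2 * n) / T(n))   # ratio tends to 16 as n grows
```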
Suppose that the `a` in your algorithm were 2^n; then the master theorem could no longer be used to say anything about T's complexity.
EDIT
After your question edit, the form of your recursive procedure has become this:
T(n) = 2 * T(n/2) + f(n)
So `a == 2` and `b == 2` are the parameters in your case, since you divide the input into two subproblems, each of which gets an input that's half of the original. The combination of the two recursive calls is constant (a simple addition, `abc(n/2) + abc(n/2)`), and the division of the problem is trivial too, but in your case this part simulates a Θ(n^4) algorithm for dividing the input into subproblems:
```
for i = 1 to n
    print(xyz(n))
```
Note that it's Θ(n^4) because you stated `xyz(n)` is Θ(n^3) and you repeat it n times in the loop. So your f(n) = Θ(n^4).
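A quick way to see the n * n^3 count (treating `xyz(n)` as a stand-in that costs exactly n^3 basic steps, which is an assumption for illustration):

```python
# Count the basic steps of the loop, modeling xyz(n) as costing n^3 steps.
def xyz_cost(n):
    return n ** 3                              # stand-in for the Theta(n^3) work of xyz(n)

def loop_cost(n):
    # the loop body runs n times, each time paying xyz_cost(n)
    return sum(xyz_cost(n) for _ in range(n))

print(loop_cost(10))   # 10 * 10^3 = 10000 = 10^4
```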
From Θ alone the master theorem can't state anything yet. However, if f(n) = Ω(n^4) (note the omega here), then 4 > log_2(2) (the log_b(a) with b = 2 and a = 2 in your case), which is the theorem's third case. For that case to make a statement about T's complexity, another condition must also hold, the regularity condition. It states that a * f(n/b) <= k * f(n) must be true for some constant k < 1 and all sufficiently large n.
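The ratio a * f(n/b) / f(n) can be checked numerically (a sketch with a = b = 2, assuming the Θ(n^4) cost is modeled as exactly n^4):

```python
# Regularity condition: a * f(n/b) <= k * f(n) for some k < 1 and large n.
def f(n):
    return n ** 4                       # model of the Theta(n^4) cost

for n in (10, 100, 1000):
    print(n, 2 * f(n / 2) / f(n))       # 2 * (n/2)^4 / n^4 = 1/8 for every n
```

Since the ratio is the constant 1/8 < 1, the regularity condition is satisfied.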
So that gives us 2 * f(n/2) <= k * f(n). With f(n) = n^4 we get 2 * (n/2)^4 = n^4 / 8, so the condition holds for k = 1/8 (indeed for any 1/8 <= k < 1). This, finally, lets us state that T = Θ(f(n)), so T = Θ(n^4).
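As a sanity check (my own sketch, again modeling f(n) as exactly n^4), the full recurrence can be evaluated for powers of two to see the Θ(n^4) growth:

```python
# T(n) = 2 * T(n/2) + n^4; Theta(n^4) means doubling n multiplies T by about 16.
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n ** 4

for k in (4, 8, 10):
    n = 2 ** k
    print(n, T(2 * n) / T(n))   # ratio approaches 2^4 = 16
```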
Meaning that the final part holds if your f(n) (the loop with the xyz call) can be shown to be Ω(n^4) (again, note the omega instead of theta). Since big-omega is a lower bound and your f(n) = Θ(n^4), that is indeed the case.