Here is an algorithm I am trying to analyse (see below). I do not understand why it has O(n) time complexity while merge sort has O(n log n), since they both seem to be doing the same thing. Both recursion trees look identical to me: if you let j be the level of the tree, then level j does 2^j × c(n/2^j) = cn work, and both trees have log n levels, where n is the number of elements.
Algorithm: BinarySum(A, i, n)
Input: An array A and integers i and n.
Output: The sum of the n integers in A starting at index i.
if n = 1 then
    return A[i]
return BinarySum(A, i, ⌈n/2⌉) + BinarySum(A, i + ⌈n/2⌉, ⌊n/2⌋)
thanks, daniel
You are processing each member of the array for constant time. No matter how you do this, the resulting complexity will be O(n). By the way, if you trace a small example with pen and paper, you'll see that the elements are in fact visited in exactly the order they appear in the array, which means this algorithm is equivalent to a simple iterative summation.
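To make that concrete, here is a direct Python translation of the pseudocode (my own sketch; the ceiling/floor split handles odd n, and the function name is mine):

```python
def binary_sum(A, i, n):
    """Sum the n elements of A starting at index i, by halving."""
    if n == 1:
        return A[i]
    half = (n + 1) // 2  # ceil(n/2), so an odd range still splits correctly
    # Left half is evaluated first, then the right half, so the
    # elements are visited left to right, just like an iterative loop.
    return binary_sum(A, i, half) + binary_sum(A, i + half, n - half)
```

Each element is touched exactly once, and each call does only constant work besides its two recursive calls.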
A formal proof of the O(n) bound follows directly from the Master theorem. The recurrence relation for your algorithm is
T(n) = 2 T(n/2) + O(1)
(two subproblems of half the size, plus a constant amount of work per call), which is covered by case 1 of the theorem. From this the complexity works out to
O(n^(log_2 2)) = O(n)
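As a quick sanity check (my own illustration, not part of the proof): solving T(n) = 2T(n/2) + O(1) by simply counting calls shows that, for n a power of two, the recursion makes exactly 2n − 1 calls, each doing constant work:

```python
def count_calls(n):
    # Models T(n) = 2*T(n/2) + 1: one unit of work per call,
    # plus two recursive calls on halves. Assumes n is a power of 2.
    if n == 1:
        return 1
    return 1 + 2 * count_calls(n // 2)
```

So the total work is linear in n, matching the Master theorem result.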
As for merge sort, its recurrence relation is
T(n) = 2 T(n/2) + O(n)
which is a totally different story: case 2 of the Master theorem, which gives O(n log n).
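For comparison (again my own sketch), tabulating T(n) = 2T(n/2) + n with T(1) = 1 shows how the linear merge step at every level pushes the total to n·log2(n) + n rather than O(n):

```python
def merge_sort_work(n):
    # Models T(n) = 2*T(n/2) + n with T(1) = 1.
    # Assumes n is a power of 2; the "+ n" is the linear merge cost
    # that BinarySum does not have.
    if n == 1:
        return 1
    return 2 * merge_sort_work(n // 2) + n
```

For n a power of two this evaluates to exactly n·log2(n) + n, i.e. Θ(n log n), whereas the BinarySum recurrence stays at 2n − 1.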