python-3.x, algorithm, runtime, big-o, computer-science

Finding the worst-case runtime using O-notation


I am new to O-notation and am trying to determine the tightest worst-case runtime bound of my code, choosing from these given bounds:

  1. O(1)
  2. O(log n)
  3. O(n)
  4. O(n log n)
  5. O(n**2)
  6. O(n**3)
  7. O(2**n)

As practice, I am adding the tightest bound as a comment on all my code, but I couldn't figure out which of these cases applies to abstract list functions. As an example, I have two simple functions below to find the sums:

examples:

def sums1(L):
  even = list(filter(lambda x: x%2==0, L))
  odd = list(filter(lambda x: x%2==1, L))
  return even + odd

def sums2(L):
  if L == []: 
    return 0
  s1 = sum(L)
  s2 = sums2(L[1:])
  s3 = sums2(L[2:])
  return s1+s2+s3

From my understanding, the first function would run in O(n^2) because two abstract lists are being added, and the second would run in O(2^n). However, I couldn't find any similar code online to check whether I was right about this, so I thought I'd ask here. Any guidance on how many times either will run is greatly appreciated. :D


Solution

  • Regarding sums1, I believe you will have a complexity of O(n), where n is the number of elements in L. To elaborate:

    def sums1(L):
        even = list(filter(lambda x: x%2==0, L)) # O(n)
        odd = list(filter(lambda x: x%2==1, L))  # O(n)
        return even + odd                        # O(n)
    

    Creating the even and odd lists takes O(n) each, since each filter call traverses L once (so L is traversed twice in total; see the single-pass sketch below).

    Concatenating the two lists, even and odd, is also linear, O(n): even + odd builds a new list by copying the elements of both, which is O(len(even) + len(odd)) = O(n).
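
    As an aside on traversing L twice: the two filter passes could be collapsed into a single loop. A sketch of that idea is below (the name sums1_single_pass is just for illustration); the bound is still O(n), only the constant factor shrinks:

    def sums1_single_pass(L):                    # illustrative sketch, not the original code
        even, odd = [], []
        for x in L:                              # one traversal of L: O(n)
            (even if x % 2 == 0 else odd).append(x)
        return even + odd                        # concatenation: O(n)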
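
    If you want to sanity-check the linear behaviour empirically, a rough timing sketch like the one below can help: doubling n should roughly double the runtime (exact numbers will depend on your machine).

    import timeit

    for n in (10_000, 20_000, 40_000, 80_000):
        L = list(range(n))
        t = timeit.timeit(lambda: sums1(L), number=10)   # time 10 calls of sums1
        print(f"n={n:>6}  time={t:.4f}s")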