How do I find the space and time complexities for this code?


fun root(n) =
    if n > 0 then
        let
            val x = root(n div 4)
        in
            if (2*x+1)*(2*x+1) > n then 2*x
            else 2*x+1
        end
    else 0;

fun isPrime(n, c) =
    if c <= root(n) then
        if n mod c = 0 then false
        else isPrime(n, c+1)
    else true;

The time complexity of the root(n) function here is O(log(n)): the number is divided by 4 at every step, and the work in the function body itself is O(1). The time complexity of the isPrime function is O(sqrt(n)), as it runs iteratively from 1 to sqrt(n). What I'm stuck on is the order of both functions together: would it just be O(sqrt(n)), would it be O(sqrt(n)*log(n)), or something else altogether?

I'm new to big O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence... If you guys could point me towards a few resources to help me practice calculating, it would be a great help.


Solution

  • root(n) is O(log₄(n)), yes.

    isPrime(n,c) is O((√n - c) · log₄(n)):

    • You recompute root(n) on every step even though it never changes, which contributes the "... · log₄(n)" factor (see the counting sketch after this list).
    • You iterate c from some starting value up to root(n); while c is bounded above by root(n), it is not bounded below: c could start at 0 (though n mod 0 would raise Div), at an arbitrarily large negative number, at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at a small constant such as 2, then isPrime(n,c) is O(√n · log₄(n)).
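
    Here is a minimal sketch of that sanity check in SML, assuming the root from the question is in scope; rootCalls, countedRoot, and isPrimeCounted are made-up names for illustration, not part of your code:

    val rootCalls = ref 0

    (* same as root, but counts how many times it gets invoked *)
    fun countedRoot n =
        (rootCalls := !rootCalls + 1; root n)

    (* your original loop, with the counting wrapper swapped in *)
    fun isPrimeCounted (n, c) =
        if c <= countedRoot n then
          if n mod c = 0 then false
          else isPrimeCounted (n, c + 1)
        else true

    (* isPrimeCounted (97, 2) returns true and leaves !rootCalls at 9:
       one full root computation per loop step, roughly √97 of them. *)

    Each loop step pays the full O(log₄(n)) cost of root, which is exactly where the √n · log₄(n) product comes from.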

    You probably want to prove this either by induction or by reference to the Master Theorem. You may also want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.

    For example:

    fun isPrime n =
        let
          (* compute the integer square root once, up front *)
          val sq = root n
          (* trial division: test each candidate divisor from 2 up to sq *)
          fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
        in
          check 2
        end
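
    A quick REPL check of this version (outputs worked out by hand; assumes root from the question is in scope):

    isPrime 7;   (* val it = true : bool *)
    isPrime 9;   (* val it = false : bool *)
    isPrime 97;  (* val it = true : bool *)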
    

    This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.

    First it computes root n once at O(log₄(n)).
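
    That log₄(n) bound falls out of the recurrence (a sketch, not a formal proof): root does O(1) work plus one recursive call on n div 4, so T(n) = T(n/4) + O(1). By the Master Theorem (a = 1, b = 4, f(n) = O(1)) this gives T(n) = O(log n), and the base only shifts a constant factor, since log₄(n) = log(n) / log(4).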

    Then it loops from 2 up to root n once at O(√n).
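
    To see why the log term is lower-order: for n = 10⁶, √n = 1000 while log₄(10⁶) ≈ 10, so the loop dominates the one-time root computation.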

    Note that neither of us has proven anything formally at this point.

    (Edit: Changed check (n, 0) to check (n, 2), since duh.)

    (Edit: Removed n as argument from check since it never varies.)

    (Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)