
Program efficiency: Algorithm vs Implementation


I was watching a lecture about program efficiency where the professor said:

"If I measure running time, it will certainly vary as the algorithm change. (...) But one of the problems is that it will also vary as a function of the implementation (...) If I use a loop that's got a couple of more steps inside of it in one algorithm than another, it's going to change the time."

I am having a hard time wrapping my head around the implementation's influence.

So my question is: Why can't we consider those extra loop steps inside of one algorithm, when compared to the other, simply something that is necessary for it to run and that is also a part of the algorithm's efficiency? Or did I completely miss the point here?

Thank you!


Solution

  • They are pointing out the difference between "algorithm" and "specific code written in a programming language". "Algorithm" is somewhat of a vague term and "algorithms" are often described in pseudo-code that can be either very detailed, or not detailed at all.

    For instance, consider the following algorithm, to test whether a number n is prime or not:

    • If any number d between 2 and the square root of n divides n:

        • Return False (n is not prime)

    • Else:

        • Return True (n is prime)

    How exactly do you loop over the numbers between 2 and the square root? You could do:

    // IMPLEMENTATION A
    #include <cmath>   // for sqrt
    
    bool is_prime(int n)
    {
        int s = (int)sqrt(n);   // compute the square root once, before the loop
        for (int d = 2; d <= s; d++)
        {
            if (n % d == 0)
                return false;
        }
        return true;
    }
    

    or:

    // IMPLEMENTATION B
    
    bool is_prime(int n)
    {
        // no sqrt call: the condition d * d <= n is re-evaluated on every iteration
        for (int d = 2; d * d <= n; d++)
        {
            if (n % d == 0)
                return false;
        }
        return true;
    }
    

    Both functions implement the algorithm I described. However, they might not have exactly the same running time: the former computes sqrt(n) once before the loop, while the latter recomputes d * d on every iteration.

    Computer scientists want to be able to discuss the complexity of the algorithm that I described above in pseudo-code. And they don't want someone to give the boring answer "Sorry, the complexity of that algorithm is impossible to calculate, because we don't know if it's implementation A or implementation B".