Tags: algorithm, math, time, time-complexity

Does the choice of compiler and language affect Time Complexity?


When people speak about the time complexity involved in solving mathematical problems using computer science strategies, does the choice of compiler, computer, or language have an effect on the running times a single equation might produce? Would running an algorithm in assembly on an x86 machine produce a faster result than implementing the same formula in Java on an x64 machine?

Edit: This question sort of branches off from the original: if the choice of compiler and language doesn't matter, is the algorithm itself the single determining factor of its time complexity?


Solution

  • The whole point of asymptotic analysis is that we don't have to consider the nuances introduced by varying compilers, architectures, implementations, etc. As long as the algorithm is implemented in a way that follows the assumptions used in the time analysis, other factors can be safely ignored. (A timing sketch at the end of this answer makes this concrete.)

    I'm primarily talking about non-trivial algorithms, though. For example, I might have the code:

        for (int i = 0; i < N; i++) { }
    

    Standard analysis would say that this snippet of code has a runtime of O(N). Yet a good compiler would realize that the loop is merely a nop and optimize it away, leaving you with O(1). However, compilers are not smart enough (yet) to make asymptotically significant optimizations to anything non-trivial.
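
    By contrast, a loop whose result depends on runtime input cannot be precomputed or thrown away, so the O(N) analysis stands. A minimal Java sketch (the method name and parameter are illustrative, not from the question):

        // Summing externally supplied data: the compiler cannot know the
        // contents of `data` ahead of time, so every element must actually
        // be read and the loop genuinely costs O(N).
        static long sum(int[] data) {
            long total = 0;
            for (int i = 0; i < data.length; i++) {
                total += data[i];
            }
            return total;
        }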

    An example where the usual analysis assumptions do not hold is when you don't have random-access memory. So you have to be sure that your programming platform meets all of these assumptions; otherwise, a different analysis will have to be performed.
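
    To make the random-access point concrete: in Java, the same source-level loop can have different complexities depending on the list's cost model. The helper below is a hypothetical sketch: with an ArrayList (array-backed, O(1) indexed access) the loop is O(N); with a LinkedList, get(i) walks the list node by node, so each call is O(i) and the same loop becomes O(N²).

        import java.util.List;

        // Identical source code, two different cost models:
        //  - ArrayList: get(i) is O(1), so this loop is O(N).
        //  - LinkedList: get(i) traverses from an end, so each call is O(i)
        //    and the loop as a whole is O(N^2).
        static long total(List<Integer> xs) {
            long sum = 0;
            for (int i = 0; i < xs.size(); i++) {
                sum += xs.get(i);
            }
            return sum;
        }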
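
    Finally, the timing sketch promised above (a rough illustration, not a rigorous benchmark; JIT warm-up and clock resolution are ignored). The absolute times it prints will differ across machines, languages, and compilers, but doubling N should roughly double the elapsed time on all of them, and that growth ratio is exactly what O(N) captures.

        public class GrowthDemo {
            // A linear amount of work; the result is returned and printed
            // so the JIT cannot eliminate the loop as dead code.
            static long work(int n) {
                long sum = 0;
                for (int i = 0; i < n; i++) {
                    sum += i;
                }
                return sum;
            }

            public static void main(String[] args) {
                for (int n = 1_000_000; n <= 8_000_000; n *= 2) {
                    long start = System.nanoTime();
                    long result = work(n);
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    System.out.println("N = " + n + ": " + elapsedMs
                            + " ms (result " + result + ")");
                }
            }
        }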