Tags: algorithm, runtime, time-complexity, theory

Runtime of Algorithms in Various Programming Languages


How would the run time of algorithms differ in various programming languages? For example, assuming that an algorithm's run time is stated as Θ(n²), would it differ in any way if it were run in Haskell, Java, C, etc.?

EDIT: My question has been answered, but I would still like additional input if you guys have the time to spare. My professor is letting us do our own programming project for the quarter, and this was an idea I came up with; I just want to make sure that there's enough for me to discuss. I am having second thoughts on this, so if anyone else would like to suggest other ideas or modify/build upon this one, I'd greatly appreciate it.


Solution

  • The time complexity of an algorithm is heavily tied to the specific way in which it's implemented. For example, Dijkstra's algorithm runs in time O(m + n log n) if you use a Fibonacci heap, in time O(m log n) if you use a binary heap, and in time O(n²) if you use an array-based priority queue.

    I mention this because if you say that a particular algorithm runs in time Θ(n²), you're saying that the algorithm, when implemented with specific data structures in a specific way, will run in time Θ(n²). Provided that you can faithfully implement the appropriate data structures and other basic operations in the programming language of your choice, the runtime will then be Θ(n²). The constant factors might vary greatly based on the particular language and the particular way in which that Θ(n²) algorithm is translated into the language, but there's no fundamental reason that an algorithm in C should run asymptotically faster than the same algorithm in Java, provided that it can also be represented in Java.
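
    To make the data-structure dependence concrete, here is a minimal sketch of Dijkstra's algorithm using std::priority_queue as the binary heap, which gives the O(m log n) bound above (this is C++; the function and variable names are illustrative, not anything from the question). Swapping in a different priority queue changes the bound, not the algorithm:

    ```cpp
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Minimal sketch: Dijkstra with a binary heap (std::priority_queue).
    // Each edge relaxation may push one entry into the heap, so the total
    // running time is O(m log n).
    std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int src) {
        const long long INF = std::numeric_limits<long long>::max();
        std::vector<long long> dist(adj.size(), INF);
        // Min-heap ordered by (distance, vertex).
        std::priority_queue<std::pair<long long, int>,
                            std::vector<std::pair<long long, int>>,
                            std::greater<>> pq;
        dist[src] = 0;
        pq.push({0, src});
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d > dist[u]) continue;  // stale heap entry; skip it
            for (auto [v, w] : adj[u]) {
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.push({dist[v], v});
                }
            }
        }
        return dist;
    }
    ```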

    That said, certain programming languages might not support, or at least not efficiently support, certain operations. Purely functional Haskell code, ignoring various monadic tricks, doesn't support variable assignment and mutation, so many algorithms that run very quickly in an imperative model won't work efficiently in Haskell. There's been a lot of research into purely functional algorithms and data structures, and there's a result (IIRC) that says that any imperative algorithm can be converted into a functional algorithm with at most an O(log n) slowdown. In some cases, there's no slowdown at all (for example, in purely functional binomial heaps or red/black trees). If you look at Haskell and then allow for state using monadic tricks, then to the best of my knowledge there's no slowdown, because the underlying implementation can optimize the code directly into an imperative equivalent.
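
    To make the structure-sharing idea behind those functional data structures concrete, here is a minimal sketch (in C++, to keep all the examples in one language; the types and names are illustrative) of a persistent stack: every push returns a new version that shares its tail with the old one, so old versions stay valid without ever being copied:

    ```cpp
    #include <cstdio>
    #include <memory>

    // Minimal sketch of a persistent (immutable) stack with structural
    // sharing: push/pop return new versions in O(1) without mutating or
    // copying the old one, the same idea used by purely functional
    // binomial heaps and red/black trees.
    struct Node {
        int value;
        std::shared_ptr<const Node> next;
    };
    using Stack = std::shared_ptr<const Node>;

    Stack push(Stack s, int v) {
        // The new node points at the old stack; the old version stays valid.
        return std::make_shared<const Node>(Node{v, std::move(s)});
    }

    Stack pop(Stack s) { return s ? s->next : nullptr; }

    int main() {
        Stack a = push(push(nullptr, 1), 2);  // a = [2, 1]
        Stack b = push(a, 3);                 // b = [3, 2, 1], shares [2, 1] with a
        std::printf("%d %d\n", b->value, a->value);  // prints: 3 2
    }
    ```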

    As another example, single-threaded programming languages might not be able to express parallel algorithms, so even if a parallel algorithm runs in, say, time Θ(f(n)), the best you can do in a language without threads might be ω(f(n)), i.e., asymptotically slower. Languages that don't support bit manipulation (if they exist) might not be able to take advantage of certain bit-hacky tricks that shave O(log n) factors off of certain types of algorithms.
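
    As a small, self-contained example of the kind of bit trick meant here (the function names are illustrative): clearing the lowest set bit with x &= x - 1 lets you count set bits in one step per set bit rather than one step per bit position:

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Per-bit loop: O(w) iterations for a w-bit word.
    int popcount_loop(std::uint64_t x) {
        int count = 0;
        for (; x != 0; x >>= 1) count += static_cast<int>(x & 1);
        return count;
    }

    // Kernighan's trick: x &= x - 1 clears the lowest set bit, so the
    // loop runs once per set bit instead of once per bit position.
    int popcount_kernighan(std::uint64_t x) {
        int count = 0;
        for (; x != 0; x &= x - 1) ++count;
        return count;
    }

    int main() {
        std::printf("%d %d\n", popcount_loop(0b101101),
                    popcount_kernighan(0b101101));  // prints: 4 4
    }
    ```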

    In practice, you do sometimes see a slowdown when implementing algorithms in different programming languages, because there are subtle differences between how those languages implement certain structures. For example, if you're not used to C++, it's easy to inadvertently pass a lot of objects by value rather than by pointer or reference, adding extra cost from copying those objects. If you don't already know that std::map is usually implemented as some balanced BST, you might end up introducing an extra O(log n) factor due to lookup costs. That said, this is less about the specific language than it is about the specific implementation within that language.
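
    Both pitfalls are easy to demonstrate in a short sketch (the names are illustrative): passing a container by value silently copies it on every call, and std::map lookups cost O(log n) where a hash table gives expected O(1):

    ```cpp
    #include <map>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Pass by value: the entire vector is copied on every call, an O(n)
    // cost that is invisible in the function body itself.
    long long sum_by_value(std::vector<int> v) {
        long long s = 0;
        for (int x : v) s += x;
        return s;
    }

    // Pass by const reference: no copy; the cost is just the loop.
    long long sum_by_ref(const std::vector<int>& v) {
        long long s = 0;
        for (int x : v) s += x;
        return s;
    }

    int main() {
        // std::map is typically a red/black tree: O(log n) per lookup.
        std::map<std::string, int> tree_map{{"a", 1}};
        // std::unordered_map is a hash table: expected O(1) per lookup.
        std::unordered_map<std::string, int> hash_map{{"a", 1}};
        std::vector<int> v(1000, 1);
        return static_cast<int>(sum_by_value(v) - sum_by_ref(v) +
                                tree_map.at("a") - hash_map.at("a"));
    }
    ```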

    Hope this helps!