Tags: concurrency, theory, quantum-computing

Turing completeness and parallel programming (true concurrency)


I often see people say that if you can do X in some language, you can do it in another language too, which is the Turing-completeness argument. You'll often see it (usually in a snide comment) as "sure, you can do that with Y, because Y is also Turing complete."

I took CS theory a long time ago, but I don't think this is always true, because I'm not sure where Turing machines fit into concurrency. For example, in some programming languages (with the right hardware) you can make things execute at exactly the same time, but in others that is not possible.

I understand this is probably more of a hardware/driver issue than a language issue, but I'm curious whether, or how, concurrency changes what it means to be Turing complete. Can you be more than Turing complete?

EDIT: The original reason I asked this question was in large part quantum computing. The accepted answer doesn't say this, but quantum computing is (ostensibly) a subset of what Turing machines can compute.


Solution

  • This is a confusing topic for many people; you're not alone. The issue is that there are two different definitions of "possible" in play here. One definition of "possible" is how you're using it: is it possible to do concurrency, is it possible to operate a giant robot using the language, is it possible to make the computer enjoy strawberries, etc. This is the layman's definition of "possible".

    Turing completeness has nothing to do with what's possible in the above sense. Certainly, concurrency isn't possible everywhere because (for at least some definition of concurrency) it needs to be the case that the language can produce code that can run on two or more different processors simultaneously. A language that can only compile to code that will run on a single processor thus would not be capable of concurrency. It could still be Turing-complete, however.

    Turing completeness has to do with the kinds of mathematical functions that can be computed on a machine given enough memory and running time. For the purposes of evaluating mathematical functions, a single processor can do everything multiple processors can, because it can emulate multiple processors by executing one processor's logic after the other (a minimal sketch of this interleaving idea appears after this answer). The (unproven and unprovable, though falsifiable) statement that all mathematical functions that could be computed on any device are computable using a Turing machine is the so-called Church-Turing thesis.

    A programming language is called Turing-complete if you can prove that you can emulate any Turing machine using it. Combined with the Church-Turing thesis, this implies that the programming language is capable of evaluating every type of mathematical function that any device could evaluate, given enough time and memory. Most languages are Turing-complete because this only requires the capacity to allocate dynamic arrays and use loops and if-statements, as well as some basic data operations (the second sketch after this answer shows how little machinery that is).

    Going in the other direction, since a Turing machine can be constructed to emulate any processor we currently know of and programming languages do what they do using processors, no current programming language has more expressive power than a Turing machine, either. So the computation of mathematical functions is equally possible across all common programming languages. Functions computable in one are computable in another. This says nothing about performance - concurrency is essentially a performance optimization.
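
As a rough illustration of the answer's point that a single processor can emulate several by running their logic in turn, here is a minimal Python sketch (not from the original answer). The tasks and function names (`make_counter`, `run_interleaved`) are made up for this example; the only point is that interleaving on one thread of control produces the same results as true parallel execution, just not in the same wall-clock time.

```python
# Round-robin interleaving: one thread of control "emulates" two processors
# by running one small step of each task in turn. The final results are the
# same as if the tasks had run on separate processors at once.

def make_counter(limit):
    """A toy 'task': sums 1..limit one step at a time (a generator)."""
    total = 0
    for i in range(1, limit + 1):
        total += i
        yield None          # pause point: hand control back to the scheduler
    return total

def run_interleaved(*tasks):
    """Advance each generator-based task one step at a time until all finish."""
    results = [None] * len(tasks)
    pending = list(enumerate(tasks))
    while pending:
        still_running = []
        for idx, task in pending:
            try:
                next(task)                  # one step of this "processor"
                still_running.append((idx, task))
            except StopIteration as done:   # task finished; record its result
                results[idx] = done.value
        pending = still_running
    return results

print(run_interleaved(make_counter(5), make_counter(10)))   # [15, 55]
```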
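
And a minimal sketch of why loops, if-statements, and dynamically growable storage go such a long way: a toy Turing machine simulator. The transition table and the bit-flipping example machine are invented purely for illustration, but the simulator itself uses nothing beyond a dictionary as an unbounded tape, a while-loop, and table lookups.

```python
# A bare-bones Turing machine simulator: a dict serves as an unbounded tape,
# a while-loop drives execution, and the transition-table lookup decides
# what to write, where to move, and which state comes next.

def run_tm(transitions, tape_input, start_state, blank="_"):
    tape = {i: ch for i, ch in enumerate(tape_input)}   # sparse, unbounded tape
    head, state = 0, start_state
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example machine (made up for illustration): flip every bit, then halt.
FLIP = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),
}

print(run_tm(FLIP, "10110", "flip"))   # 01001
```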