
Link between models of computation, computer system architectures and programming paradigms


I have been reading about these topics for a while and think I have understood some of it, but I am confused about some of the connections:

i. Turing Machine (the RAM model, to be exact) & Imperative Programming

Lambda Calculus & Functional Programming

ii. Von Neumann System Architecture & Imperative Programming

I almost got the connection in (i), but I got nothing for (ii). However, from Backus's Turing Award lecture, I think there is some link between the two. In many places I have even seen the imperative paradigm written as the 'Von Neumann paradigm'. So did the Von Neumann system architecture somehow help the growth of imperative languages, and would the situation have been different if we had followed some other system architecture - say, the Howard Architecture?


Solution

  • Backus's paper, which you linked, addresses this directly (emphases mine):

    …we can crudely characterize three classes of models for computing systems…

    2.2.1 Simple operational models. Examples: Turing machines, various automata. …

    2.2.2 Applicative models. Examples: Church's lambda calculus [5], Curry's system of combinators [6], pure Lisp [17], functional programming systems described in this paper. Foundations: concise and useful. History sensitivity: no storage, not history sensitive. Semantics: reduction semantics, no states. Program clarity: programs can be clear and conceptually useful.

    2.2.3 Von Neumann models. Examples: von Neumann computers, conventional programming languages. Foundations: complex, bulky, not useful. History sensitivity: have storage, are history sensitive. Semantics: state transition with complex states. Program clarity: programs can be moderately clear, are not very useful conceptually.

    The above classification is admittedly crude and debatable.

    If I could distill this even further:

    • the Von Neumann architecture allows (programmer-written) instructions to update memory (that is, to mutate state).
    • functional programming has no concept of mutable state.

    FP languages such as Haskell currently compile to imperative machine code that executes on Von Neumann computers. Functional programmers largely avoid thinking about mutating memory, preferring to let the compiler figure that part out.
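
    To make the contrast concrete, here is a minimal Haskell sketch (the names `sumImperative` and `sumFunctional` are mine, purely for illustration): the first version explicitly allocates and mutates a memory cell, in the spirit of the von Neumann model, while the second expresses the same computation with no mutable state in the source at all.

    ```haskell
    import Data.IORef (modifyIORef', newIORef, readIORef)

    -- Imperative-style sum: allocate a cell and mutate it on every step,
    -- much like a loop repeatedly storing into one memory location.
    sumImperative :: [Int] -> IO Int
    sumImperative xs = do
      acc <- newIORef 0
      mapM_ (\x -> modifyIORef' acc (+ x)) xs
      readIORef acc

    -- Functional-style sum: no mutable cell appears in the program text;
    -- whatever in-place updates happen at run time are the compiler's business.
    sumFunctional :: [Int] -> Int
    sumFunctional = foldr (+) 0

    main :: IO ()
    main = do
      n <- sumImperative [1 .. 10]
      print n                          -- 55
      print (sumFunctional [1 .. 10])  -- 55
    ```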

    One way to look at this is that FP languages provide a total abstraction over the physical plumbing of the Von Neumann architecture. However, this raises the question of whether a fundamentally different architecture might be a better match for functional languages.

    Which brings us to: the Reduceron. The Reduceron, in its current form, is a graph-reduction machine implemented on a Field-Programmable Gate Array (FPGA), and it demonstrates the potential benefits that a physical machine tailor-made for FP evaluation might have.

    In short, the Reduceron takes a functional program, which is a stateless graph of function applications, and breaks it up into a massive collection of parallel function applications. It then runs those parallel applications on input data.
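
    The Reduceron's real program representation is more involved than this, but a toy Haskell sketch of "a program as a graph of applications" might look like the following; the `Expr` type and `eval` function here are hypothetical illustrations, not the Reduceron's machinery.

    ```haskell
    -- A toy "graph of function applications": primitive operations applied
    -- to sub-expressions. Illustration only, not the Reduceron's format.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr

    -- Each node can be reduced as soon as its inputs are available; since
    -- nothing is mutated, independent sub-expressions could be reduced in
    -- any order (or simultaneously) without changing the result.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- (1 + 2) * (3 + 4): the two additions are independent applications.
    main :: IO ()
    main = print (eval (Mul (Add (Lit 1) (Lit 2)) (Add (Lit 3) (Lit 4))))  -- 21
    ```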

    It can get away with parallelizing your entire program because in FP it doesn't usually matter when a function application is performed: there is no mutable state and therefore no possible race condition. The only possible delay is dependency availability, i.e. do you have the input yet or not. If you do, it is always safe to feed it to the function.
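
    On a conventional machine you can already observe this safety property with GHC's `parallel` library (an ordinary software library, nothing to do with the Reduceron; this sketch is mine): because `fib` below is pure, evaluating all of its applications in parallel cannot introduce a race, and the result is identical to sequential evaluation.

    ```haskell
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A pure stand-in for "some work"; any side-effect-free function would do.
    fib :: Int -> Integer
    fib n
      | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main =
      -- Every application of fib may be evaluated in parallel; with no mutable
      -- state, the order of completion cannot affect the sum.
      print (sum (parMap rdeepseq fib [25 .. 32]))
    ```

    Built with `ghc -O2 -threaded` and run with `+RTS -N`, the work can spread across cores; built without threading it simply runs sequentially, with the same answer.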

    Now, as I understand it (and here I am going out on a limb a bit), an FPGA is a relatively cheap way for researchers to see how these ideas hold up in the physical world. Instead of designing and fabricating a custom integrated circuit and then ordering it in wholesale quantities from e.g. Intel or AMD, academics can simply program the gate array on an individual off-the-shelf FPGA (again, if I understand correctly; I am not a hardware guy).

    Initial results appear very promising! But in practice we don't see hardware manufacturers rushing to put out entire new chip lines for FP languages. The existing knowledge, infrastructure, and demand for something like an Intel CPU are massive. Imperative programming remains far more common than functional, and that seems unlikely to change in the near future.


    Side note: I'm assuming that your question about the "Howard" architecture is a misspelling of "Harvard". The Harvard Architecture is quite similar to the Von Neumann architecture for the purposes of this topic.