Tags: functional-programming, closures, purely-functional

The place of closures in functional programming


I have watched Robert C. Martin's talk "Functional Programming; What? Why? When?": https://www.youtube.com/watch?v=7Zlp9rKHGD4

The main message of the talk is that state is unacceptable in functional programming. Martin goes even further and claims that assignments are 'evil'.

So, keeping this talk in mind, my question is: where is the place for closures in functional programming?

When there is no state and no variables in functional code, what would be the main reason to create and use a closure (a closure that does not enclose any state or any variable)? Is the closure mechanism still useful?

Without state or variables (perhaps with only immutable identifiers), is there even a need to reference the current lexical scope, since there is nothing in it that could change?

In this approach, a Java-like lambda mechanism would be enough, with no live link to the enclosing lexical scope (which is why the captured variables have to be final).
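
For illustration, this is roughly the Java mechanism I mean (a minimal sketch; the class and variable names are just examples):

    import java.util.function.IntUnaryOperator;

    public class CaptureExample {
        public static void main(String[] args) {
            int offset = 3; // effectively final: reassigning it later would not compile
            // the lambda captures the value of offset, not a live link to the scope
            IntUnaryOperator addOffset = x -> x + offset;
            System.out.println(addOffset.applyAsInt(2)); // 5
        }
    }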

Yet in some sources, closures are described as a must-have element of a functional language.


Solution

  • A lexical scope that can be closed over does not need to be mutable to be useful. Just consider curried functions as an example:

    add = \a -> \b -> a + b
    add1 = add 1
    add3 = add 3
    [add1 0, add1 2, add3 2, add3 5] -- [1, 3, 5, 8]
    

    Here, the inner lambda closes over the value of a (or over the variable a, which makes no difference because of immutability).

    Closures are not strictly necessary for functional programming, but neither are local variables. Still, they are both very good ideas. Closures allow a very simple notation for the most(?) important task of functional programming: dynamically creating new functions with specialised behaviour from abstracted code.
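
    To tie this back to the Java setting from the question: Java lambdas are themselves closures over (effectively final) values, so the same currying pattern can be written there as well. A minimal sketch (the class name is just an example):

    import java.util.function.IntFunction;
    import java.util.function.IntUnaryOperator;

    public class CurryExample {
        public static void main(String[] args) {
            // add returns a new function that has closed over the value of a
            IntFunction<IntUnaryOperator> add = a -> b -> a + b;
            IntUnaryOperator add1 = add.apply(1);
            IntUnaryOperator add3 = add.apply(3);
            System.out.println(add1.applyAsInt(0)); // 1
            System.out.println(add1.applyAsInt(2)); // 3
            System.out.println(add3.applyAsInt(2)); // 5
            System.out.println(add3.applyAsInt(5)); // 8
        }
    }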