programming-languages · concurrency · closures · multicore

Why are closures suddenly useful for optimizing programs to run on multiple cores?


I read an article that claims that closures (or "blocks") are a useful weapon in the "War on Multicores", because

[...] they allow you to create units of work, which each have their own copy of the stack, and don’t step on each others toes as a result. What’s more, you can pass these units around like they are values, when in actual fact they contain a whole stack of values (pun intended), and executable code to perform some operation.

Now, I am not debating the usefulness of closures in general, and possibly also for concurrent programming in a shared-memory model, but what is the difference compared to a thread that only acts on local data (or a process, or an actor, ...)?

Isn't a closure on its own as useful for concurrent programming as a thread without a scheduler?

And what about closures that have non-local side effects?


Solution

  • The argument is that having closures in your programming language makes it easier to have some work done in another thread. I think the author should also have mentioned the importance of higher-order functions in that argument.

    My favorite introduction to higher-order functions is "Why Functional Programming Matters"; I won't try to present a bad replica of it here.

    So using closures doesn't give you parallelism for free if you're going to execute the closures in for loops, e.g.

    for (int i = 0; i < numElements; i++) {
      // nothing here tells the compiler that the iterations are independent
      result[i] = closure(inputs[i], i);
    }
    

    because the language can't tell whether closure(a, b) somehow reads or writes other elements of the result or inputs arrays. But a higher-order function like map comes with a contract: the function you pass in only sees the element it is given and only produces that element's result, so it can't interfere with the rest. So code like the following, which is common in functional languages, can be parallelized for you, without you having to create a pool of worker threads and hand the closure off to them:

    results = map(closure, inputs, [0..numElements-1]);
    
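    To make that last point concrete, here is a minimal Haskell sketch (my own illustration, not from the original answer, using parMap from the parallel package; the squares function and its name are invented for the example). The call site reads exactly like an ordinary map, yet the runtime is free to evaluate the elements on different cores:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Square every element of a list. parMap hands each element to the
    -- runtime as an independent spark; no worker-thread pool is coded here.
    squares :: [Int] -> [Int]
    squares inputs = parMap rdeepseq (\x -> x * x) inputs

    Compiled with GHC's -threaded flag and run with +RTS -N, the sparks are spread over the available cores while the calling code never mentions threads at all.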

    In these languages, closures take away the pain of declaring a new function somewhere for short pieces of code. That makes it more fun to use higher-order functions.

    The following Haskell code defines a function f that takes a list of numbers and returns a list in which each input i is replaced by 2i+1. Because the closure saves you the hassle of declaring a separate named function to compute 2i+1, this is one line of code instead of two.

    f nums = map (\i -> 2*i+1) nums
    
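    Continuing the earlier sketch (again my own addition, assuming the same import of parMap and rdeepseq from Control.Parallel.Strategies), making that one-liner parallel is just a matter of swapping the higher-order function:

    -- Same 2i+1 computation, but each element may be evaluated on a different core.
    f nums = parMap rdeepseq (\i -> 2*i + 1) nums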

    Again, see "Why Functional Programming Matters" for strong arguments as to how this scales up to real code bases.