Reading about FRP (Functional Reactive Programming), I'm amazed at how intuitive and logical it seems compared to the standard imperative approach; one thing puzzles me, though: how does the computer not immediately run out of memory doing it?
From what I've gathered from [here], in FRP the complete history (past, present, and future) of a value's changes is first class. That notion immediately rings an alarm bell in my head: it has to eat up memory very fast if the past of a value isn't cleared from memory promptly.
Reading about [Fran], I've noticed that several of the examples have recursively defined functions with no termination condition. If a function never terminates and returns its value to its caller, how is it ever going to get anything done? And for that matter, how does it not blow the stack after a while? Even a lazy language like Haskell runs into stack overflows at some point.
An explanation of these things would be greatly appreciated, as they completely baffle me.
The fact that this can work for simple cases should not be much of a surprise: we already comfortably use infinite data structures in Haskell thanks to laziness and garbage collection. As long as your final result does not depend on having all your values at once, they can be collected as you go along or not forced in the first place.
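As a small illustrative sketch (my own example, not part of the original answer): an infinite list is perfectly usable as long as we only ever demand a finite prefix of it, and the cells behind the point we've traversed become garbage as soon as nothing else refers to them.

```haskell
-- An infinite list of naturals; laziness means only the demanded
-- prefix is ever constructed.
naturals :: [Integer]
naturals = [0 ..]

-- Demands only the first n elements; the rest of the infinite list
-- is never built, and consumed cells can be collected as we go.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) naturals)
-- firstSquares 5 == [0,1,4,9,16]
```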
This is why this classical Fibonacci example runs in constant¹ space: previous entries in the list are not needed once the next two are calculated, so they are collected as you go along—as long as you do not have any other pointers to the list.
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)
Try running this function for different inputs and looking at the memory usage. (Run it with +RTS -s.)
(If you want a more detailed explanation with diagrams, take a look at this post I wrote.)
The point is, even if an unbounded amount of information is available to the programmer, we can still garbage collect most of it if nothing else depends on it.
Exactly the same logic can be used to implement FRP programs efficiently.
Of course, everything is not that easy. In the fib example, memory usage would go way up if we kept an active pointer to the beginning of the fibs list. The same thing happens with FRP if you have a computation that depends on too much past data: it's called a time leak.
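You can see the difference directly in the Fibonacci example. This is a hypothetical sketch of my own: goodFib streams through the list, so earlier cells become garbage, while badFib retains a top-level pointer to the head of the list, so every cell it has ever produced stays live — the list analogue of an FRP time leak.

```haskell
-- Streams through the list: the local fibs is only reachable from
-- within one call, so cells behind the traversal can be collected.
goodFib :: Int -> Integer
goodFib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

-- A top-level binding (a CAF) keeps a live pointer to the head of
-- the list for the life of the program, so nothing is ever collected.
fibsRetained :: [Integer]
fibsRetained = 0 : 1 : zipWith (+) fibsRetained (drop 1 fibsRetained)

badFib :: Int -> Integer
badFib n = fibsRetained !! n
```

Both compute the same answers; they differ only in what stays reachable between calls.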
Dealing with time leaks is one of the open problems with implementing an efficient, well-behaved FRP framework. It's difficult to provide expressive FRP abstractions without allowing the possibility of poor or even catastrophic memory usage. I believe most current approaches end up providing abstract FRP types along with a blessed set of operations that is less likely to cause these sorts of leaks; a particularly extreme form of this is Arrowized FRP which does not provide a behavior/signal type at all but rather expresses everything with transformations between signals (as arrows).
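To make that last point concrete, here is a minimal (and much simplified) sketch of the signal-function style used by Arrowized FRP libraries such as Yampa. The names here are illustrative, not taken from any particular library: a signal function maps an input sample to an output sample plus a continuation, so no history accumulates unless a particular signal function deliberately carries state.

```haskell
-- A signal function: one input sample in, one output sample out,
-- plus the signal function to use for the next sample.
newtype SF a b = SF { step :: a -> (b, SF a b) }

-- A stateless lifting of a pure function.
arrSF :: (a -> b) -> SF a b
arrSF f = SF $ \a -> (f a, arrSF f)

-- A stateful signal function: running sum of its inputs. The seq
-- keeps the accumulator strict, so only one number of state is
-- retained no matter how long the signal runs.
sumSF :: Num a => SF a a
sumSF = go 0
  where go acc = SF $ \a -> let acc' = acc + a
                            in acc' `seq` (acc', go acc')

-- Drive a signal function over a finite list of samples.
runSF :: SF a b -> [a] -> [b]
runSF _      []       = []
runSF (SF f) (a : as) = let (b, sf') = f a in b : runSF sf' as
-- runSF sumSF [1,2,3,4] == [1,3,6,10]
```

The point of the encoding is that the past is not first class: each SF only ever sees the current sample and whatever state it has chosen to keep, which is what makes the memory behaviour predictable.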
I've never tried to implement a nice FRP system myself, so I can't really explain the problems in any more detail. If you're interested in more details on this topic, a great place to look is Conal Elliott's blog—with this post as a good starting point. You can also take a look at some of the papers he's written like "Push-Pull Functional Reactive Programming" as well as other papers on the subject, including some about Arrowized FRP like "Functional Reactive Programming, Continued" (chosen almost at random).
footnotes
¹ It's not really constant space because the intermediate results get bigger themselves. But it should maintain a constant number of list cells in memory.