Tags: haskell, java-stream, lazy-evaluation, evaluation

What is the trade-off between Lazy and Strict/Eager evaluation?


So this concept of lazy evaluation gets thrown around a lot, especially when reading about functional programming, Java streams, etc.

Streams are lazy; computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.

Haskell is lazy. That means that unless specifically told otherwise, Haskell won't execute functions and calculate things until it's really forced to show you a result.

Now the way I have understood this is that if I have a List of data I wish to perform N operations on, lazy evaluation will only ever make 1 pass over the entire list as opposed to N. Why is this so desirable ? It seems to me that making N passes over a single list results in the same number of operations as making 1 pass over the list but performing N operations at each element contained in the list.

My questions are:

  1. Is lazy evaluation always good and if not, what trade off are we making by accepting it ?
  2. How to analyze the performance of lazy algorithms ?
  3. What are some typical use cases of lazy evaluation ?
  4. Does a programmer have any control over this ? Can I make lazy functions in a language which does not support lazy evaluation right out of the box ?

Could someone please answer this in a language-agnostic manner, since I am more curious about the concept than about any particular language.


Solution

  • To a certain extent this is a topic you could write a book about, but I think we can give a StackOverflow-sized overview.

    A quick note on terminology

    Technically speaking, strict-vs-non-strict and eager-vs-lazy are two different distinctions talking about different things. Strictness is technically a property of program semantics, used when we're talking about a level where there is no such thing as actual computers, RAM, evaluation, etc. Whereas lazy evaluation is a strategy for actually evaluating programs, and eagerness is the opposite strategy.

    However generally one uses lazy evaluation (at the level of the entire language) for the goal of implementing a non-strict semantics, and if one wants a strict semantics one uses eager evaluation. So lazy and non-strict often get used interchangeably when being less formal, and likewise eager and strict are often used interchangeably.

    1. Is lazy evaluation always good and if not, what trade off are we making by accepting it ?

    It is absolutely not always good. Lazy evaluation is generally regarded as worse for performance than eager evaluation; it usually involves allocation of memory structures that "remember" the operation for later, which is slower than just doing the operation if you're definitely going to do it anyway.

    Naively doing everything lazily typically adds a constant factor overhead over doing exactly the same operations eagerly. The constant factor is mostly small enough to not make a huge difference. But if the operation is very small and would only produce an immediate value (things like a machine integer, rather than a heap-allocated object), then the overhead of laziness is still only a constant factor but that constant factor is quite large relative to the "intrinsic" cost of the operation; if your program is mostly doing this kind of thing then lazy evaluation does make a significant negative difference.

    Lazy evaluation also makes it very easy to lose track of the exact order various operations will be executed in. Rather than things being done in the order you wrote the code they are done in an order determined by the data dependencies between operations; things are only executed when their results are needed. Often this "need" is determined by code that is very non local.

    In pure functional code this often doesn't matter very much, since the result of such code is determined purely by the code you wrote, regardless of the order in which various things are executed. In computer science theory, analysing a simple pure lambda calculus, there is a hard mathematical proof that if any evaluation order of a program can produce a well-defined result then lazy evaluation will produce that result; eager evaluation may encounter errors or infinite loops that lazy evaluation would avoid. This means a pure functional programmer doesn't have to actually care very much about exactly what order things will be executed in. No matter what execution order they have in their head, if it produces a well-defined result then actual lazy evaluation will produce the same result, even if the execution order they had in their head is different from actual lazy evaluation. (This is assuming that the language faithfully transmits the properties that were proved of a simple lambda calculus, of course.)

    In code that has side effects, losing track of the order in which operations will be executed is a nightmare for the programmer. It makes it very easy to make mistakes that are incredibly hard to debug. If two pieces of code will be executed and both change a shared variable, you need to be able to easily and accurately predict the order they will run in order to know the final state of the variable. So programmers writing impure code require a pretty thorough operational understanding of the behaviour of the compiler/interpreter. For this reason you basically never see "all operations are lazy by default" in a language that allows untracked side effects; if these languages support lazy evaluation directly they typically require the programmer to explicitly opt-in to lazy evaluation for parts of their code, and trust the programmer to only do that where it is safe (i.e. where they have written pure code even though the language will not enforce this).
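
    To make that concrete with a mainstream example: Java streams are an opt-in lazy pipeline, and even there a side effect buried inside the pipeline runs a different number of times, and at different moments, than reading the code top-to-bottom would suggest. (This is just an illustrative snippet, not anyone's real code.)

    ```java
    import java.util.List;
    import java.util.Optional;

    public class LazySideEffects {
        public static void main(String[] args) {
            List<Integer> nums = List.of(1, 2, 3, 4, 5);
            int[] calls = {0};

            Optional<Integer> first = nums.stream()
                    .map(x -> { calls[0]++; return x * 10; })  // impure: counts how often map runs
                    .filter(x -> x >= 30)
                    .findFirst();                              // short-circuiting terminal operation

            // Reading the code in source order suggests 5 map calls, but the lazy
            // pipeline only maps elements until the first match: 3 calls here.
            System.out.println(first.get() + " after " + calls[0] + " map calls");
        }
    }
    ```

    The result here is still well defined, but the number and timing of the side effects is decided by the consumer (findFirst), not by the order the code was written in; once shared mutable state is involved, that is exactly the kind of thing that becomes hard to reason about.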

    So why do we want lazy evaluation at all?

    I've now made it sound like lazy evaluation is always bad. But there are some large caveats. Sometimes lazy evaluation improves performance, or allows an algorithm to work at all.

    Frequently this is when a computation makes a pass over a very large data set; lazily evaluated code might be able to process this whole data set without ever needing it all to be resident in memory at once; this can make a massive difference to performance. But sometimes also lazy evaluation just performs its operations in an order that is better for the CPU cache, garbage collector, etc, even when eager evaluation of the same code wouldn't use significantly more memory.
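
    A rough sketch of the "very large data set" point, again using Java streams as the lazy pipeline: the source below is conceptually unbounded, yet the whole computation runs in constant memory, because elements are produced one at a time, only as the terminal operation demands them.

    ```java
    import java.util.stream.Stream;

    public class LazyBigData {
        public static void main(String[] args) {
            // Conceptually infinite source; nothing is computed until the reduce asks for it.
            long sum = Stream.iterate(1L, i -> i + 1)
                    .map(i -> i * i)          // applied per element, on demand
                    .limit(1_000_000)         // the consumer decides how much gets produced
                    .reduce(0L, Long::sum);   // terminal operation drives the pipeline

            System.out.println(sum);  // sum of the first million squares
        }
    }
    ```

    An eager version of the same pipeline would have to materialise a million-element collection before the summation could even start.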

    Lazy evaluation also often enables more de-coupled code. Code that produces a data structure can be written in a simple direct style to produce "all" of it, even if that is infinite. Code that consumes the data structure simply examines as much of the structure as it wants to, and by examining it will cause the producer to actually run "just enough" to produce the needed data. So the amount of the data structure that is produced can be made to be exactly as much as the consumer needs, no matter how it determines that, without the producer being aware of the consumer at all.

    Under eager evaluation any data structure must be produced in its entirety before a consumer can look at any of it. If that is undesirable (because the structure is very large or takes a very long time to finish), then we need a way for the producer to only produce some of the structure. This usually then involves additional arguments to control how much is produced, may involve additional complexity in the data structure to allow the consumers to differentiate between "this is as far as we've generated so far" and "this is where the data really ends", might need the producer to be capable of resuming production from a previous partial result, etc. This can easily add a lot of complexity to code implementing a fairly simple idea, and the additional complexity often ends up coupling the producer to the consumer much more tightly than a lazy producer and consumer need to be.

    That previous discussion might have been a little abstract. As an example, consider a program that produces a move-tree for analysis of a game like chess. A lazy producer can just return a tree of every possible move in every possible position, without knowing anything about what anyone wants to do with it. It might produce a structure Move with the fields player, startingSquare, endingSquare describing the move itself, and another field followOnMoves that is simply a list of every possible Move that could occur after this one; each of those Moves will of course again contain another list of possible follow on moves, and so-on to infinity.

    If this was produced by a lazy function, the consumer can just explore the tree without having to know anything about how it was produced. Each of those fields (but most significantly followOnMoves) will not actually exist when the consumer starts running, they will just contain lazy references to the code that needs to be run to populate them, if the consumer ever actually wants to look at them. So if the consumer was doing something like minimax pruning, the producer will just automatically never waste time producing the parts of the tree that the consumer doesn't decide to look at. Several different consumers can exist that do different things with the same data structure, causing the same single producer code to generate different parts of the tree automatically. Which parts of the tree are needed can even be determined interactively by a human user! The producer and consumer implementations can be very independent from one another; basically all they share is the definition of that simple data type Move.

    An eager producer simply can't return a Move tree like this, as it is essentially infinite (I think under some competition rules chess technically isn't infinite, as there's a limit on the number of times a position can be repeated, but the whole tree is still impractically vast). Either it has to return a small portion of the move tree (which means it needs to know what kinds of portions are useful to the consumer, essentially embedding the consumer logic into the producer), or it has to expose various functions that only perform single steps, and the consumer is then responsible for calling those single-step functions when it wants more data (essentially embedding the producer logic into the consumer).

    Either way, the two sides may have to know a lot more about the implementation of the other in order to cooperate on the strategy for generating data as and when it is needed. You can design good solutions to this problem that still leave the eager producer and eager consumer reasonably decoupled, but designing a good interface that is flexible enough for all uses while still being performant can be a tricky problem, and quite often it simply isn't a problem you need to think about when your code is lazily evaluated.
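
    To make the contrast concrete, here is a rough sketch (not a real chess engine; Move, makeMove and legalMovesAfter are made-up names, and move generation is stubbed out) of how an eager language can recover some of that decoupling by putting the follow-on moves behind an explicitly deferred computation:

    ```java
    import java.util.List;
    import java.util.function.Supplier;

    public class MoveTree {
        // followOnMoves is deferred: the subtree below a move only gets generated
        // if a consumer actually calls followOnMoves().get().
        record Move(String player, String from, String to, Supplier<List<Move>> followOnMoves) {}

        static Move makeMove(String player, String from, String to) {
            return new Move(player, from, to, () -> legalMovesAfter(player, to));
        }

        // Stub standing in for real move generation; it recurses "forever",
        // but only as deep as consumers actually explore.
        static List<Move> legalMovesAfter(String lastPlayer, String square) {
            String next = lastPlayer.equals("white") ? "black" : "white";
            return List.of(makeMove(next, square, "d4"), makeMove(next, square, "d5"));
        }

        public static void main(String[] args) {
            Move root = makeMove("white", "e2", "e4");
            // A consumer that looks only one level deep; deeper levels are never built.
            for (Move reply : root.followOnMoves().get()) {
                System.out.println(reply.player() + ": " + reply.from() + " -> " + reply.to());
            }
        }
    }
    ```

    A consumer doing minimax-style pruning simply never calls get() on the branches it prunes, so the producer never does that work; in a lazy-by-default language the Supplier wrapping is implicit and the data type is just a plain tree.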

    2. How to analyze the performance of lazy algorithms ?

    This part I really don't think I can sum up well.

    Basic big-O complexity analysis still works, and doesn't even change very much if the computation isn't very fundamentally using laziness. If the operations performed are exactly the same regardless, just in a different order, you can just do the same big-O analysis you would do if the code was strictly evaluated. (Big-O complexity doesn't account for effects like cache locality, extra memory for thunks, or running out of memory, of course)

    When the algorithm more fundamentally relies on laziness (and on things not being executed at all if they're not needed), then this won't work of course. But I don't think I can do that topic justice here, any more than I could explain "how to analyze the performance of eager algorithms" in a single post.

    3. What are some typical use cases of lazy evaluation ?

    This is far too broad. How would you answer "what are some typical use cases of eager evaluation?" The answer to both is really "all the typical use cases of programming in general". Every task can be implemented either way, but some things are just done differently when you're working with eager or lazy evaluation; you would choose different algorithms to implement the task.

    However as I've mentioned above, one general thing I can say is that lazy evaluation can be particularly ergonomic in cases where an eager algorithm needs a lot more code to explicitly manage when and how much of a very large data set is in memory at once.

    Lazy evaluation is also critical for a lot of control structures, in any language. For example, if/then/else wouldn't be very useful if the then and else parts were always evaluated before you could even start executing the conditional selection logic. So almost every language has this very limited sort of "laziness" built in for a few specific parts of the syntax. But in a language where everything is lazy you can make your own control structures. In Haskell things analogous to while loops and for-each loops can be simply implemented as ordinary library code, with no need for the compiler to specially implement them. So this is another "typical use case" that stands out in comparison to eager evaluation.
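
    To illustrate the if/then/else point in an eager language: a user-defined conditional evaluates both branches before it even starts running, unless the branches are explicitly deferred (the myIf-style names below are made up for the example):

    ```java
    import java.util.function.Supplier;

    public class HomemadeIf {
        // Eager version: both branch arguments are fully evaluated before the call happens,
        // so this is useless as a real conditional.
        static int myIfEager(boolean cond, int thenVal, int elseVal) {
            return cond ? thenVal : elseVal;
        }

        // Deferred version: each branch is a Supplier and only the chosen one is ever run.
        static int myIfLazy(boolean cond, Supplier<Integer> thenVal, Supplier<Integer> elseVal) {
            return cond ? thenVal.get() : elseVal.get();
        }

        static int explode() { throw new RuntimeException("this branch should never run"); }

        public static void main(String[] args) {
            // myIfEager(true, 1, explode()) would throw, because explode() runs
            // while the arguments are being evaluated, before myIfEager is entered.
            System.out.println(myIfLazy(true, () -> 1, HomemadeIf::explode));  // prints 1
        }
    }
    ```

    In a lazy-by-default language every argument already behaves like the deferred version, which is why new control structures can be ordinary library code there.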

    4. Does a programmer have any control over this ? Can I make lazy functions in a language which does not support lazy evaluation right out of the box ?

    If you have first-class functions (or other features that can simulate them) then you can always simulate lazy evaluation. Instead of relying on the runtime system implicitly creating a thunk (which is what we call the in-memory record of an operation that will be run later when required), you can just explicitly store a function that will generate the value later and explicitly call it when required. It takes a little more finesse to ensure that such a function is only ever run to produce the value once, no matter how many references there may be - but that too can be done. Some languages even have enough flexibility to wrap all this up in an interface that makes it look like you're just using values as normal, keeping the thunk-functions under the hood.
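
    As a minimal sketch of that idea in an eager language (LazyCell and force are hypothetical names, not a standard library type): the Supplier is the explicit thunk, and force() memoises it so the computation runs at most once.

    ```java
    import java.util.function.Supplier;

    public class LazyCell<T> {
        private Supplier<T> compute;   // the deferred computation (the "thunk")
        private T value;               // cached result once forced
        private boolean forced = false;

        public LazyCell(Supplier<T> compute) { this.compute = compute; }

        // Runs the computation at most once, however many times force() is called.
        // Not thread-safe; a real version would need synchronisation.
        public T force() {
            if (!forced) {
                value = compute.get();
                forced = true;
                compute = null;        // drop the closure so it can be garbage collected
            }
            return value;
        }

        public static void main(String[] args) {
            LazyCell<Integer> answer = new LazyCell<>(() -> {
                System.out.println("computing...");   // visible side effect: shows single evaluation
                return 6 * 7;
            });
            System.out.println(answer.force());  // prints "computing..." then 42
            System.out.println(answer.force());  // prints 42 again, without recomputing
        }
    }
    ```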

    Languages with lazy-by-default evaluation also typically allow the programmer to explicitly make certain things eager. A lazy language aiming for good performance will also often have an optimising compiler that aims to detect when an operation does not benefit from laziness and perform it eagerly instead. Haskell, for example, promises you a non-strict semantics by default, and we usually think of it as using lazy evaluation to achieve that, but it actually does a lot of optimisation and will evaluate lots of your code eagerly; it just promises not to do so where that could change the result of your code, and tries not to do it where it would make your code slower.

    So whether you're working in a lazy-by-default language or an eager-by-default language, you would have some ability to opt-in to the other evaluation strategy (though with varying amounts of effort required).