Tags: algorithm, recursion, call, dynamic-programming, memoization

If memoization is top-down depth-first, and DP is bottom-up breadth-first, what are the top-down breadth-first / bottom-up depth-first equivalents?


I just read this short post about mental models for Recursive Memoization vs Dynamic Programming, written by Professor Krishnamurthi. In it, Krishnamurthi represents memoization's top-down structure as a recursion tree, and DP's bottom-up structure as a DAG whose source vertices are the first (and likely smallest) subproblems solved, and whose sink vertex is the final computation (essentially the same graph as the recursion tree, but with all the edges flipped). Fair enough, that makes perfect sense.
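
To make the two mental models concrete, here is a minimal Fibonacci sketch (my own illustration, not from the post): the memoized version walks the recursion tree top-down and depth-first, caching results on the way back up, while the DP version fills a table in the opposite direction, from the DAG's source vertices toward its sink.

    from functools import lru_cache

    # Top-down, depth-first: plain recursion plus a cache (memoization).
    # Each call descends into its sub-problems first; results are stored on
    # the way back up, so shared sub-problems are computed only once.
    @lru_cache(maxsize=None)
    def fib_memo(n: int) -> int:
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    # Bottom-up: iterate from the smallest sub-problems (the DAG's sources)
    # toward the final answer (the sink), so every dependency is already in
    # the table by the time it is needed.
    def fib_dp(n: int) -> int:
        table = [0] * (n + 1)
        if n >= 1:
            table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    assert fib_memo(30) == fib_dp(30) == 832040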

Anyway, towards the end he gives a mental exercise to the reader:

Memoization is an optimization of a top-down, depth-first computation for an answer. DP is an optimization of a bottom-up, breadth-first computation for an answer.

We should naturally ask, what about

  • top-down, breadth-first
  • bottom-up, depth-first

Where do they fit into the space of techniques for avoiding recomputation by trading off space for time?

  • Do we already have names for them? If so, what?, or
  • Have we been missing one or two important tricks?, or
  • Is there a reason we don't have names for these?

However, he stops there, without giving his thoughts on these questions.


I'm lost, but here goes:

My interpretation is that a top-down, breadth-first computation would require a separate process for each function call. A bottom-up, depth-first approach would somehow piece together the final solution, as each trace reaches the "sink vertex". The solution would eventually "add up" to the right answer once all calls are made.

How off am I? Does anyone know the answer to his three questions?


Solution

  • @AndyG's comment is pretty much on point here. I also like @shebang's answer, but here's one that directly answers these questions in this context, rather than by reduction to another problem.

    It's just not clear what a top-down, breadth-first solution would look like. But even if you somehow paused the computation so as not to do any sub-computations (one could imagine various continuation-based schemes that might enable this), there would be no point in doing so, because the sub-problems would still be shared, and exploiting that sharing is exactly what memoization already does.

    Likewise, it's unclear that a bottom-up, depth-first solution could solve the problem at all. If you proceed bottom-up, charging all the way up some spine of the computation, but the other sub-problems' solutions aren't already lying in wait, then you'd be computing garbage.

    Therefore, top-down, breadth-first offers no benefit, while bottom-up, depth-first doesn't even offer a solution.

    Incidentally, a more up-to-date version of the above blog post is now a section in my text (this is the 2014 edition; expect updates).
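
For what it's worth, here is a small illustrative sketch (mine, not from the answer or the linked text) of what a top-down, breadth-first attempt at Fibonacci might look like: a worklist expands the sub-problem DAG level by level starting from the goal, but that expansion only discovers work, and the levels are not even a safe evaluation order, so getting an actual answer still comes down to filling an ordinary bottom-up table over exactly the sub-problems memoization would have cached. That is the "no benefit" point in concrete form.

    from collections import deque

    def fib_top_down_breadth_first(n: int):
        """Hypothetical sketch: expand fib(n)'s sub-problem DAG top-down and
        breadth-first, then see what it actually takes to produce an answer."""
        # Phase 1: breadth-first *discovery*, starting from the goal. This only
        # finds work; it cannot finish any call, because every node still waits
        # on the nodes below it.
        seen = {n}
        frontier = deque([n])
        levels = []
        while frontier:
            levels.append(list(frontier))
            nxt = deque()
            for k in frontier:
                for dep in (k - 1, k - 2):
                    if dep >= 0 and dep not in seen:
                        seen.add(dep)
                        nxt.append(dep)
            frontier = nxt

        # Phase 2: the breadth-first levels are not a safe evaluation order (a
        # node and one of its dependencies can land on the same level, e.g. 3
        # and 2 below), so we still fill a table in dependency order, which is
        # exactly ordinary bottom-up DP over the same set of distinct
        # sub-problems memoization would have solved.
        table = {}
        for k in sorted(seen):
            table[k] = k if k < 2 else table[k - 1] + table[k - 2]
        return table[n], levels

    value, levels = fib_top_down_breadth_first(6)
    print(value)   # 8
    print(levels)  # [[6], [5, 4], [3, 2], [1, 0]]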