c++, algorithm, recursion, path-finding, backtracking

Optimisation of the backtracking algorithm for all paths in a grid from top left to bottom right visiting each square just once


The problem statement: Calculate the number of paths in an n×n grid from the upper-left corner to the lower-right corner such that the path visits each square exactly once. For example, in a 7×7 grid, there are 111712 such paths.

The algorithm and optimisation: We begin with a straightforward backtracking algorithm, and then optimise it step by step using observations of how the search can be pruned.

This is the part I don't understand:

Optimisation 1: In any solution, we first move one step down or right. There are always two paths that are symmetric about the diagonal of the grid after the first step. Hence, we can decide that we always first move one step down (or right), and finally multiply the number of solutions by two.

I understand that there will be symmetric solutions so we can just find half of them and multiply the number by 2. But how would we implement that?


What does this sentence mean?:

Hence, we can decide that we always first move one step down (or right)

Does it mean that the first move will always be down (or right)? Basically fixing the first move for the base recursion? Or does it mean doing that for each recursive step? Or does it mean something totally different? I'm having a lot of trouble understanding this. Kindly explain in detail.


Solution

  • For every path (e.g. "RDDRRD") there is a mirror path where you swap D and R (e.g. "DRRDDR"). The text is saying that if you find all paths that start with D, you can trivially generate all paths that start with R by performing that inversion.

    Thus, you can simply assume the first letter is D and multiply the resulting number of paths by 2.

    For the solution involving dynamic programming this will not make much of a difference, but for a naive backtracking solution this is a factor of 2! A minimal sketch of that backtracking version is shown below.
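
In case a concrete illustration helps, here is a minimal C++ sketch of the naive backtracking counter with Optimisation 1 applied. The names (`search`, `visited`, `paths`) and the exact structure are my own choices rather than code from the book; the point to notice is that only the initial call is constrained (the first move is forced to be "down"), while every later recursive step still tries all four directions.

```c++
#include <iostream>

const int N = 7;                 // grid size; the 7x7 grid has 111712 paths
bool visited[N][N];              // squares already on the current path
long long paths = 0;             // number of complete paths found
int cellsVisited = 0;            // squares on the current path so far

// Candidate moves: down, up, right, left
const int dy[] = {1, -1, 0, 0};
const int dx[] = {0, 0, 1, -1};

void search(int y, int x) {
    visited[y][x] = true;
    cellsVisited++;

    if (y == N - 1 && x == N - 1) {
        // Reached the lower-right corner: count the path only if it
        // covered every square exactly once.
        if (cellsVisited == N * N) paths++;
    } else {
        // Ordinary recursive step: try all four directions.
        for (int d = 0; d < 4; d++) {
            int ny = y + dy[d], nx = x + dx[d];
            if (ny >= 0 && ny < N && nx >= 0 && nx < N && !visited[ny][nx])
                search(ny, nx);
        }
    }

    visited[y][x] = false;        // backtrack
    cellsVisited--;
}

int main() {
    // Optimisation 1: constrain ONLY the very first move. Mark the start
    // square as visited and begin the recursion from the square directly
    // below it, i.e. force the first move to be "down".
    visited[0][0] = true;
    cellsVisited = 1;
    search(1, 0);

    // Every counted path starts with D; its mirror (swap D and R) starts
    // with R, so the total is exactly twice the count.
    std::cout << 2 * paths << '\n';
}
```

For N = 7 this prints 111712. The recursion itself only ever finds the paths whose first move is down; the doubling at the end accounts for all the mirror paths that start with a move to the right, which is exactly what the book's sentence means.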