Tags: performance, algorithm, search, optimization, magic-square

Puzzle Programming - Impossible to optimize?


I've been writing programs to solve various number puzzles, but I'm constantly designing unreasonably complex search algorithms that I can't optimize.

For example, in one puzzle you are given this 3x3 grid of the numbers 1 through 9:

123
456
789

You're allowed to cycle the numbers in any row or column in any direction; numbers pushed past the edge of the grid wrap around to the other side. Below is an example of shifting the top row to the right:

123 -> 312
456    456
789    789

You must move the numbers in this manner until you create a magic square in which the sum of the numbers in each column, row, and diagonal is 15.
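
A single move can be implemented like this (a quick Python sketch of the mechanic, with the grid stored as a flat list of 9 numbers in row-major order):

    def shift_row(grid, r, right=True):
        # Cycle row r by one step; the number at the edge wraps around.
        i = 3 * r
        a, b, c = grid[i:i+3]
        grid[i:i+3] = (c, a, b) if right else (b, c, a)

    def shift_col(grid, c, down=True):
        # Cycle column c by one step, wrapping at the edge.
        a, b, d = grid[c], grid[c+3], grid[c+6]
        grid[c], grid[c+3], grid[c+6] = (d, a, b) if down else (b, d, a)

    grid = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    shift_row(grid, 0)   # top row: 123 -> 312, as in the example above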

I've written a DFS brute-force algorithm to test all possible sequences of moves, but the number of move sequences grows exponentially with depth (there are 12 possible moves at each turn, so roughly 12^n sequences after n turns), rendering it useless.

It seems a BFS would be optimal for finding the shortest sequence of moves, but that would require me to store hundreds if not thousands of copies of the grid in order to reconstruct the path!


I run into these kinds of problems all the time: DFS takes too much time, and BFS takes too much memory. I need help optimizing algorithms like these so they run faster and more efficiently. Maybe recognizing patterns and relations among the numbers, or giving the algorithm logic to work toward a goal, would help? (I don't know what that would entail.)

EDIT:

My fixed algorithm works like a charm. Learning how to number my permutations was essential. Thank you all!


Solution

  • I'd suggest looking up memoization (caching the results of a function call based on its inputs so that the function is not recomputed for identical subsequent calls). Having understood memoization, I'd look up dynamic programming (still saving the results of the function, but also reordering computations to eliminate unnecessary calls). Some explanations of dynamic programming use a recursive definition of Fibonacci, then Fibonacci plus memoization, and finish with computation reordering, as sketched below.
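
    For instance (a minimal Python sketch of that three-stage progression):

        from functools import lru_cache

        def fib_naive(n):
            # Plain recursion: recomputes the same subproblems
            # exponentially many times.
            return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

        @lru_cache(maxsize=None)
        def fib_memo(n):
            # Memoization: identical calls are answered from a cache, so
            # each subproblem is computed only once.
            return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

        def fib_dp(n):
            # Dynamic programming: reorder the computation bottom-up,
            # eliminating the recursion (and nearly all of the storage).
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a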

    For DFS and BFS problems in general, the technique known as Branch and Bound might be of interest. The bounding part can give you substantial gains in some problems. Trimming a subtree one generation higher than with a less sophisticated bound eliminates many new branches in your search tree (alternate wording: since trees grow exponentially with depth, pruning your search early is important).
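
    To sketch how bounding might look on your puzzle (the bound below is my own illustration, not something from your question: each cycle move relocates exactly three numbers, so at least ceil(misplaced / 3) further moves are needed to reach a given target):

        def lower_bound(state, goal):
            # Optimistic estimate of the moves still required: every move
            # changes exactly three cells, so ceil(misplaced / 3) is a
            # valid lower bound.
            misplaced = sum(1 for a, b in zip(state, goal) if a != b)
            return (misplaced + 2) // 3

        def branch_and_bound(start, goal, moves, max_depth):
            # `moves` is assumed to be the twelve row/column cycles, each
            # a pure function taking a 9-tuple state to a successor state.
            best = max_depth + 1
            def dfs(state, depth):
                nonlocal best
                if state == goal:
                    best = depth
                    return
                # Bound: abandon any subtree that cannot beat `best`.
                if depth + lower_bound(state, goal) >= best:
                    return
                for move in moves:
                    dfs(move(state), depth + 1)
            dfs(start, 0)
            return best if best <= max_depth else None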

    For your particular problem, I believe that optimizations are possible.

    First, let's consider the DFS. I believe that all permutations of your board are reachable from any configuration of the board. As a consequence, DFS can be implemented without backtracking (though I'm guessing you knew that). Depth-only search? (EDIT: per Daniel Fischer, this is wrong. Only half of the states are reachable, though that doesn't affect the no-backtracking statement, since backtracking won't help you reach unreachable states.)
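
    In code, that correction gives you a cheap reachability test: every row/column cycle is a 3-cycle, which is an even permutation, so only boards that are an even rearrangement of the current one can ever be reached. A small sketch:

        def parity(perm):
            # 0 if the permutation (given as 0-based positions) is even,
            # 1 if odd, computed from its cycle structure.
            seen = [False] * len(perm)
            transpositions = 0
            for i in range(len(perm)):
                if not seen[i]:
                    length, j = 0, i
                    while not seen[j]:
                        seen[j] = True
                        j = perm[j]
                        length += 1
                    transpositions += length - 1
            return transpositions % 2

        def reachable(board, target):
            # The relative permutation taking `board` to `target` must be
            # even, since all twelve moves are even permutations.
            pos = {v: i for i, v in enumerate(board)}
            return parity(tuple(pos[v] for v in target)) == 0

    For example, a target differing from the current board by a single swap of two tiles is an odd permutation, which this test correctly reports as unreachable.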

    But, you might find that you don't want to move through many permutations simply to find that you haven't yet solved the problem. Backtracking might actually help. Or...

    Consider looking at your end goal. Magic squares have some particular properties that you might exploit to choose your operations more carefully. For example, since the rows and columns must sum to 15, you know 9, 8, and 7 cannot share a row or a column with each other. Neither can 9 and 6. 6 can go with 8 and 1 or 7 and 2. 6 cannot share a column/row with 5 and 4 even though they sum to 15 because of the pigeon-hole principle (each row/column contains either 9, 8, or 7). In fact, you might find that your problem has a unique solution, modulo some sort of cyclic permutation in all-rows, all-columns, reflection, and transposition. The diagonal requirement further constrains the valid solutions.
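
    You can check the uniqueness claim with a quick brute force (a throwaway Python enumeration; it finds exactly 8 magic squares, all rotations and reflections of one another):

        from itertools import permutations

        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

        def is_magic(g):
            return all(g[a] + g[b] + g[c] == 15 for a, b, c in LINES)

        magic = [p for p in permutations(range(1, 10)) if is_magic(p)]
        print(len(magic))   # 8: one square, up to rotation and reflection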

    Aside: the logic used in the previous paragraph is not unlike constraint-based programming. It's not really an optimization technique (though it might be considered an optimization of implementation time, if not run time), but it might be of interest to you as well (note, too, that magic squares and sudoku are frequently used to illustrate constraint-based programming).

    Now you have a goal:

    1. Describe a solution state.
    2. Reach one of the known solution states with the fewest moves.

    This is a fundamentally different approach from searching the various permutations until the problem is solved. I'd try to find a dynamic programming solution. For a slightly easier dynamic programming problem that moves from a start state to a goal state via incremental operations, take a look at the Levenshtein edit distance problem.
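
    To make the goal-directed approach concrete, here is one way it could look (an illustrative sketch, all names mine; treat it as a starting point, not a finished solution): a BFS from the start board toward any magic square, with each board ranked to an integer via its Lehmer code, so the visited set is a flat byte array rather than thousands of stored grids.

        from collections import deque

        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

        def is_magic(g):
            return all(g[a] + g[b] + g[c] == 15 for a, b, c in LINES)

        def successors(s):
            # The twelve legal moves: cycle each row and each column one
            # step in either direction.
            out = []
            for r in (0, 3, 6):                      # rows
                a, b, c = s[r:r + 3]
                for row in ((c, a, b), (b, c, a)):
                    t = list(s)
                    t[r:r + 3] = row
                    out.append(tuple(t))
            for c0 in (0, 1, 2):                     # columns
                a, b, c = s[c0], s[c0 + 3], s[c0 + 6]
                for col in ((c, a, b), (b, c, a)):
                    t = list(s)
                    t[c0], t[c0 + 3], t[c0 + 6] = col
                    out.append(tuple(t))
            return out

        def rank(perm):
            # Lehmer code: a unique integer in [0, 9!) per permutation,
            # which is what "numbering my permutations" buys you.
            r = 0
            for i, v in enumerate(perm):
                r = r * (9 - i) + sum(1 for w in perm[i + 1:] if w < v)
            return r

        def solve(start):
            start = tuple(start)
            seen = bytearray(362880)         # 9! flags, one per permutation
            seen[rank(start)] = 1
            queue = deque([(start, 0)])
            while queue:
                state, depth = queue.popleft()
                if is_magic(state):
                    return depth             # BFS: first hit is optimal
                for nxt in successors(state):
                    i = rank(nxt)
                    if not seen[i]:
                        seen[i] = 1
                        queue.append((nxt, depth + 1))
            return None                      # no magic square reachable

        print(solve([1, 2, 3, 4, 5, 6, 7, 8, 9]))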