Tags: haskell, recursion, memoization

Dynamic Programming (Haskell, Hofstadter M/F sequence)


This works:

f :: Int -> Int
f n = gof n where
      gof 0 = 1
      gof i = i - ms !! (fs !! (i-1))
      gom 0 = 0
      gom i = i - fs !! (ms !! (i-1))
      fs = [gof j | j <- [0..n]]
      ms = [gom j | j <- [0..n]]

m :: Int -> Int
m n = gom n where
      gof 0 = 1
      gof i = i - ms !! (fs !! (i-1))
      gom 0 = 0
      gom i = i - fs !! (ms !! (i-1))
      fs = [gof j | j <- [0..n]]
      ms = [gom j | j <- [0..n]]

However, it is really repetitive. Is there a way to avoid repeating those chunks of code? A few references. This is an adaptation of:

http://jelv.is/blog/Lazy-Dynamic-Programming/

Sequence reference:

https://en.wikipedia.org/wiki/Hofstadter_sequence

I checked it against the numbers at:

https://oeis.org/A005378
https://oeis.org/A005379

It generates the right numbers, and it is far faster than the basic code, which won't get very high at all before it starts having issues with recursion depth.
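For comparison, the "basic code" mentioned above is presumably the direct transliteration of the definition, something like the sketch below (my reconstruction, not code from the question). It recomputes every subproblem from scratch, which is why it bogs down so quickly:

```haskell
-- Naive mutual recursion, straight from the definition.
-- Every call recomputes its subproblems, so the running time
-- explodes as n grows.
fNaive, mNaive :: Int -> Int
fNaive 0 = 1
fNaive n = n - mNaive (fNaive (n - 1))
mNaive 0 = 0
mNaive n = n - fNaive (mNaive (n - 1))
```

For small inputs it agrees with the memoised version: `map fNaive [0..10]` is `[1,1,2,2,3,3,4,5,5,6,6]`, matching A005378.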


Solution

  • First of all, you can pattern-match in a top-level binding. Usually that doesn't mean anything interesting is going on, but if you want to share local helpers between two top-level bindings, it can help.

    m2 :: Int -> Int
    f2 :: Int -> Int
    (m2, f2) = (gom, gof)
      where
        gof 0 = 1
        gof i = i - ms !! (fs !! (i-1))
        gom 0 = 0
        gom i = i - fs !! (ms !! (i-1))
        fs = map gof [0..]
        ms = map gom [0..]
    

    You'll note there's one other trick in there. Instead of bounding the lists fs and ms to their maximum size, I just let laziness handle the bounding: the lists are never evaluated past the point needed to memoize earlier results.
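The same trick in isolation, as a toy example of my own (not part of the answer): a memo list with no upper bound, where indexing forces evaluation only up to the requested position:

```haskell
-- Memoised triangular numbers via an unbounded list.
-- ts is a shared top-level thunk, and (!!) only forces the
-- prefix up to the index being looked up.
tri :: Int -> Int
tri = (ts !!)
  where
    ts = map go [0 ..]
    go 0 = 0
    go i = i + ts !! (i - 1)
```

`map tri [0..5]` gives `[0,1,3,6,10,15]`; nothing beyond index 5 is ever computed for that call.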

    But list indexing is O(n), so getting rid of even some of it can be a significant speedup. If you look at the pattern of recursion within each function, you'll see that gom i always calls gom (i-1), and likewise for gof. You can use that to remove the list indexing on those lookups by simply passing the previous value along. Unfortunately, the same doesn't apply to the calls to the opposite function, which don't follow such an easy pattern, but it still removes a significant amount of work. And it can be done in a way that exploits laziness even further:

    m3, f3 :: Int -> Int
    (m3, f3) = ((ms !!), (fs !!))
      where
        (ms, fs) = unzip pairs
        pairs = (0, 1) : zipWith iter [1..] pairs
        iter i (mp, fp) = (i - fs !! mp, i - ms !! fp)
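As a standalone sanity check (the checks are my addition, against the OEIS entries cited in the question):

```haskell
-- Identical to the version above, repeated so this block
-- compiles on its own.
m3, f3 :: Int -> Int
(m3, f3) = ((ms !!), (fs !!))
  where
    (ms, fs) = unzip pairs
    -- Each pair is (M i, F i); the list refers back to itself, and
    -- laziness guarantees earlier entries are available before
    -- later ones are demanded.
    pairs = (0, 1) : zipWith iter [1 ..] pairs
    iter i (mp, fp) = (i - fs !! mp, i - ms !! fp)
```

Here `map m3 [0..10]` is `[0,0,1,2,2,3,4,4,5,6,6]` (A005379) and `map f3 [0..10]` is `[1,1,2,2,3,3,4,5,5,6,6]` (A005378).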
    

    The recursive helper functions have been replaced with the simultaneous lazy creation of both result lists. This pattern differs from standard recursion: instead of recursing down towards a base case, it starts from the base case and builds the results upward, so each step may only consume values that have already been produced. This pattern is known as co-recursion. (Or corecursion, if I'm typing lazily.) It's the same idea as recursion, but it produces the answer in the opposite direction.
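The classic small instance of the same corecursive pattern (a standard textbook example, not from this answer) is the Fibonacci stream, where each element is built from the already-produced prefix:

```haskell
-- Corecursion: the list is defined in terms of itself, and each
-- element depends only on elements that have already been produced.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

`take 10 fibs` yields `[0,1,1,2,3,5,8,13,21,34]`; the consumer decides how much of the stream to force.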