
Partially applied function type (a ->) as Functor instance in Haskell


I'm going through the "Programming in Haskell" book (second edition) and just stumbled across exercise 2, chapter 12, part 2:

instance Functor ((->) a) where
  fmap = TODO

where the answer is:

instance Functor ((->) a) where
  fmap = (.)

It got me scratching my head for a good while. It does make sense to me on an intuitive level (the partially applied function type (->) a is a functor whose fmap is composition), but I think some good examples would solidify my understanding of the exercise.

I came up with these two:

main :: IO ()
main = do
    print $ fmap (+1) (*2) (5 :: Int)
    print $ fmap show (+1) (3 :: Int)

Do my examples correctly illustrate the exercise?

fmap given two arguments:

  • partially applied function (the function)
  • another partially applied function (the functor)

UPDATE

fmap given two arguments:

  • function (the function)
  • another function (the functor)

just looks strange to me and I'm not sure I got the concept right.
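One way to make "a function is the functor" feel less strange: think of an r -> a as a container of a-results indexed by r-inputs, and fmap as post-processing every result. A small sketch of that reading (my own example, not from the book):

```haskell
-- The "functor" here is words :: String -> [String], a container of
-- results indexed by input strings. fmap post-processes each result:
-- fmap length words :: String -> Int counts the words of its input.
wordCount :: String -> Int
wordCount = fmap length words

main :: IO ()
main = print (wordCount "free functor instances")  -- prints 3
```

So the second argument to fmap really is "the functor", even though it is just a plain function.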

I see some similar questions on SO (e.g. this one), and this one is almost what I'm looking for, but not quite: I'm only after examples of functors - no applicatives and no monads.


Solution

  • There's really nothing more to it than that: for a functor f, the implementation of fmap (there is known to be at most one lawful implementation for any given f) has to have type (a -> b) -> f a -> f b, and has to satisfy the two functor laws:

    fmap id = id
    fmap (g . h) = fmap g . fmap h
    

    When f is the type constructor (->) r - that is, when f a means r -> a - then the needed type signature is:

    (a -> b) -> (r -> a) -> (r -> b)
    

    (the last pair of parentheses is unnecessary, but I've left it in because it makes the "pattern" easier to see), which is exactly the signature of the (.) operator.
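To see that the signatures really coincide, you can ask GHC to check it directly by giving (.) the fmap signature specialised to f = (->) r (the name fmapFn below is mine, used only to avoid clashing with the instance already in base):

```haskell
-- (.) at the fmap type for f = (->) r; GHC accepts this annotation,
-- confirming that composition has exactly the required signature.
fmapFn :: (a -> b) -> (r -> a) -> (r -> b)
fmapFn = (.)

main :: IO ()
main = print (fmapFn (+1) (*2) (5 :: Int))  -- (+1) . (*2) $ 5 = 11
```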

    As for the two laws, it's pretty obvious that they must hold when you write down what they're saying. I'll prove them by writing everything out in painstaking detail:

    fmap id = (.) id
            = \g -> id . g
            = \g -> (\a -> id (g a))
            = \g -> (\a -> g a)
            = \g -> g
            = id
    

    and

    fmap (g . h) = (.) (g . h)
                 = \g1 -> (g . h) . g1
                 = \g1 -> \a -> ((g . h) . g1) a
                 = \g1 -> \a -> g (h (g1 a))
    
    (fmap g) . (fmap h) = ((.) g) . ((.) h)
                        = \g1 -> ((.) g) (h . g1)
                        = \g1 -> g . h . g1
                        = \g1 -> \a -> g (h (g1 a))
    

    so those are also the same.
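The derivations above are the actual proof; purely as a sanity check, you can also spot-check both laws at sample points (my own throwaway examples):

```haskell
main :: IO ()
main = do
  -- First law, fmap id = id, checked at one point:
  print (fmap id (*2) (21 :: Int) == id (*2) 21)
  -- Second law, fmap (g . h) = fmap g . fmap h, with g = (*3), h = (+1):
  print (fmap ((*3) . (+1)) (*2) (5 :: Int)
           == (fmap (*3) . fmap (+1)) (*2) 5)
```

Both checks print True; of course a finite number of sample points proves nothing, which is why the equational reasoning above matters.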

    (Don't worry too much about that last derivation - such things can be hard to follow from one line to the next, even though here every step just unfolds the definition of composition. It really just expresses the obvious and well-known fact that function composition is associative. And in any case, it's a general result that, except I believe for certain pathological types, if the first functor law is satisfied then the second one is automatically satisfied as well.)

    The important thing is: when f is defined by f a = r -> a, the composition operator has the same type as fmap and satisfies both functor laws - therefore composition is a legal definition (and the only such definition) of fmap for a Functor instance for f. There's really nothing more to it than that, at least formally.
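Since base already ships this instance, you can't write it yourself without overlap; a self-contained way to see the definition in action is to wrap the function type in a newtype (the name Fn is mine, not the book's):

```haskell
-- A standalone replica of the instance, wrapped in a newtype so it
-- doesn't clash with the Functor ((->) r) instance already in base.
newtype Fn r a = Fn (r -> a)

instance Functor (Fn r) where
  fmap g (Fn h) = Fn (g . h)   -- exactly fmap = (.), modulo wrapping

runFn :: Fn r a -> r -> a
runFn (Fn h) = h

main :: IO ()
main = print (runFn (fmap (+1) (Fn (*2))) (5 :: Int))  -- prints 11
```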