Tags: haskell, haskell-lens, lenses

Type error when messing around with lenses


I am encountering a type error when trying to compile some lens code.

{-# LANGUAGE TemplateHaskell #-}

import Control.Lens.Setter (over)
import Control.Lens.Getter (view)
import Control.Lens.TH

data IL = IL {
  _ilL :: [Int]
}
  deriving (Show)

makeLenses ''IL

val = IL [1, 2, 3]

(val1, val2) = let l = ilL
       in  (over l tail val, view l val)

The error is:

Test.hs:17:35:
    Couldn't match expected type `Control.Lens.Internal.Getter.Accessor [Int] [Int]'
                with actual type `Control.Lens.Internal.Setter.Mutator [Int]'
    Expected type: Control.Lens.Getter.Getting [Int] s0 [Int]
      Actual type: ([Int]
                    -> Control.Lens.Internal.Setter.Mutator [Int])
                   -> IL -> Control.Lens.Internal.Setter.Mutator IL
    In the first argument of `view', namely `l'
    In the expression: view l val

This error goes away if I use ilL directly. However, I really need to make a lens definition using let. How can I solve this?


Solution

  • Yet another case of the dreaded monomorphism restriction (MR). Add {-# LANGUAGE NoMonomorphismRestriction #-} at the top of your file and it will compile just fine.
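
    For reference, a minimal sketch of the whole fixed file (the code from the question with only that pragma added), which should compile as-is with the lens package installed:

    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE NoMonomorphismRestriction #-}

    import Control.Lens.Setter (over)
    import Control.Lens.Getter (view)
    import Control.Lens.TH

    data IL = IL {
      _ilL :: [Int]
    }
      deriving (Show)

    makeLenses ''IL

    val = IL [1, 2, 3]

    -- With the MR disabled, l is generalized to a polymorphic type that
    -- works both as a setter (for over) and as a getter (for view).
    (val1, val2) = let l = ilL
                   in  (over l tail val, view l val)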

    The reason is that when you write let l = ilL in ... without an explicit type signature (and with the MR in effect), GHC restricts l to a single monomorphic type rather than keeping it as polymorphic as possible. It first encounters the use in over l tail val and specializes l to the setter type needed there, which then conflicts with the getter type that view l val requires (hence the Mutator vs. Accessor mismatch in the error). The solution is to disable the MR or to give l an explicit type signature, like

    (val1, val2) =
        let l :: Lens' IL [Int]
            l = ilL
        in  (over l tail val, view l val)
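
    Note that Lens' is not in scope with only the imports shown in the question: it is exported by Control.Lens.Type (and re-exported by the umbrella Control.Lens module), so the version with the explicit signature also needs an import along these lines:

    import Control.Lens.Type (Lens')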
    

    This is very similar to a simpler case:

    x = let y = 1
            z = 2 :: Int
            w = 3 :: Double
        in (z + y, w + y)
    

    What should the type of y be? With the MR in effect the compiler restricts y to a single monomorphic type, but we would really like it to have the type Num a => a, since that works with both Int and Double. With the MR turned off the compiler keeps y polymorphic and everything works as expected. The alternative is to give y an explicit type signature, as sketched below, but why do all that work when we can make the compiler do it for us?
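
    For completeness, a minimal sketch of that alternative: an explicit polymorphic signature exempts y from the MR, so this compiles even without the pragma.

    x :: (Int, Double)
    x = let y :: Num a => a  -- explicit signature keeps y polymorphic
            y = 1
            z = 2 :: Int
            w = 3 :: Double
        in (z + y, w + y)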