I've been reading about making our own types and type classes in Learn You a Haskell for Great Good. I don't quite understand the behaviour of the Haskell compiler when I add a type constraint to my data declarations.
For example, I have
{-# LANGUAGE DatatypeContexts #-}
data (Ord a) => OrderedValue a = OrderedValue a
getOrderedValue :: OrderedValue a -> a
getOrderedValue (OrderedValue a) = a
As can be seen above, I have a type constraint on my data declaration, stating that any value contained inside an OrderedValue must have an Ord instance.
I tried compiling this code, and the compiler spat out:
• No instance for (Ord a) arising from a use of ‘OrderedValue’
Possible fix:
add (Ord a) to the context of
the type signature for:
getOrderedValue :: forall a. OrderedValue a -> a
• In the pattern: OrderedValue a
In an equation for ‘getOrderedValue’:
getOrderedValue (OrderedValue a) = a
Changing the definition of getOrderedValue to
getOrderedValue :: (Ord a) => OrderedValue a -> a
getOrderedValue (OrderedValue a) = a
fixed the problem, as expected.
My question is: why is the compiler complaining here? I assumed the compiler should be able to infer that the a being pattern matched in
getOrderedValue (OrderedValue a) = a
has an Ord instance, because the OrderedValue value constructor can only construct values of type OrderedValue a where the type parameter a has an Ord instance.
Phew, that was a mouthful.
Thanks.
EDIT - I had a look at the alternate answers that @melpomene suggested, thanks for that. However, I'm looking for an answer that describes why the Haskell language designers chose to implement it this way.
Indeed, the compiler can infer it: try removing the function's type signature and see. For example, in ghci:
Prelude> :set -XDatatypeContexts
Prelude> data Ord a => OrderedValue a = OrderedValue a
Prelude> let getOrderedValue (OrderedValue a) = a
Prelude> :t getOrderedValue
getOrderedValue :: Ord t => OrderedValue t -> t
However, by explicitly stating a type for your function, you are disabling any inference. Type inference kicks in only when there is no explicit type annotation for a term. If you give one explicitly, that is the type the term will have no matter what, so it has to be correct.
So, the compiler here is type-checking your code, and finds that a type such as OrderedValue a -> a is wrong, because it ignores the Ord a constraint.
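To illustrate that an explicit signature overrides inference, here is a minimal sketch (idInt is a made-up name for this example):

```haskell
-- Without a signature, GHC would infer the fully general type 'a -> a'.
-- The explicit annotation below pins the narrower type instead.
idInt :: Int -> Int
idInt x = x

-- idInt 'c'  -- rejected: the signature is taken at face value, not inferred
```

The same happens with getOrderedValue: the annotation you wrote is taken as-is, and it is checked against the constraint the data type demands.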
However, note that DatatypeContexts, the extension that allows you to put the constraint on the data type, is deprecated. The consensus is that it is better to put constraints only on the functions that really need them. If you remove the Ord a constraint from the data type, getOrderedValue compiles fine, because there is no longer an Ord a constraint to obey. Notice that this matches more closely the intuition behind the original type you wrote, which is now correct.
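Concretely, a sketch of the constraint-free version (maxOrdered is a hypothetical extra function, added only to show where the Ord constraint belongs):

```haskell
-- No DatatypeContexts needed: the data type carries no constraint.
data OrderedValue a = OrderedValue a

-- Unwrapping requires no Ord instance at all.
getOrderedValue :: OrderedValue a -> a
getOrderedValue (OrderedValue a) = a

-- Only the functions that actually compare values ask for Ord.
maxOrdered :: Ord a => OrderedValue a -> OrderedValue a -> a
maxOrdered (OrderedValue x) (OrderedValue y) = max x y
```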
As an aside, note that the type the compiler infers automatically is the most general one that makes the body of the function type-check, but you can explicitly give a less general type.
For example, the following function:
mysum = foldr (+) 0
has type (Num b, Foldable t) => t b -> b, but you can write:
mysum :: [Integer] -> Integer
mysum = foldr (+) 0
because that's a correct type for the function, albeit a more specific one. Of course, once the specific type is assigned to mysum, you cannot call it on another Foldable instance or another Num type: you lose the generality when you specialize the type.
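To make the trade-off concrete, here is a sketch (mysumGeneral is a made-up name for the version with the inferred, most general type):

```haskell
-- The most general type GHC would infer for 'foldr (+) 0'.
mysumGeneral :: (Foldable t, Num b) => t b -> b
mysumGeneral = foldr (+) 0

-- The same body with a hand-picked, more specific type.
mysum :: [Integer] -> Integer
mysum = foldr (+) 0

-- mysumGeneral (Just 3)    -- ok: Maybe is Foldable
-- mysumGeneral [1.5, 2.5]  -- ok: Double is a Num
-- mysum (Just 3)           -- type error: expects a list
-- mysum [1.5, 2.5]         -- type error: expects Integer elements
```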