I've got the following Haskell function:
apply_to_f :: (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
apply_to_f f = \ g -> \ h -> h (g f)
(This comes from the 'pred' function for Church numerals).
There's a constraint that I want to tell the compiler about: it is possible to compose (apply_to_f f) with itself. In other words, (apply_to_f f) takes a function of type (a -> a) -> a and also returns a function of type (a -> a) -> a.
So is it possible to tell the compiler that it needs to unify the types (a -> a) -> b and (b -> c) -> c in the type of apply_to_f, or do I have to figure that out myself if I want to write down the precise type of the function?
My initial guess was that I might be able to write a vague type for apply_to_f, using a type variable to indicate this constraint, but that doesn't seem to work.
There are a few possibilities. First, you can express the extra constraint as a type equality:
apply_to_f :: ((a -> a) -> b) ~ ((b -> c) -> c)
           => (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
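For reference, the ~ equality constraint is only available with an extension such as TypeFamilies (or GADTs) enabled. A minimal module that compiles this signature, with a self-composition added to show the constraint doing its job (selfComposed is my own name, not from the question):

```haskell
{-# LANGUAGE TypeFamilies #-}  -- enables the ~ equality constraint

apply_to_f :: ((a -> a) -> b) ~ ((b -> c) -> c)
           => (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
apply_to_f f = \ g -> \ h -> h (g f)

-- The constraint forces b ~ a and c ~ a, so self-composition is accepted:
selfComposed :: (a -> a) -> ((a -> a) -> a) -> (a -> a) -> a
selfComposed f = apply_to_f f . apply_to_f f
```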
Note that the resulting type signature is not precisely equivalent to:
apply_to_f :: (a -> a) -> ((a -> a) -> a) -> ((a -> a) -> a)
In particular, GHCi will report different types for these two signatures, and the error reported if apply_to_f is used at the "wrong type" will be different. For example, the following code type checks:
apply_to_f :: (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
apply_to_f f = \ g -> \ h -> h (g f)
foo = apply_to_f id id
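It type checks because GHC solves b to a -> a while c stays free. Spelling that out, here is the same code with the generalised type I would expect GHC to infer for foo written explicitly (the foo signature is my reconstruction, not from the original):

```haskell
apply_to_f :: (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
apply_to_f f = \ g -> \ h -> h (g f)

-- The second id instantiates b to a -> a; c remains free, so foo is
-- still polymorphic in the continuation it takes:
foo :: ((a -> a) -> c) -> c
foo = apply_to_f id id
```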
If you restrict the type of apply_to_f with a direct type signature:
apply_to_f :: (a -> a) -> ((a -> a) -> a) -> ((a -> a) -> a)
the resulting type error will be associated with the second id in the definition of foo and will be accompanied by a clear explanation of the expected versus actual types for this second id:
Unify.hs:13:21-22: error:
• Couldn't match type ‘a’ with ‘a -> a’
Expected: (a -> a) -> a
Actual: (a -> a) -> a -> a
‘a’ is a rigid type variable bound by
the inferred type of foo :: (a -> a) -> a
at Unify.hs:13:1-22
• In the second argument of ‘apply_to_f’, namely ‘id’
In the expression: apply_to_f id id
In an equation for ‘foo’: foo = apply_to_f id id
• Relevant bindings include
foo :: (a -> a) -> a (bound at Unify.hs:13:1)
|
13 | foo = apply_to_f id id
| ^^
while if you use a constraint instead:
apply_to_f :: ((a -> a) -> b) ~ ((b -> c) -> c)
           => (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
then the type error will be associated with the use of apply_to_f in the definition of foo and will be attributed more vaguely to "some problem in foo's inferred type":
Unify.hs:12:7-16: error:
• Couldn't match type ‘a’ with ‘a -> a’
arising from a use of ‘apply_to_f’
‘a’ is a rigid type variable bound by
the inferred type of foo :: ((a -> a) -> a -> a) -> a -> a
at Unify.hs:12:1-22
• In the expression: apply_to_f id id
In an equation for ‘foo’: foo = apply_to_f id id
• Relevant bindings include
foo :: ((a -> a) -> a -> a) -> a -> a (bound at Unify.hs:12:1)
|
12 | foo = apply_to_f id id
| ^^^^^^^^^^
Nonetheless, I think the differences between these two signatures are probably cosmetic in most cases, and they should behave about the same for practical purposes.
Second, you can leverage the type checker to calculate the desired type for apply_to_f, by forcing it to type check expressions that require the constraint. For example, the definition:
apply_to_f = general_apply_to_f
  where general_apply_to_f :: (a -> a) -> ((a -> a) -> b) -> ((b -> c) -> c)
        general_apply_to_f f = \ g -> \ h -> h (g f)
        _ = apply_to_f undefined . apply_to_f undefined
will result in the type of apply_to_f being correctly inferred as:
λ> :t apply_to_f
apply_to_f :: (a -> a) -> ((a -> a) -> a) -> (a -> a) -> a
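To close the loop with the question's Church-numeral motivation: with this inferred monomorphic type, apply_to_f f has exactly the shape a Church numeral can iterate, which is what the predecessor does. A sketch under my own naming (Church, predC, toInt are mine; RankNTypes is needed for the numeral type):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies its first argument n times to its second.
type Church = forall a. (a -> a) -> a -> a

apply_to_f :: (a -> a) -> ((a -> a) -> a) -> (a -> a) -> a
apply_to_f f g h = h (g f)

-- Predecessor: iterate apply_to_f f with n, starting from a function
-- that ignores its argument and returns x, then discard the outermost
-- pending application of f by passing id.
predC :: Church -> Church
predC n f x = n (apply_to_f f) (const x) id

-- Convert a numeral to an Int, for checking.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

For example, toInt (predC (\f x -> f (f (f x)))) evaluates to 2.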