Is there a way to model exception handling in lambda calculus?
I'm asking this because it is very common to see multiple ways of handling exception states in procedural languages and derivative paradigms. Even in C you can emulate this behaviour quite simply using setjmp.h, errno.h and signal.h.
I can imagine, in a Turing machine's state graph, a sort of "exception" node that can be reached from any other node, and I guess procedural languages do this sort of thing to implement it.
But in Haskell (my newest obsession) and functional programming in general, you don't see all the fuss about exceptions. I know they exist (Control.Exception?) and I know that functional programming uses monads to model side effects, but I confess I don't quite understand what they are and how they work. I also saw in another discussion that all functional languages are kind of syntactic sugar over lambda calculus, so how would such a thing work?
First note: If you're starting out in Haskell, stay away from Control.Exception. That module emulates traditional Java-style exception handling inside the IO monad. It's a good thing to learn about eventually, but it's not the conventional way of dealing with (controllable) errors within purely-functional Haskell code.
As you've already noted, the usual way to handle exceptional situations in a functional language like Haskell is with monads. So let's talk about monads, but instead of talking in the abstract, let's talk about one specific one. This type exists in the standard library (it's called Either), but I'm calling it by another, more evocative name.
data Result e a = Err e | Ok a
The way we're going to use this type is simple. Anytime we have a function that could "throw" an exception, that function returns a Result e a, where a is the result type if everything goes well and e is the type of exception we throw. It's sort of like checked exceptions in Java but on a much more granular scale.
As a perhaps stereotypical example, consider a function which divides two numbers but errs out in Result if we attempt to divide by zero. Its signature would be something like this.
-- Singleton error type
data DivideByZero = DivideByZero
divide :: Double -> Double -> Result DivideByZero Double
divide _ 0 = Err DivideByZero
divide x y = Ok (x / y) -- Built-in Haskell division
So if we ever find ourselves in a situation where we have a Result e a, that's a value of type a, with the major caveat that we might have already failed and the Result object "contains" an exception of type e.
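For concreteness, here's how a couple of calls to divide evaluate (a small sketch using only the definitions above; the names good and bad are just for illustration, and the expected values are shown in comments).

-- Dividing by a nonzero number succeeds
good :: Result DivideByZero Double
good = divide 10 4    -- Ok 2.5

-- Dividing by zero "throws" by returning the error constructor
bad :: Result DivideByZero Double
bad = divide 10 0     -- Err DivideByZero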
Now let's see what operations a Monad instance would give us. The hierarchy for monads starts at Functor, then moves down to Applicative, then finally Monad. Functor gives us
fmap :: (a -> b) -> Result e a -> Result e b
We can read this as "If I have a way to go from a to b (that cannot fail), then I can always take a (potentially-failed) a and produce a (potentially-failed) b". This makes sense, and we can write it easily.
fmap f (Ok a) = Ok (f a)
fmap _ (Err e) = Err e
We simply preserve the error state. If the computation already erred, we keep the error. If not, we apply our (perfectly safe, pure) function f.
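To see this in action with divide from earlier (a small sketch; it assumes the fmap above is installed as the Functor instance for Result e, which is spelled out further down):

-- Mapping over a success transforms the wrapped value
doubled :: Result DivideByZero Double
doubled = fmap (* 2) (divide 10 4)      -- Ok 5.0

-- Mapping over a failure leaves the error untouched
stillFailed :: Result DivideByZero Double
stillFailed = fmap (* 2) (divide 10 0)  -- Err DivideByZero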
Now a natural question arises: What if I have a function of two arguments f :: a -> b -> c, but I have x :: Result e a and y :: Result e b? We can try to apply the function one argument at a time, but once we apply our first argument (via fmap), we have a problem.
fmap f x :: Result e (b -> c)
y :: Result e b
Now I want to apply a function which might have failed to a value which might have failed. Put another way, I have two potentially-failed objects and I want to combine them (in this case, using function application). That's where Applicative comes in.
(<*>) :: Result e (a -> b) -> Result e a -> Result e b
The Applicative operator (<*>) is just fmap with a function that's already inside of our applicative (in this case, a function that might secretly be an exception).
Here's how it's implemented.
Err e <*> _ = Err e
Ok _ <*> Err e = Err e
Ok f <*> Ok x = Ok (f x)
Now we can apply our example function as
fmap f x <*> y :: Result e c
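As a concrete sketch (reusing divide and assuming the instance definitions shown in this answer), here's a hypothetical function that adds two quotients and fails if either division fails:

-- Combine two potentially-failing divisions with (+)
sumOfQuotients :: Double -> Double -> Double -> Double -> Result DivideByZero Double
sumOfQuotients a b c d = fmap (+) (divide a b) <*> divide c d
-- sumOfQuotients 10 4 9 3  evaluates to  Ok 5.5
-- sumOfQuotients 10 0 9 3  evaluates to  Err DivideByZero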
Additionally, Applicative gives us pure :: a -> Result e a, which is just implemented as pure = Ok for our type. This allows us to treat a pure value as a potentially-failing value, essentially "forgetting" the fact that it can't fail. It comes in handy when you're trying to make type signatures line up, such as if you have a function that's expecting a potentially-failing computation but all you have is a pure one.
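For instance (a tiny sketch; safeHalf is a made-up name, and it assumes the Applicative instance for Result e):

-- This computation can't actually fail, but pure wraps it so it fits
-- anywhere a Result DivideByZero Double is expected.
safeHalf :: Double -> Result DivideByZero Double
safeHalf x = pure (x / 2)    -- same as Ok (x / 2)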
Finally, we get to Monad. Monad only requires one function to be implemented, and that's the bind operator (>>=).
(>>=) :: Result e a -> (a -> Result e b) -> Result e b
Okay, that looks pretty different. Let's flip the arguments.
fmap :: (a -> b) -> Result e a -> Result e b
(<*>) :: Result e (a -> b) -> Result e a -> Result e b
flip (>>=) :: (a -> Result e b) -> Result e a -> Result e b
We've just moved the Result part of the function over a bit. Now, not only can our function have failed, but our function can choose whether or not it fails based on the input a value from earlier in our computation.
This is amazingly powerful, because it allows us to implement the most common thing we do with exceptions in an imperative language: propagate them.
First, let's see how it's implemented.
Err e >>= _ = Err e
Ok x >>= f = f x
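(As an aside, to make all of these definitions compile as real Haskell, they need to be grouped into instance declarations. Here's a minimal sketch collecting everything above:)

-- Functor: map over the success case, preserve the error
instance Functor (Result e) where
  fmap _ (Err e) = Err e
  fmap f (Ok a)  = Ok (f a)

-- Applicative: combine two potentially-failed values
instance Applicative (Result e) where
  pure = Ok
  Err e <*> _     = Err e
  Ok _  <*> Err e = Err e
  Ok f  <*> Ok x  = Ok (f x)

-- Monad: sequence computations, short-circuiting on the first error
instance Monad (Result e) where
  Err e >>= _ = Err e
  Ok x  >>= f = f x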
Now, anytime we call a function and want to simply let the exception propagate to our own caller, we use >>= on the result of the function.
myComplicatedComputation :: Int -> Int -> Int -> Result ExceptionType Bool
myComplicatedComputation a b c =
  someMath a b >>= (\tmp ->
    someMoreMath tmp c >>= (\result ->
      pure (result > 0)))
In Java, we would write this as
public static boolean myComplicatedComputation(int a, int b, int c) throws ExceptionType {
    int tmp = someMath(a, b);
    int result = someMoreMath(tmp, c);
    return (result > 0);
}
It sure would be nice if we could write our Haskell code in a neat way like this that lets us almost forget we're dealing with effects at all. Let me indent this a little differently and remove some unnecessary parentheses.
myComplicatedComputation :: Int -> Int -> Int -> Result ExceptionType Bool
myComplicatedComputation a b c =
  someMath a b >>= \tmp ->
  someMoreMath tmp c >>= \result ->
  pure (result > 0)
Enter do notation. It's very important to keep in mind that do notation is pure syntactic sugar. At no point does it add anything new to the language. It's implemented entirely in terms of (>>=) and the other Monad operations. This is exactly equivalent to the above.
myComplicatedComputation :: Int -> Int -> Int -> Result ExceptionType Bool
myComplicatedComputation a b c = do
  tmp <- someMath a b
  result <- someMoreMath tmp c
  pure $ result > 0
So it's all implemented in terms of functions. Rather than "applying" a function, we use this funny sort of function-application-like operator called "bind" and written (>>=). As the old saying goes, it's functions all the way down.
It's worth noting that this is exactly what Rust does as well. Rust takes a monadic approach to error handling with its own Result<T, E> type. fmap is .map, pure is Ok, and (>>=) is .and_then. (There's no direct equivalent to (<*>), but it can be implemented in terms of .and_then easily.) And the ? operator in Rust, which basically means "propagate errors to my caller", is really just a shortcut to short-circuit out if you have an error. It's Rust's answer to Haskell's do notation.