All the monad articles often state that monads allow you to sequence effects in order.
But what about simple composition? Doesn't

f x = x + 1
g x = x * 2
result = f (g x)

require g x to be computed before f ...?
Do monads do the same but with handling of effects?
Disclaimer: Monads are a lot of things. They are notoriously difficult to explain, so I will not attempt to explain what monads are in general here, since the question does not ask for that. I will assume you have a basic grasp of what the Monad interface is, as well as how it works for some useful datatypes, like Maybe, Either, and IO.
Your question begins with a note:
All the monad articles often state that monads allow you to sequence effects in order.
Hmm. This is interesting. In fact, it is interesting for a couple reasons, one of which you have identified: it implies that monads let you create some sort of sequencing. That’s true, but it’s only part of the picture: it also states that sequencing happens on effects.
Here’s the thing, though… what is an “effect”? Is adding two numbers together an effect? Under most definitions, the answer would be no. What about printing something to stdout, is that an effect? In that case, I think most people would agree that the answer is yes. However, consider something more subtle: is short-circuiting a computation by producing Nothing an effect?
Let’s take a look at an example. Consider the following code:
> do x <- Just 1
     y <- Nothing
     return (x + y)
Nothing
The second line of that example “short-circuits” due to the Monad instance for Maybe. Could that be considered an effect? In some sense, I think so, since it’s non-local, but in another sense, probably not. After all, if the x <- Just 1 or y <- Nothing lines are swapped, the result is still the same, so the ordering doesn’t matter.
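To make that concrete, swapping the two bindings still gives Nothing:

> do y <- Nothing
     x <- Just 1
     return (x + y)
Nothing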
However, consider a slightly more complex example that uses Either instead of Maybe:
> do x <- Left "x failed"
     y <- Left "y failed"
     return (x + y)
Left "x failed"
Now this is more interesting. If you swap the first two lines now, you get a different result! Still, is this a representation of an “effect” like the ones you allude to in your question? After all, it’s just a bunch of function calls. As you know, do notation is just an alternative syntax for a bunch of uses of the >>= operator, so we can expand it out:
> Left "x failed" >>= \x ->
Left "y failed" >>= \y ->
return (x + y)
Left "x failed"
We can even replace the >>= operator with the Either-specific definition to get rid of monads entirely:
> case Left "x failed" of
    Right x -> case Left "y failed" of
      Right y -> Right (x + y)
      Left e -> Left e
    Left e -> Left e
Left "x failed"
Therefore, it’s clear that monads do impose some sort of sequencing, but this is not because they are monads and monads are magic; it’s just because they happen to enable a style of programming that looks more impure than Haskell typically allows.
But perhaps that is unsatisfying to you. Error handling is not compelling because it is simply short-circuiting: it doesn’t actually have any sequencing in the result! Well, if we reach for some slightly more complex types, we can do that. For example, consider the Writer type, which allows a sort of “logging” using the monadic interface:
> execWriter $ do
    tell "hello"
    tell " "
    tell "world"
"hello world"
This is even more interesting than before, since now the result of each computation in the do block is unused, but it still affects the output! This is clearly side-effectful, and order is clearly very important! If we reorder the tell expressions, we would get a very different result:
> execWriter $ do
    tell " "
    tell "world"
    tell "hello"
" worldhello"
But how is this possible? Well, again, we can rewrite it to avoid do notation:
execWriter (
  tell "hello" >>= \_ ->
  tell " " >>= \_ ->
  tell "world")
We could inline the definition of >>= again for Writer, but it’s too long to be terribly illustrative here. The point is, though, that Writer is just a completely ordinary Haskell datatype that doesn’t do any I/O or anything like that, and yet we have used the monadic interface to create something that looks like ordered effects.
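If you are curious anyway, here is a minimal sketch of a Writer-like type specialized to String. The names Logger, tellLog, and execLogger are invented for this illustration; the real Writer in the transformers package is more general, but the idea is the same.

-- A made-up, String-only stand-in for Writer: a value paired with a log.
newtype Logger a = Logger (String, a)

instance Functor Logger where
  fmap f (Logger (w, x)) = Logger (w, f x)

instance Applicative Logger where
  pure x = Logger ("", x)
  Logger (w1, f) <*> Logger (w2, x) = Logger (w1 ++ w2, f x)

instance Monad Logger where
  Logger (w1, x) >>= f =
    let Logger (w2, y) = f x   -- run the continuation on the value...
    in Logger (w1 ++ w2, y)    -- ...and append its log to ours

tellLog :: String -> Logger ()
tellLog w = Logger (w, ())

execLogger :: Logger a -> String
execLogger (Logger (w, _)) = w

With these definitions, execLogger (tellLog "hello" >>= \_ -> tellLog " " >>= \_ -> tellLog "world") evaluates to "hello world": the ordering of the log comes purely from how >>= concatenates strings.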
We can go even further by creating an interface that looks like mutable state using the State type:
> flip execState 0 $ do
    modify (+ 3)
    modify (* 2)
6
Once again, if we reorder the expressions, we get a different result:
> flip execState 0 $ do
    modify (* 2)
    modify (+ 3)
3
Clearly, monads are a useful tool for creating interfaces that look stateful and have a well-defined ordering despite actually just being ordinary function calls.
What gives monads this power? Well, they’re not magic—they’re just ordinary pure Haskell code. But consider the type signature for >>=:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Notice how the second argument depends on a, and the only way to get an a is from the first argument? This means that >>= needs to “run” the first argument to produce a value before it can apply the second argument. This doesn’t have to do with evaluation order so much as it has to do with actually writing code that will typecheck.
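As a tiny illustration using Maybe (greeting is a made-up name), the continuation handed to >>= can only be written in terms of the value that the first argument produces:

-- The lambda's body mentions name, and name only exists because the
-- first argument of >>= produced it.
greeting :: Maybe String
greeting = Just "world" >>= \name -> Just ("hello, " ++ name)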
Now, it’s true that Haskell is a lazy language. But Haskell’s laziness doesn’t really matter for any of this because all of this code is actually pure, even the example using State! It’s simply a pattern that encodes computations that look sort of stateful in a pure way, but if you actually implemented State yourself, you’d find that it just passes around the “current state” in the definition of the >>= function. There’s not any actual mutation.
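To see what passing around the “current state” looks like, here is a minimal sketch of a State-like type. The names SimpleState, modifyS, and execSimpleState are invented for this illustration; the real State in transformers is defined via StateT, but the idea is the same.

-- A made-up stand-in for State: a function from an incoming state to a
-- result paired with the outgoing state. No mutation anywhere.
newtype SimpleState s a = SimpleState { runSimpleState :: s -> (a, s) }

instance Functor (SimpleState s) where
  fmap f (SimpleState g) = SimpleState $ \s ->
    let (x, s') = g s in (f x, s')

instance Applicative (SimpleState s) where
  pure x = SimpleState $ \s -> (x, s)
  SimpleState mf <*> SimpleState mx = SimpleState $ \s ->
    let (f, s')  = mf s
        (x, s'') = mx s'
    in (f x, s'')

instance Monad (SimpleState s) where
  SimpleState m >>= f = SimpleState $ \s ->
    let (x, s') = m s              -- run the first computation on the incoming state...
    in runSimpleState (f x) s'     -- ...then hand its result and the new state onward

modifyS :: (s -> s) -> SimpleState s ()
modifyS f = SimpleState $ \s -> ((), f s)

execSimpleState :: SimpleState s a -> s -> s
execSimpleState m s = snd (runSimpleState m s)

With these definitions, execSimpleState (modifyS (+ 3) >>= \_ -> modifyS (* 2)) 0 evaluates to 6, just like the execState example above; the “state” is only ever an argument threaded through ordinary function calls.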
And that’s it. Monads, by virtue of their interface, impose an ordering on how their arguments may be evaluated, and instances of Monad exploit that to make stateful-looking interfaces. You don’t need Monad to have evaluation ordering, though, as you found; obviously in (1 + 2) * 3 the addition will be evaluated before the multiplication.
IO??

Okay, you got me. Here’s the problem: IO is magic.
Monads are not magic, but IO is. All of the above examples are purely functional, but obviously reading a file or writing to stdout is not pure. So how the heck does IO work?
Well, IO is implemented by the GHC runtime, and you could not write it yourself. However, in order to make it work nicely with the rest of Haskell, there needs to be a well-defined evaluation order! Otherwise things would get printed out in the wrong order and all sorts of other hell would break loose.
Well, it turns out the Monad interface is a great way to ensure that evaluation order is predictable, since it works for pure code already. So IO leverages the same interface to guarantee a predictable evaluation order, and the runtime actually defines what that evaluation means.
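As a minimal sketch of what that buys you in practice, the data dependency in >>= is exactly what forces the read below to happen before the write:

-- getLine must produce a String before putStrLn can build its output,
-- so the runtime performs the read first.
main :: IO ()
main =
  getLine >>= \name ->
  putStrLn ("Hello, " ++ name)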
However, don’t be misled! You don’t need monads to do I/O in a pure language, and you don’t need IO to have monadic effects. Early versions of Haskell experimented with a non-monadic way to do I/O, and the other parts of this answer explain how you can have pure monadic effects. Remember that monads are not special or holy; they’re just a pattern that Haskell programmers have found useful because of its various properties.