Most data processing can be envisioned as a pipeline of components, the output of one feeding into the input of another. A typical processing pipeline is:
reader | handler | writer
As a foil for starting this discussion, let's consider an object-oriented implementation of this pipeline where each segment is an object. The handler object contains references to both the reader and writer objects and has a run method which looks like:
define handler.run:
    while (reader.has_next) {
        data = reader.next
        output = ...some function of data...
        writer.put(output)
    }
Schematically the dependencies are:
reader <- handler -> writer
Now suppose I want to interpose a new pipeline segment between the reader and the handler:
reader | tweaker | handler | writer
Again, in this OO implementation, tweaker would be a wrapper around the reader object, and the tweaker methods might look something like this (in some pseudo-imperative code):
define tweaker.has_next:
    return reader.has_next

define tweaker.next:
    value = reader.next
    result = ...some function of value...
    return result
I'm finding that this is not a very composable abstraction. Some issues are:

- tweaker can only be used on the left-hand side of handler, i.e. I can't use the above implementation of tweaker to form this pipeline (a concrete sketch of this asymmetry follows the list):

  reader | handler | tweaker | writer
- I'd like to exploit the associative property of pipelines, so that this pipeline:

  reader | handler | writer

  could be expressed as:

  reader | p

  where p is the pipeline handler | writer. In this OO implementation I would have to partially instantiate the handler object, supplying its writer up front while leaving its reader unbound.
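To make the first issue concrete, here is a sketch (in the same pseudo-imperative style; the push-style put interface is my illustration, not part of the design above) of what a right-hand-side tweaker would have to look like. It must wrap the writer rather than the reader, duplicating the transformation in push form:

define tweaker.put(value):
    result = ...some function of value...
    writer.put(result)

So the same conceptual transformation needs two different implementations depending on which side of handler it sits on.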
I'm looking for a framework (not necessarily OO) for creating data processing pipelines which addresses these issues.
I've tagged this with Haskell and functional programming because I feel functional programming concepts might be useful here.
As a goal, it would be nice to be able to create a pipeline like this:
                    handler1
                   /        \
reader | partition          writer
                   \        /
                    handler2
For some perspective, Unix shell pipes solve a lot of these problems with the following implementation decisions:

- Pipeline components run asynchronously in separate processes.

- Pipe objects mediate passing data between "pushers" and "pullers"; i.e., they block writers which write data too fast and readers who try to read too fast.

- You use the special connectors < and > to attach passive components (i.e. files) to the pipeline.
I am especially interested in approaches which do not use threading or message-passing among agents. Maybe that's the best way to do this, but I'd like to avoid threading if possible.
Thanks!
Yeah, arrows are almost surely your man.
I suspect that you are fairly new to Haskell, just based on the kinds of things you are saying in your question. Arrows will probably seem fairly abstract, especially if what you are looking for is a "framework". I know it took me a while to really grok what was going on with arrows.
So you may read about arrows, say "yes, that looks like what I want", and then find yourself rather lost as to how to begin to use them to solve the problem. So here is a little bit of guidance so you know what you are looking at.
Arrows will not solve your problem. Instead, they give you a language in which you can phrase your problem. You may find that some predefined arrow will do the job -- some Kleisli arrow, maybe -- but at the end of the day you are going to want to implement an arrow (the predefined ones just give you easy ways to implement them) which expresses what you mean by a "data processor". As an almost trivial example, let's say you want to implement your data processors by simple functions. You would write:
import Prelude hiding (id, (.))  -- Category supplies its own id and (.)
import Control.Category
import Control.Arrow

newtype Proc a b = Proc { unProc :: a -> b }

-- Arrow is a subclass of Category, so the Category instance comes first.
instance Category Proc where
    id = Proc (\x -> x)
    Proc f . Proc g = Proc (\x -> f (g x))

instance Arrow Proc where
    arr f = Proc f
    first (Proc f) = Proc (\(x,y) -> (f x, y))
This gives you the machinery to use the various arrow combinators (***), (&&&), (>>>), etc., as well as the arrow notation, which is rather nice if you are doing complex things. So, as Daniel Fischer points out in the comment, the pipeline you described in your question could be composed as:

reader >>> partition >>> (handler1 *** handler2) >>> writer
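To make that concrete, here is a toy sketch that fills in each stage with a hypothetical function-based processor -- the stage names (primed to avoid clashes) and their behaviors are invented for illustration:

-- Hypothetical stages, just to show the composed pipeline is an ordinary value:
reader' :: Proc String [Int]
reader' = arr (map read . words)   -- parse whitespace-separated integers

partition :: Proc [Int] ([Int], [Int])
partition = arr (\xs -> (filter even xs, filter odd xs))

handler1, handler2 :: Proc [Int] Int
handler1 = arr sum
handler2 = arr product

writer' :: Proc (Int, Int) String
writer' = arr show

pipeline :: Proc String String
pipeline = reader' >>> partition >>> (handler1 *** handler2) >>> writer'

-- unProc pipeline "1 2 3 4"  ==  "(6,3)"

Note that (***) comes for free from the arr and first definitions above, and (>>>) is associative, so reader' >>> (partition >>> (handler1 *** handler2) >>> writer') denotes the same pipeline -- exactly the regrouping property the question asks for.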
But the cool thing is that it is up to you what you mean by a processor. It is possible to implement what you mentioned about each processor forking a thread in a similar way, using a different processor type:

newtype Proc' a b = Proc' (Source a -> Sink b -> IO ())

And then implementing the combinators appropriately.
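As one hedged sketch of that direction -- the Source and Sink types here are my assumptions, not a standard API -- composition can fork the upstream stage into its own thread and connect the two stages with a channel, on which the downstream stage blocks until data is available (a bounded channel would also throttle fast writers, as Unix pipes do):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

type Source a = IO a        -- pull the next item from upstream
type Sink b   = b -> IO ()  -- push an item downstream

newtype Proc' a b = Proc' (Source a -> Sink b -> IO ())

-- Sequential composition: the upstream stage runs in its own thread,
-- feeding a channel from which the downstream stage pulls.
compose' :: Proc' a b -> Proc' b c -> Proc' a c
compose' (Proc' f) (Proc' g) = Proc' $ \src snk -> do
    ch <- newChan
    _ <- forkIO (f src (writeChan ch))
    g (readChan ch) snk

(This is the threaded flavor; the pure Proc above shows you can equally pick a representation with no threading at all.)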
So that is what you are looking at: a vocabulary for talking about composing processes, which has a little bit of code to reuse, but primarily will help guide your thinking as you implement these combinators for the definition of processor that is useful in your domain.
One of my first nontrivial Haskell projects was to implement an arrow for quantum entanglement; that project was the one that caused me to really start to understand the Haskell way of thinking, a major turning point in my programming career. Maybe this project of yours will do the same for you? :-)