Tags: haskell, programming-languages, interpreter, lambda-calculus

How to model the output of the binary lambda calculus?


I am trying to write an interpreter for John Tromp's binary lambda calculus (BLC).

I have written code to do the following:

  1. Parse the binary input into some data structure representing the regular untyped lambda calculus
  2. Beta-reduce this term (a sketch of both steps follows below)
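
Concretely, here is a minimal sketch of what I mean by those two steps, assuming the input arrives as a `[Bool]` (False = 0, True = 1) and using de Bruijn indices. `Term`, `parse`, `nf`, and friends are names of my own choosing, not anything BLC prescribes:

```haskell
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Step 1: 00 introduces an abstraction, 01 an application, and
-- n+1 ones followed by a zero a variable with de Bruijn index n.
-- The unconsumed remainder of the input is returned alongside the term.
parse :: [Bool] -> Maybe (Term, [Bool])
parse (False : False : bs) = do                -- 00: abstraction
  (body, rest) <- parse bs
  pure (Lam body, rest)
parse (False : True : bs) = do                 -- 01: application
  (f, bs')  <- parse bs
  (a, rest) <- parse bs'
  pure (App f a, rest)
parse (True : bs) = go 0 bs                    -- 1..10: variable
  where
    go n (True  : bs') = go (n + 1) bs'
    go n (False : bs') = Just (Var n, bs')
    go _ []            = Nothing
parse _ = Nothing                              -- truncated input

-- Step 2: leftmost-outermost (normal-order) reduction to normal form.
shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for index j, decrementing the indices it frees up.
subst :: Int -> Term -> Term -> Term
subst j s (Var i) = case compare i j of
  EQ -> s
  GT -> Var (i - 1)
  LT -> Var i
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

whnf :: Term -> Term                           -- weak head normal form
whnf (App f a) = case whnf f of
  Lam b -> whnf (subst 0 a b)
  f'    -> App f' a
whnf t = t

nf :: Term -> Term                             -- full normal form
nf (Var i)   = Var i
nf (Lam b)   = Lam (nf b)
nf (App f a) = case whnf f of
  Lam b -> nf (subst 0 a b)
  f'    -> App (nf f') (nf a)
```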

What happens then?

  • How is the "output" interpreted?
  • Is the output
    • a) the resulting term translated back into binary via the same encoding,
    • or b) the bitstream encoded by a false-terminated list of booleans?
  • (And what happens if the output does not form such a list?)

Or am I misunderstanding how BLC works?


Solution

  • I would suggest using http://www.ioccc.org/2012/tromp/hint.html as your primary reference. The Wikipedia page is probably fine, but his original notes on BLC are quite good.

    On the topic of input and output, he has this to say:

    More specifically, it defines a universal machine, which, from an input stream of bits, parses the binary encoding of a lambda calculus term, applies that to the remainder of input (translated to a lazy list of booleans, which have a standard representation in lambda calculus), and translates the evaluated result back into a stream of bits to be output.
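
    In other words, your option (b): the machine applies the parsed term to the rest of the input (represented as a list of booleans) and reads the evaluated result back as a false-terminated list of booleans, one bit per element; the result is not re-encoded through the term encoding of option (a).

    Reusing the hypothetical `Term` type and `nf` normalizer sketched in the question, and the standard data encoding Tromp refers to (bit 0 = λx.λy.x, bit 1 = λx.λy.y, pair ⟨M,N⟩ = λz.z M N, nil = false), the two translations might look roughly like this:

```haskell
-- Input side: the remainder of the bit stream becomes a lambda term.
-- Both subterms of each pair are closed here, so no index shifting
-- is needed when wrapping them in the pair's binder.
encodeBit :: Bool -> Term
encodeBit False = Lam (Lam (Var 1))            -- bit 0 = \x y -> x (true)
encodeBit True  = Lam (Lam (Var 0))            -- bit 1 = \x y -> y (false)

encodeInput :: [Bool] -> Term
encodeInput = foldr cons nil
  where
    cons b t = Lam (App (App (Var 0) (encodeBit b)) t)  -- <bit, rest>
    nil      = Lam (Lam (Var 0))               -- false terminates the list

-- Output side: normalize, then read the result back as a list of
-- booleans. If the term does not have list shape, this sketch simply
-- stops; a real interpreter might want to report an error instead.
decodeOutput :: Term -> [Bool]
decodeOutput t = case nf t of
  Lam (App (App (Var 0) h) rest) ->            -- a pair <head, tail>
    case nf h of
      Lam (Lam (Var 1)) -> False : decodeOutput rest   -- bit 0
      Lam (Lam (Var 0)) -> True  : decodeOutput rest   -- bit 1
      _                 -> []                  -- head is not a boolean
  _ -> []                                      -- nil (false) ends the list

-- The whole machine, then, is roughly:
run :: [Bool] -> Maybe [Bool]
run bits = do
  (prog, rest) <- parse bits
  pure (decodeOutput (App prog (encodeInput rest)))
```

    One caveat on this sketch: because `nf` normalizes eagerly, it only handles programs whose output is finite. Tromp's machine produces output lazily, bit by bit, which matters for programs that emit infinite streams.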