Today I will be looking at implementing in OCaml one of the most recent and praised encodings: Parametric HOAS (PHOAS). For reasons that I detail hereafter, the traditional way of implementing PHOAS can be cumbersome, and I propose an alternative here, which happens to be one of my very first uses of first-class modules.

The idea of Parametric HOAS goes back at least to Geoffrey Washburn and Stephanie Weirich, and was named this way and formalized in Coq by Adam Chlipala (I recommend the latter, which is a good read). In a nutshell, you encode binders in the language by OCaml binders (as in HOAS), but the term data structure is parameterized by the type `'x` of variables. We will call these *pre-terms*:

type 'x t = (* pre-term *)
  | Lam of ('x -> 'x t)
  | App of 'x t * 'x t
  | Var of 'x

For instance, this is a perfectly valid pre-term:

let ex : float t = Lam (fun x -> App (Var 3.14, Var x))

Its variables have floats dangling from them. But for a pre-term to become a *real* term, it has to make no assumption on the type of variables. Let’s encode this with an explicit universal quantification in a record:

type tm = {t : 'x. 'x t}

This parametrization rules out the previous pre-term. It is also what makes it impossible to define so-called “exotic” terms. In fact, there are as many inhabitants of this type as there are λ-terms.
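To make this concrete, here is a sketch (restating the definitions so it stands alone) of an “exotic” pre-term that inspects its variable; OCaml infers for it the specific type `bool t`, so it cannot be packed into `tm`:

```ocaml
type 'x t = (* pre-term *)
  | Lam of ('x -> 'x t)
  | App of 'x t * 'x t
  | Var of 'x

type tm = {t : 'x. 'x t}

(* This pre-term branches on the value of its variable, so its type is
   bool t, not 'x. 'x t ... *)
let exotic : bool t = Lam (fun x -> if x then Var x else Lam (fun y -> Var y))

(* ... hence wrapping it is a type error:
   let bad : tm = {t = exotic}        (rejected by the type checker) *)
```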

This is an example of a *real* term:

let ex : tm = {t = Lam (fun x -> Lam (fun y -> App (Var x, Var y)))}

So, each term comes in a little package that seals the representation of variables as “unknown”. If you open this little box, you can set it however you want, as long as, when you close it back, it is “unknown” again. This is how you define recursive functions on terms: a main function opens the package, and an auxiliary one works with the pre-term inside, instantiating variables as it pleases. Here is the classic example, a function counting the number of variables in a term:

let count : tm -> int =
  let rec aux : unit t -> int = function
    | Var () -> 1
    | App (m, n) -> aux m + aux n
    | Lam f -> aux (f ())
  in
  fun {t} -> aux t

When I go under a λ-abstraction, I fill the open holes with `()` (because here there is no information to carry); so during the recursion, the pre-term can have variables instantiated with `()`.

This is all well and good, but this encoding can be quite cumbersome for practical reasons. Look again at the definition of pre-terms. Imagine that there are not 3 but 30 constructors, with not one but five kinds of variables, i.e. five parameters: each time, I would have to pass them all to each subterm. Who wants to read 30 lines of code of this kind?

type ('x, 'y, 'z, 't, 'u) t =
  | Var1 of 'x
  | Var2 of 'y
  | App of ('x, 'y, 'z, 't, 'u) t * ('x, 'y, 'z, 't, 'u) t

Variable types are never changed, and are just passed on untouched to the next subterm. It is like defining a recursive function that never lets one of its arguments vary: you want to lambda-drop it. In Coq, this is easy thanks to sections, which give you the illusion of lambda-dropping, but OCaml does not have this feature. Nonetheless, let us do exactly this: lambda-drop a parametric type definition into… a module.

First, we are going to factor out all these occurrences of the `'x` type parameter by lifting type `t` into a functor.

module type X = sig type x end

module Tm (X : X) = struct
  open X
  type t =
    | Lam of (x -> t)
    | App of t * t
    | Var of x
end

The pre-term is no longer a parametric type: it is a functor (a parametric structure). Note that the type of variables is “lambda-dropped” to the parameter of the functor, so the definition of type `t` is much less verbose; and it would stay as concise with 5 different kinds of variables. For instance, this is a valid pre-term:

module Pt = Tm (struct type x = float end)
let ex : Pt.t = Pt.(Lam (fun x -> App (Var x, Var 3.14)))

OCaml does not let us instantiate functors in type definitions, so we must do this in two steps: first declare the module by instantiating the functor, and then give an inhabitant to it.

Once again, for a pre-term to become a real term, it must make no assumption about its “variable type” module. Let’s encode this as a functor, from the structure defining variables to a structure defining an inhabitant of the pre-term type:

module type TM = functor (X:X) -> sig val t : Tm(X).t end

Now, thanks to first-class modules, we have the ability to treat an inhabitant of this module type as an ordinary value:

type tm = (module TM)

Here it is, that’s our type of terms.

Let me give you an example of a term:

let ex : tm =
  (module functor (X : X) -> struct
    module T = Tm (X)
    let t = T.(Lam (fun x -> Lam (fun y -> App (Var x, Var y))))
  end : TM)

The value is a first-class module that is a functor, just as the `TM` module type dictates. Yes, it is substantially more to write than in the previous section, but the overhead is fixed. Note that the type annotation on the module is necessary (I don’t know why).

A function on this term representation, e.g. the `count` example from before, first has to unpack the first-class module, instantiate it with the right kind of variables `X`, and grab the pre-term `t` contained in it; then an auxiliary recursive function can traverse this pre-term. All in all, we get:

let count : tm -> int = fun (module M) ->
  let module X = struct type x = unit end in
  let module MX = M (X) in
  let module TX = Tm (X) in
  let open TX in
  let rec aux : TX.t -> int = function
    | Lam f -> aux (f ())
    | App (t, u) -> aux t + aux u
    | Var () -> 1
  in
  aux MX.t

Again, it seems like a lot to type, but the overhead is constant, so it has better chances to scale.

Here was my implementation of PHOAS with “lambda-dropping” of type arguments, thanks to first-class modules. I guess that this trick could be useful for other large type definitions involving constant parameters: for instance, do you know the recursion schemes programming pattern? Also, try to encode the *typed* λ-calculus this way, using GADTs; you will need type `x` to be parametric in a type `'a`, therefore encoding rank-2 polymorphism. I did not get there yet.
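For the curious, here is my untested guess at what the typed version’s signatures might look like (a sketch only; it shadows the `X` and `Tm` names from above):

```ocaml
(* Sketch: the variable representation becomes a type family 'a x, so
   each variable carries its object-level type 'a; the functor parameter
   now packages a parametric type, i.e. rank-2 polymorphism. *)
module type X = sig type 'a x end

module Tm (X : X) = struct
  open X
  type _ t =
    | Lam : ('a x -> 'b t) -> ('a -> 'b) t
    | App : ('a -> 'b) t * 'a t -> 'b t
    | Var : 'a x -> 'a t
end

(* For instance, instantiating variables by their denotation: *)
module P = Tm (struct type 'a x = 'a end)
let id_tm : ('a -> 'a) P.t = P.Lam (fun x -> P.Var x)
```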

As a bonus, here is the boring but still relevant identity function on λ-terms, which has the advantage of returning a term (unlike `count`):

let id : tm -> tm = fun (module M) ->
  (module functor (X : X) -> struct
    let t =
      let module MX = M (X) in
      let module TX = Tm (X) in
      let open TX in
      let rec id : t -> t = function
        | Lam f -> Lam (fun x -> id (f x))
        | App (t, u) -> App (id t, id u)
        | Var x -> Var x
      in
      id MX.t
  end : TM)


Say we have a continuation monad, but its answer type `o` is fixed in advance:

type o
type 'a m = ('a -> o) -> o

let ret : 'a. 'a -> 'a m = fun v k -> k v
let bind : 'a 'b. 'a m -> ('a -> 'b m) -> 'b m =
  fun x f k -> x (fun v -> f v (fun v -> k v))

Usually, we consider `o`, the type of final answers to CPS, to be universally quantified, so that it can be instantiated to whatever we want. Then, as usual, we can run the computation by applying the identity continuation, i.e. we can instantiate type `o` with `'a`:

let run : 'a. 'a m -> 'a = fun f -> f (fun x -> x)

But here, we specified explicitly that `o` was *not* our choice but was fixed (it is abstract), so the previous line gives a typing error: the identity function is *not* a valid continuation to give to our monadic value. I have seen this situation when using delimited continuations: the answer type gets instantiated when you use a local effect somewhere in your program. For instance, one-pass transformations are of this form; their type is along the lines of `exp -> (triv -> ser) -> ser` (`triv` is for trivial term, `ser` for serious term), because when they return from their final continuation, they must be in the context of a serious term.

What to do then? How do you run a CPS computation when you cannot choose the answer type? Well, there is a way out, involving exceptions: instead of the identity continuation, pass a “trick continuation” that, when called at the end of the computation, will jump out of it (raise an exception) and return to a top-level exception handler. If the initial continuation is actually called, and in the dynamic scope of the exception handler, then we’ll have our result. Here it is:

let jump : type a b. (a -> b m) -> a -> b = fun f x ->
  let module M = struct exception E of b end in
  try ignore (f x (fun x -> raise (M.E x))); failwith "f was not pure"
  with M.E i -> i

A local exception is defined, which contains a value of the return type `b`. This exception is eventually raised in the initial continuation, and the actual return value of function `f` is ignored, because it is not supposed to return anymore.

So, as you see, `jump` turns a function defined in CPS, `f`, into an “equivalent” direct-style function, `jump f`. Careful! It is equivalent if `f` applies its initial continuation, i.e. if `f` has a pure, direct-style counterpart. Otherwise, e.g. if `f` drops its continuation at some point, then the exception might not be raised, `f` might terminate, and `jump f` fail with my error `Failure("f was not pure")`.

Let’s define the function λ*x*. (*x* + 1) = 0 in CPS and then test it on 0:

let isz x = ret (x = 0)
let succ x = ret (x + 1)

let () = assert (jump (fun x -> bind (succ x) isz) 0 = false)

I can even use `callcc` in my programs, which proves that *some* effects are actually OK:

let callcc : 'a 'b. (('a -> 'b m) -> 'a m) -> 'a m =
  fun f k -> f (fun v x -> k v) (fun v -> k v)

let () =
  assert (jump (fun x -> callcc (fun k -> bind (k x) (fun v -> ret (1 + v)))) 1 = 1)

What I cannot do is instantiate `o` and bypass the initial continuation:

type o = An_inhabitant
let () = ignore (jump (fun x k -> An_inhabitant) 1)

This last example raises the exception `Failure("f was not pure")`.

Now my questions to you, acute and knowledgeable reader, are:

- First, is this `jump` operator as well-behaved as I think? Precisely what kind of effects can trigger an error? Also, can we make the exception `M.E` escape its scope?
- Secondly, what does this mean, on the other side of the Curry-Howard looking glass? How can I interpret this result proof-theoretically? `jump` is a theorem close to, but weaker than, double negation elimination, it seems.

I put the code snippets in the order that I think is most pedagogical, and leave it to the reader to reorganize them into the right one.

First, as I hinted previously, we are annotating formulas, conjunctions and disjunctions with their corresponding OCaml type, in order to reason on these types:

type 'a atom = int

type 'a form =
  | X : 'a atom -> 'a form
  | And : 'a form * 'b form -> ('a * 'b) form
  | Or : 'a form * 'b form -> ('a, 'b) union form

type 'a conj =
  | X : 'a atom -> 'a conj
  | And : 'a atom * 'b conj -> ('a * 'b) conj

type 'a disj =
  | Conj : 'a conj -> 'a disj
  | Or : 'a conj * 'b disj -> ('a, 'b) union disj

What we are eventually looking for is a function `dnf` mapping an `'a form` to a `'b disj`, but now these two must be related: they must represent two *equivalent* formulae. So, correcting what I just said: `dnf` must return the pair of a `'b disj` and a proof that `'a` and `'b` are equivalent. This pair is an existential type, which is easily coded with a GADT (we do similarly for conjunctions):

type 'a cnj = Cnj : 'b conj * ('a, 'b) equiv -> 'a cnj
type 'a dsj = Dsj : 'b disj * ('a, 'b) equiv -> 'a dsj

Let’s leave out the definition of `equiv` for a while. Now the code from the previous post is fairly easily adapted:

let rec conj : type a b. a conj -> b conj -> (a * b) cnj = fun xs ys ->
  match xs with
  | X x -> Cnj (And (x, ys), refl)
  | And (x, xs) ->
    let Cnj (zs, e) = conj xs ys in
    Cnj (And (x, zs), lemma0 e)

let rec disj : type a b. a disj -> b disj -> (a, b) union dsj = fun xs ys ->
  match xs with
  | Conj c -> Dsj (Or (c, ys), refl)
  | Or (x, xs) ->
    let Dsj (zs, e) = disj xs ys in
    Dsj (Or (x, zs), lemma1 e)

let rec map : type a b. a conj -> b disj -> (a * b) dsj = fun x -> function
  | Conj y -> let Cnj (z, e) = conj x y in Dsj (Conj z, e)
  | Or (y, ys) ->
    let Cnj (z, e1) = conj x y in
    let Dsj (t, e2) = map x ys in
    Dsj (Or (z, t), lemma2 e1 e2)

let rec cart : type a b. a disj -> b disj -> (a * b) dsj = fun xs ys ->
  match xs with
  | Conj c -> map c ys
  | Or (x, xs) ->
    let Dsj (z, e1) = map x ys in
    let Dsj (t, e2) = cart xs ys in
    let Dsj (u, e3) = disj z t in
    Dsj (u, lemma3 e1 e2 e3)

let rec dnf : type a. a form -> a dsj = function
  | X x -> Dsj (Conj (X x), refl)
  | Or (a, b) ->
    let Dsj (c, e1) = dnf a in
    let Dsj (d, e2) = dnf b in
    let Dsj (e, e3) = disj c d in
    Dsj (e, lemma4 e1 e2 e3)
  | And (a, b) ->
    let Dsj (c, e1) = dnf a in
    let Dsj (d, e2) = dnf b in
    let Dsj (e, e3) = cart c d in
    Dsj (e, lemma5 e1 e2 e3)

It seems more verbose, but since the functions now return existentials, we need to deconstruct them and pass them around. I abstracted over the combinators that compose the proofs of equivalence, `lemma0`…`lemma5`; we’ll deal with them in a moment. For now, you can replace them by `Obj.magic` and read off their types with `C-c C-t` to see if they make sense logically. Look at the last function’s type. It states, as expected: for any formula `a`, there exists a disjunctive normal form `b` such that `a` and `b` are equivalent.

Now on this subject, what is it for two types to be equivalent? Well, that’s the “trick”: let’s just use our dear old Curry-Howard correspondence! `'a` and `'b` are equivalent if there are two functions `'a -> 'b` and `'b -> 'a` (provided of course that we swear to use only the purely functional core of OCaml when giving them):

type ('a, 'b) equiv = ('a -> 'b) * ('b -> 'a)

Now we can state and prove a number of small results on equivalence with respect to the type constructors we’re using (pairs and unions). Just help yourself to these if you’re preparing an exercise sheet on Curry-Howard :)

(* a = a *)
let refl : type a. (a, a) equiv = (fun a -> a), (fun a -> a)

(* a = b -> b = a *)
let sym : type a b. (a, b) equiv -> (b, a) equiv =
  fun (ab, ba) -> (fun b -> ba b), (fun a -> ab a)

(* a = b -> b = c -> a = c *)
let trans : type a b c. (b, c) equiv -> (a, b) equiv -> (a, c) equiv =
  fun (bc, cb) (ab, ba) -> (fun a -> bc (ab a)), (fun c -> ba (cb c))

(* a = b -> c = d -> c * a = d * b *)
let conj_morphism : type a b c d.
  (a, b) equiv -> (c, d) equiv -> (c * a, d * b) equiv =
  fun (ab, ba) (cd, dc) ->
    (fun (c, a) -> cd c, ab a), (fun (c, b) -> dc c, ba b)

(* a * b = b * a *)
let conj_comm : type a b. (a * b, b * a) equiv =
  (fun (x, y) -> y, x), (fun (x, y) -> y, x)

(* (a * b) * c = a * (b * c) *)
let conj_assoc : type a b c. ((a * b) * c, a * (b * c)) equiv =
  (fun ((x, y), z) -> x, (y, z)), (fun (x, (y, z)) -> (x, y), z)

(* a = b -> c = d -> c + a = d + b *)
let disj_morphism : type a b c d.
  (a, b) equiv -> (c, d) equiv -> ((c, a) union, (d, b) union) equiv =
  fun (ab, ba) (cd, dc) ->
    (function Inl c -> Inl (cd c) | Inr a -> Inr (ab a)),
    (function Inl d -> Inl (dc d) | Inr b -> Inr (ba b))

(* (a + b) + c = a + (b + c) *)
let disj_assoc : type a b c.
  (((a, b) union, c) union, (a, (b, c) union) union) equiv =
  (function Inl (Inl a) -> Inl a | Inl (Inr b) -> Inr (Inl b) | Inr c -> Inr (Inr c)),
  (function Inl a -> Inl (Inl a) | Inr (Inl b) -> Inl (Inr b) | Inr (Inr c) -> Inr c)

(* a * (b + c) = (a * b) + (a * c) *)
let conj_distrib : type a b c.
  (a * (b, c) union, (a * b, a * c) union) equiv =
  (function a, Inl b -> Inl (a, b) | a, Inr c -> Inr (a, c)),
  (function Inl (a, b) -> a, Inl b | Inr (a, c) -> a, Inr c)

Finally, thanks to these primitive combinators, we can prove the lemmas we needed. Again, these are amusing little exercises.

let lemma0 : type a b c d. (a * b, c) equiv -> ((d * a) * b, d * c) equiv =
  fun e -> trans (conj_morphism e refl) conj_assoc

let lemma1 : type a b c d.
  ((a, b) union, d) equiv -> (((c, a) union, b) union, (c, d) union) equiv =
  fun e -> trans (disj_morphism e refl) disj_assoc

let lemma2 : type a c d u v.
  (a * c, u) equiv -> (a * d, v) equiv ->
  (a * (c, d) union, (u, v) union) equiv =
  fun e1 e2 -> trans (disj_morphism e2 e1) conj_distrib

let lemma3 : type a b c d e f.
  (a * b, c) equiv -> (d * b, e) equiv -> ((c, e) union, f) equiv ->
  ((a, d) union * b, f) equiv =
  fun e1 e2 e3 ->
    trans e3
      (trans (disj_morphism e2 e1)
         (trans (disj_morphism conj_comm conj_comm)
            (trans conj_distrib conj_comm)))

let lemma4 : type a b c d e.
  (a, c) equiv -> (b, d) equiv -> ((c, d) union, e) equiv ->
  ((a, b) union, e) equiv =
  fun e1 e2 e3 -> trans e3 (disj_morphism e2 e1)

let lemma5 : type a b c d e.
  (a, c) equiv -> (b, d) equiv -> (c * d, e) equiv -> (a * b, e) equiv =
  fun e1 e2 e3 -> trans e3 (conj_morphism e2 e1)

Note that I only needed the previous primitives to prove these lemmas (and as such to define my functions), so we could even make the type `equiv` abstract, provided that we give a complete set of primitives (which is not the case here). Although I’m not sure what it would buy us…

Anyway. That’s my solution! What’s yours?

Given a formula built out of conjunctions, disjunctions and atoms, return its disjunctive normal form, *in big step* or *natural semantics*, that is, not by repeatedly applying the distributivity and associativity rules, but in a single function run. Before you go any further, please give it a try and send me your solution!

Formulas are described by the type `form`:

type atom = int

type form =
  | X of atom
  | And of form * form
  | Or of form * form

To ensure the correctness of the result, I represent disjunctive normal forms by a restriction of this type, `disj`, stratifying it into conjunctions and disjunctions:

type conj = X of atom | And of atom * conj
type disj = Conj of conj | Or of conj * disj

There are actually two restrictions at stake here: first, conjunctions cannot contain disjunctions, and second, all operators are necessarily right-associative. Constructor `Conj` is just a logically silent coercion. If you look carefully enough, you will notice that `conj` (resp. `disj`) is isomorphic to a non-empty list of `atom`s (resp. of `conj`s).

The first step is to lift the second restriction (associativity), and prove that we can always build a conjunction of `conj`s, resp. a disjunction of `disj`s. Easy enough if you think about lists: these functions look like concatenation.

let rec conj : conj -> conj -> conj = fun xs ys ->
  match xs with
  | X x -> And (x, ys)
  | And (x, xs) -> And (x, conj xs ys)

let rec disj : disj -> disj -> disj = fun xs ys ->
  match xs with
  | Conj c -> Or (c, ys)
  | Or (x, xs) -> Or (x, disj xs ys)

Then, we will lift the first restriction, using distributivity. We must show that it is always possible to form a conjunction. First, we show how to build the conjunction of a `conj` and a `disj`:

let rec map : conj -> disj -> disj = fun x -> function
  | Conj y -> Conj (conj x y)
  | Or (y, ys) -> Or (conj x y, map x ys)

The first case is trivial, and the second reads as the distributivity of conjunction over disjunction. By analogy to lists again, I called this function `map` because it maps the function `conj x` over all cells of the list.

Next, we show how to build the conjunction of two `disj`s:

let rec cart : disj -> disj -> disj = fun xs ys ->
  match xs with
  | Conj c -> map c ys
  | Or (x, xs) -> disj (map x ys) (cart xs ys)

Considering the meaning of the previously defined functions, the first case is trivial, and the second, again, reads as distributivity, only in the other direction. I called this function `cart` because it computes the cartesian product of the two “lists” passed as arguments (only on non-empty lists).

Now we can easily put together the final function computing the DNF:

let rec dnf : form -> disj = function
  | X x -> Conj (X x)
  | Or (a, b) -> disj (dnf a) (dnf b)
  | And (a, b) -> cart (dnf a) (dnf b)

It is easy to see that all functions terminate: they are primitive recursive.

Wait, let’s not forget to test our contraption:

let () =
  assert (dnf (Or (And (X 1, X 2), X 3))
          = Or (And (1, X 2), Conj (X 3)));
  assert (dnf (And (Or (X 1, X 2), X 3))
          = Or (And (1, X 3), Conj (And (2, X 3))));
  assert (dnf (And (Or (And (X 0, X 1), X 2), And (X 3, X 4)))
          = Or (And (0, And (1, And (3, X 4))),
                Conj (And (2, And (3, X 4)))))

That’s my solution. Reader, is there another one? Is there a better explanation, for instance based on Danvy’s small-step to big-step derivation? I believe there is…

Technically, there still could be bugs in this code. Prove that it is correct w.r.t. the small-step rewrite rules, using only OCaml and GADTs. Here is the beginning of an idea: annotate `form`, `conj` and `disj` with their meaning in terms of OCaml types:

type ('a, 'b) union = Inl of 'a | Inr of 'b

type 'a atom = int

type 'a form =
  | X : 'a atom -> 'a form
  | And : 'a form * 'b form -> ('a * 'b) form
  | Or : 'a form * 'b form -> ('a, 'b) union form

type 'a conj =
  | X : 'a atom -> 'a conj
  | And : 'a atom * 'b conj -> ('a * 'b) conj

type 'a disj =
  | Conj : 'a conj -> 'a disj
  | Or : 'a conj * 'b disj -> ('a, 'b) union disj

(the definition of `union` is irrelevant here), state the relation between equivalent types as a type:

type ('a, 'b) equiv =
  | Refl : ('a, 'a) equiv
  | Trans : ('a, 'b) equiv * ('b, 'c) equiv -> ('a, 'c) equiv
  | AssocA : (('a * 'b) * 'c, 'a * ('b * 'c)) equiv
  | AssocO : ((('a, 'b) union, 'c) union, ('a, ('b, 'c) union) union) equiv
  | DistribL : ('a * ('b, 'c) union, ('a * 'b, 'a * 'c) union) equiv
  | DistribR : (('b, 'c) union * 'a, ('b * 'a, 'c * 'a) union) equiv

pack up a solution as an existential: an equivalence proof together with a DNF:

type 'a dnf = Dnf : ('a, 'b) equiv * 'b disj -> 'a dnf

and code a function:

let dnf : type a. a form -> a dnf = function
  | _ -> assert false (* TODO *)

Ok fair enough, it’s not an exercise, it’s something I haven’t been able to put together yet ;)

Encoding some simply-typed languages with GADTs is now routine for a lot of OCaml programmers. You can even take (kind of) advantage of (some form of) convenient binding representation, like (weak) HOAS; you then use OCaml variables to denote your language’s variables. But what about pattern-matching? Patterns are possibly “deep”, i.e. they bind several variables at a time, and they don’t respect the usual discipline that a variable is bound for exactly its subterm in the AST.

It turns out that there is an adequate encoding, which relies on two simple ideas. The first is to treat variables in patterns as nameless placeholders bound by λ-abstractions on the right side of the arrow (this is how e.g. Coq encodes matches: `match E₁ with (y, z) -> E₂` actually is sugar for `match E₁ with (_, _) -> fun y z -> E₂`); the second is to thread and accumulate type arguments in a GADT, much like we demonstrated in our `printf` example recently.

The idea probably extends seamlessly to De Bruijn indices, by threading an explicit environment throughout the term. It stemmed from a discussion on LF encodings of pattern-matching with Francisco over lunch yesterday: what I will show also makes it possible to represent pattern-matching adequately in LF, which I do not think was ever done this way before.

Let’s start with two basic type definitions:

type ('a, 'b) union = Inl of 'a | Inr of 'b
type ('a, 'b) pair = Pair of 'a * 'b

First, I encode simply-typed λ-expressions with sums and products, in the very standard way with GADTs: I annotate every constructor by the (OCaml) type of its denotation.

type 'a exp =
  | Lam : ('a exp -> 'b exp) -> ('a -> 'b) exp
  | App : ('a -> 'b) exp * 'a exp -> 'b exp
  | Var : 'a -> 'a exp
  | Pair : 'a exp * 'b exp -> ('a, 'b) pair exp
  | Inl : 'a exp -> ('a, 'b) union exp
  | Inr : 'b exp -> ('a, 'b) union exp
  | Unit : unit exp

At this point, I only included data type *constructors*, not their *destructors*. These are replaced by a pattern-matching construct: it takes a scrutinee of type `'s` and a list of branches, each returning a value of the same type `'c`.

| Match : 's exp * ('s, 'c) branch list -> 'c exp

Now, each branch is the pair of a pattern, possibly deep, possibly containing variables, and an expression where all these variables are bound.

(* 's = type of scrutinee; 'c = return type *)
and ('s, 'c) branch =
  | Branch : ('s, 'a, 'c exp) patt * 'a -> ('s, 'c) branch

To account for these bindings, I use a trick when defining patterns that is similar to the one used for `printf` with GADTs. In the type of the `Branch` constructor, the type `'a` is an “accumulator” for all variables appearing in the pattern, eventually returning a `'c exp`. For instance, annotation `'a` for a pattern that binds two variables of types `'a -> 'b` and `'a` would be `('a -> 'b) exp -> 'a exp -> 'c exp`.

Let’s define type `patt`. Note that it also carries and checks the annotation `'s` for the type of the scrutinee. The first three cases are quite easy:

(* 's = type of scrutinee; 'a = accumulator for the variables to bind;
   'c = return type *)
and ('s, 'a, 'c) patt =
  | PUnit : (unit, 'c, 'c) patt
  | PInl : ('s, 'a, 'c) patt -> (('s, 't) union, 'a, 'c) patt
  | PInr : ('t, 'a, 'c) patt -> (('s, 't) union, 'a, 'c) patt

Now, the variable case is just a nameless dummy that “stacks up” one more argument in the “accumulator”, i.e. what will be the type of the right-hand side of the branch:

| X : ('s, 's exp -> 'c, 'c) patt

Finally, the pair case is the only binary node. It needs to thread the accumulator, first to the left node, then to the right.

| PPair : ('s, 'a, 'b) patt * ('t, 'b, 'c) patt -> (('s, 't) pair, 'a, 'c) patt

Note that it is possible to swap the two sides of the pair; we would then bind variables in the opposite order on the right-hand side.

That’s the encoding. Note that it ensures only well-typing of terms, not exhaustiveness of patterns (which is another story that I would like to tell in a future post).

Here are a couple of example encoded terms: first the shallow, OCaml value, then its representation:

let ex1 : 'a 'b. ('a, 'b) union -> ('b, 'a) union = fun x ->
  match x with
  | Inl x -> Inr x
  | Inr x -> Inl x

let ex1_encoded : 'a 'b. (('a, 'b) union -> ('b, 'a) union) exp =
  Lam (fun x -> Match (x, [
    Branch (PInl X, fun x -> Inr x);
    Branch (PInr X, fun x -> Inl x);
  ]))

let ex2 : 'a 'b. ((('a, 'b) union, ('a, 'b) union) pair -> ('a, 'b) union) =
  fun x ->
  match x with
  | Pair (x, Inl _) -> x
  | Pair (Inr _, x) -> x
  | Pair (_, Inr x) -> Inr x

let ex2_encoded : 'a 'b. ((('a, 'b) union, ('a, 'b) union) pair -> ('a, 'b) union) exp =
  Lam (fun x -> Match (x, [
    Branch (PPair (X, PInl X), (fun x _ -> x));
    Branch (PPair (PInr X, X), (fun _ x -> x));
    Branch (PPair (X, PInr X), (fun _ x -> Inr x));
  ]))

Finally, we can code an evaluator for this language. It takes an expression to its (well-typed) denotation. The first few lines are standard:

let rec eval : type a. a exp -> a = function
  | Lam f -> fun x -> eval (f (Var x))
  | App (m, n) -> eval m (eval n)
  | Var x -> x
  | Pair (m, n) -> Pair (eval m, eval n)
  | Inl m -> Inl (eval m)
  | Inr m -> Inr (eval m)
  | Unit -> ()
  | Match (m, bs) -> branches (eval m) bs

Now for pattern-matching, we call an auxiliary function `branches` that will try each branch sequentially:

and branches : type s a. s -> (s, a) branch list -> a = fun v -> function
  | [] -> failwith "pattern-matching failure"
  | Branch (p, e) :: bs ->
    try eval (branch e (p, v)) with Not_found -> branches v bs

A branch is tested by function `branch`, which is where the magic happens: it matches the pattern against the value of the scrutinee, and returns a (potentially only partially applied) resulting expression. The first cases are self-explanatory:

and branch : type s a c. a -> (s, a, c) patt * s -> c = fun e -> function
  | PUnit, () -> e
  | PInl p, Inl v -> branch e (p, v)
  | PInr p, Inr v -> branch e (p, v)
  | PInl _, Inr _ -> raise Not_found
  | PInr _, Inl _ -> raise Not_found

In the variable case, we know that `e` is a function that expects an argument: the value `v` of the scrutinee.

| X, v -> e (Var v)

The pair case is simple and beautiful: we just compose the application of `branch` on both sub-patterns.

| PPair (p, q), Pair (v, w) -> branch (branch e (p, v)) (q, w)

That’s it. Nice eh? There are two obvious questions that I leave for future posts: can we compile this encoding down to simple case statements, with a guarantee of type preservation? And could we enhance the encoding so as to statically guarantee exhaustiveness?

See you soon!

Besides the enhanced form and better explanations, we included a completely new section on what it means for NbE to be written in Continuation-Passing Style, that I am particularly excited about. This allowed us to extend our typeful NbE formalization beyond the minimal λ-calculus to sums and **call/cc** (which is known to be difficult). Taking our code, you can write a program with **call/cc** and **if** statements, and partially evaluate it: all redexes will be removed and your code will be specialized. All this, as before, is done *typefully*, thanks to OCaml’s GADTs: this means that the transformation is guaranteed to map well-typed terms to well-typed normal forms.

What I really liked about working on program transformations with GADTs is that they enable an efficient methodology to devise your transformations, and to read off from them the (typed) target language. Let me give you an example. Say we write the canonical, direct-style evaluator for the λ-calculus:

let rec eval = function
  | Lam f -> Fun (fun x -> eval (f (Var x)))
  | App (m, n) -> let Fun f = eval m in f (eval n)
  | Var x -> x

If the input language is simply typed:

type 'a tm =
  | Lam : ('a tm -> 'b tm) -> ('a -> 'b) tm
  | App : ('a -> 'b) tm * 'a tm -> 'b tm
  | Var : 'a vl -> 'a tm

then so can be the output language:

type 'a vl = Fun : ('a vl -> 'b vl) -> ('a -> 'b) vl

and the type of `eval` then guarantees type preservation: `type a. a tm -> a vl`. Up to here, no big news. Now what if I CPS-transform the code above? Following the usual call-by-value CPS transformation, I get:

type o

let rec eval : type a. a tm -> (a vl -> o) -> o = function
  | Lam f -> fun k -> k (Fun (fun x k -> eval (f (Var x)) k))
  | App (m, n) -> fun k -> eval m (fun (Fun v1) -> eval n (fun v2 -> v1 v2 k))
  | Var x -> fun k -> k x

My input language is unchanged, but what about the output values? As you can guess from the `Lam` and `App` cases, the type of constructor `Fun` has changed. Using type inference, type errors and `C-c C-t` in my `tuareg-mode`, I can read off the new type it should have:

type 'a vl = Fun : ('a vl -> ('b vl -> o) -> o) -> ('a -> 'b) vl

Yes, this is the type of CPS values! For instance, I can write the CPS-transformed applicator:

let app : type a b. ((a -> b) -> a -> b) vl =
  Fun (fun (Fun f) k -> k (Fun (fun x k -> f x k)))

The same way, if I write the usual Normalization by Evaluation algorithm from typed values to typed normal forms and CPS-transform it, I can deduce the specialized syntax of normal forms in CPS:

S ::= let v = R M in S | return k M

M ::= λxk. S | R

R ::= x | v

If this excites you or puzzles you, come read our draft, and do not hesitate to leave any comment below!

Have you ever implemented a quick prototype for a language, and been annoyed by the lack of a definition mechanism? For instance, you define a small calculus and encode a few constructs to test it, but end up with outputs like:

((\n. \p. \f. \x. n f (p f x)) (\f. \x. f (f x)) (\f. \x. f (f x))) (\b. (\b. \x. \y. b x y) b (\x. \y. x) (\x. \y. x))

when you only wanted the system to print:

2 + 2

(these two constants being Church-encoded in the λ-calculus, FWIW).

One possibility, the Right One, is to add a definition construct to your language, together with a map from name to definition:

type exp =
  | ...
  | Def of string

type env = (string * exp) list
type program = env * exp

Some would call this a *deep encoding* of definitions. But it is unfortunately very boring: every function traversing your programs now has to handle the extra case by looking up the definition.
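To see the burden concretely, here is a hypothetical sketch of what even a trivial traversal looks like with the deep encoding (the names `size` and `two` are mine, not from the post):

```ocaml
type exp =
  | Var of string
  | App of exp * exp
  | Def of string                     (* reference to a named definition *)

type env = (string * exp) list

(* Every traversal must now carry the environment and handle Def. *)
let rec size (env : env) : exp -> int = function
  | Var _ -> 1
  | App (m, n) -> size env m + size env n
  | Def d -> size env (List.assoc d env)

let () =
  let env = ["two", App (Var "f", App (Var "f", Var "x"))] in
  assert (size env (App (Def "two", Def "two")) = 6)
```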

Here is another, *shallow* solution: keep the expressions as they are, and just have a global, reverse map from expression pointers to names. Each time you want to pretty-print a term, first look whether it is associated with a name in the table. Let me implement that.

First, we will need a simplistic map module with pointer equality comparison:

module QMap (T : sig type t type u end) : sig
  open T
  val register : u -> t -> t
  val lookup_or : (t -> u) -> t -> u
end = struct
  let tbl = ref []
  let register v x = tbl := (x, v) :: !tbl; x
  let lookup_or f x = try List.assq x !tbl with Not_found -> f x
end

It is implemented as an association list (we can't really do better than this), and the `lookup_or` function first tries to find an entry with the same memory address (using `List.assq`), or applies the given function in case of failure.
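To make the semantics of the functor concrete, here is a small self-contained demonstration on a toy instantiation (my own example, not from the post):

```ocaml
module QMap (T : sig type t type u end) : sig
  open T
  val register : u -> t -> t
  val lookup_or : (t -> u) -> t -> u
end = struct
  let tbl = ref []
  let register v x = tbl := (x, v) :: !tbl; x
  let lookup_or f x = try List.assq x !tbl with Not_found -> f x
end

(* Toy instantiation: naming int lists. *)
module M = QMap (struct type t = int list type u = string end)

(* A computed list, registered under a name. *)
let ones = M.register "ones" (List.init 3 (fun _ -> 1))

let () = assert (M.lookup_or (fun _ -> "?") ones = "ones")
(* A structurally equal but freshly allocated list is NOT found: *)
let () = assert (M.lookup_or (fun _ -> "?") (List.init 3 (fun _ -> 1)) = "?")
```

Note that lookup is keyed on the memory address, not the structure: only the very value returned by `register` is recognized.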

Then we define our language (here the λ-calculus), and instantiate the functor with a target type of strings:

type exp =
  | Lam of string * exp
  | App of exp * exp
  | Var of string

include QMap (struct type t = exp type u = string end)

Let’s now encode some (classic) example expressions for testing:

(* church numerals *)
let num n =
  let rec aux = function
    | 0 -> Var "x"
    | n -> App (Var "f", aux (n-1))
  in
  register (string_of_int n) (Lam ("f", Lam ("x", aux n)))

(* addition *)
let add =
  register "add"
    (Lam ("n", Lam ("p", Lam ("f", Lam ("x",
       App (App (Var "n", Var "f"),
            App (App (Var "p", Var "f"), Var "x")))))))

Notice how, as we define these encodings, we give them a name by registering them in the map. Now defining the pretty-printer:

let rec to_string () =
  let rec lam = function
    | Lam (x, m) -> "\\" ^ x ^ ". " ^ lam m
    | e -> app e
  and app = function
    | App (m, n) -> app m ^ " " ^ to_string () n
    | f -> to_string () f
  in
  lookup_or (function
    | Var s -> s
    | m -> "(" ^ lam m ^ ")")

Notice the use of the function `lookup_or`? At each iteration, we look in the table for a name, and either return it or continue pretty-printing. (Before you ask: the unit argument to `to_string` is there to convince OCaml that we are indeed defining a proper recursive function, which I wish it could figure out by itself.)

That’s it! Now we can ask, say, to print a large term composed by our previous definitions:

print_string (to_string () (App (App (add, num 2), num 2)));;

and get this output:

(add 2 2)

while retaining the structure of the underlying term. I could then go ahead and define various transformations, interpreters, etc. As long as they preserve the memory location of unchanged terms (i.e., do not reallocate too much), my pretty-printing will stay, well… pretty.

Does this scale up? Probably not. What else can we do with this trick? Memoization anyone?
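Since the post ends on that teaser, here is a minimal sketch of what physical-equality memoization could look like (my own code, in the spirit of `QMap`):

```ocaml
(* Memoize f using the physical identity of its argument as the key,
   with List.assq playing the same role as in QMap. *)
let memoize_phys f =
  let cache = ref [] in
  fun x ->
    try List.assq x !cache
    with Not_found ->
      let y = f x in
      cache := (x, y) :: !cache;
      y

(* Example: count how many times the underlying function really runs. *)
let calls = ref 0
let length_memo = memoize_phys (fun xs -> incr calls; List.length xs)

let xs = [1; 2; 3]
let () = assert (length_memo xs = 3)
let () = assert (length_memo xs = 3)  (* second call hits the cache *)
let () = assert (!calls = 1)
```

Like the pretty-printer, this only pays off when callers pass the very same value twice; the cache also never forgets, so this is a sketch, not production code.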

Since the introduction of GADTs in OCaml, a whole new realm of applications has emerged, the most well-known being to faithfully represent typed languages: they make it possible to define compact and correct interpreters, and type-preserving program transformations. OK, if you have never seen this before, here is a small snippet that should be self-explanatory:

type _ exp =
  | Val : 'a -> 'a exp
  | Eq : 'a exp * 'a exp -> bool exp
  | Add : int exp * int exp -> int exp

let rec eval : type a. a exp -> a = function
  | Val x -> x
  | Eq (e1, e2) -> eval e1 = eval e2
  | Add (e1, e2) -> eval e1 + eval e2
;;

assert (eval (Eq (Add (Val 2, Val 2), Val 4)))
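To see the static guarantee at work, here is a small follow-up sketch of mine (not from the post): well-typed nesting goes through, while ill-typed syntax is rejected at compile time.

```ocaml
type _ exp =
  | Val : 'a -> 'a exp
  | Eq : 'a exp * 'a exp -> bool exp
  | Add : int exp * int exp -> int exp

let rec eval : type a. a exp -> a = function
  | Val x -> x
  | Eq (e1, e2) -> eval e1 = eval e2
  | Add (e1, e2) -> eval e1 + eval e2

(* Comparisons can be nested at different types: *)
let ok = eval (Eq (Val true, Eq (Add (Val 1, Val 1), Val 2)))

(* Whereas this is rejected by the type checker; uncomment to see:
   let bad = Add (Val 1, Val true) *)
let () = assert ok
```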

Nice, isn’t it? Our question was: if we can so beautifully go from typed syntax (expressions) to typed semantics (OCaml values), can we do the converse, i.e., go from an OCaml value back to a (normal) expression of the same type? In other words, can we implement NbE in a typeful manner, using GADTs?

The answer is a proud “yes”, but more interestingly, we all learned some interesting GADT techniques and puzzling logical interpretations on the way. Read on to know more!

Tagless and Typeful Normalization by Evaluation using Generalized Algebraic Data Types

We present the first tagless and typeful implementation of normalization by evaluation for the simply typed λ-calculus in OCaml, using Generalized Algebraic Data Types (GADTs). In contrast to normalization by reduction, which is operationally carried out by repeated instances of one-step reduction, normalization by evaluation uses a non-standard model to internalize intermediate results: it is defined by composing a non-standard evaluation function with a reification function that maps the denotation of a term into the normal form of this term. So far, normalization by evaluation has been implemented either in dependently-typed languages such as Coq or Agda, or in general-purpose languages such as Scheme, ML or Haskell. In the latter case, denotations are either tagged or Church-encoded. Tagging means injecting denotations, either implicitly or explicitly, into a universal domain of values; Church encoding is based on the impredicativity of the metalanguage. Here, we show how to obtain not only tagless values, making evaluation efficient, but also typeful reification, guaranteeing type preservation and η-long, β-normal forms. To this end, we put OCaml’s GADTs to work. Furthermore, our implementation does not depend on any particular representation of binders (HOAS, de Bruijn levels or indices) nor on the style (direct style, continuation-passing style, etc.) of the non-standard evaluation function.

PS: My previous draft, Proofs, upside down was accepted for presentation at APLAS 2013. See you in Melbourne!

let rec tower_rec = function
  | [] -> 1.
  | x :: xs -> x ** tower_rec xs

let tower xs = tower_rec xs

written in “direct style”, and this equivalent, iterative version:

let rec tower_acc acc = function
  | [] -> acc
  | x :: xs -> tower_acc (x ** acc) xs

let tower xs = tower_acc 1. (List.rev xs)
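To give a feel for the first two transformations on this very example, here is my own back-of-the-envelope sketch (assuming the float version of `tower_rec`): CPS-transform it, then defunctionalize the continuation into a list, whose `apply` function is `tower_acc` in disguise and whose construction order explains the `List.rev`:

```ocaml
(* direct style *)
let rec tower_rec = function
  | [] -> 1.
  | x :: xs -> x ** tower_rec xs

(* step 1: CPS transformation *)
let rec tower_cps xs k = match xs with
  | [] -> k 1.
  | x :: xs -> tower_cps xs (fun r -> k (x ** r))

(* step 2: defunctionalize the continuation into a float list;
   applying it folds the exponentiations, and the list is built
   in reverse order of the input *)
let rec apply c r = match c with
  | [] -> r
  | x :: c -> apply c (x ** r)

let () = assert (tower_cps [2.; 3.] (fun r -> r) = tower_rec [2.; 3.])
let () = assert (apply (List.rev [2.; 3.]) 1. = tower_rec [2.; 3.])
```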

written in “accumulator-passing style”. And that relationship is the composition of CPS transformation, defunctionalization and reforestation, transformations we have all come to know and love!

I hope you enjoy it. Of course, any comment will be *much* appreciated, so don’t hesitate to drop a line below!


Proofs, upside down

A functional correspondence between natural deduction and the sequent calculus

It is well-known in proof theory that sequent calculus proofs differ from natural deduction proofs by “reversing” elimination rules upside down into left introduction rules. It is also well-known that to each recursive, functional program corresponds an equivalent iterative, accumulator-passing program, where the accumulator stores the continuation of the iteration, in “reversed” order. Here, we compose these remarks and show that a restriction of the intuitionistic sequent calculus, LJT, is exactly an accumulator-passing version of intuitionistic natural deduction NJ. More precisely, we obtain this correspondence by applying a series of off-the-shelf program transformations à la Danvy et al. on a type checker for the bidirectional λ-calculus, and get a type checker for the λ-calculus, the proof term assignment of LJT. This functional correspondence revisits the relationship between natural deduction and the sequent calculus by systematically deriving the rules of the latter from the former, and allows to derive new sequent calculus rules from the introduction and elimination rules of new logical connectives.

If your language lets you observe the physical equality of values (the `(==)` operator in OCaml), then you are actually working in a “weakly impure” language, and you can for example implement a limited form of `gensym`. What? `gensym` is this classic “innocuously effectful” function returning a different symbol each time it is called. Think of `malloc`: it will return a “fresh” pointer where to store your data. `malloc` and the garbage collector together ensure this freshness condition, and you can then compare two pointers with `(==)`. As a bonus, you can even store data along your fresh symbol.
In this post, I’ll exploit that simple idea to develop an assembler for a little stack machine close to that of OCaml.

In OCaml, something as simple as this is a `gensym`:

type 'a sym = C of 'a
let gensym x = C x

Each call to, say, `gensym ()` will allocate one new data block in memory; you can then compare two symbols with the physical equality `(==)`. What we care about here is not the content of that memory span, but its *address*, which is unique.
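A quick sanity check of that claim (my own snippet; `Sys.opaque_identity` just prevents an optimizing compiler from constant-folding the calls):

```ocaml
type 'a sym = C of 'a
let gensym x = C x

(* Two calls, two allocations: *)
let s1 = gensym (Sys.opaque_identity 0)
let s2 = gensym (Sys.opaque_identity 0)

let () = assert (not (s1 == s2))  (* distinct addresses... *)
let () = assert (s1 = s2)         (* ...but structurally equal *)
```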

A few warnings first: in OCaml, the constructor must have arguments, otherwise the compiler optimizes the representation to a simple integer and nothing is allocated. Also, don’t replace the argument `x` to `C` by a constant, say `()`, in the function code: if you do so, the compiler will place the value `C ()` in the data segment of the program, and calling `gensym` will not trigger an allocation either. There is an excellent and already classic series of blog posts about OCaml’s value representation here.

Another way of saying the same thing is that (non-cyclic) values in OCaml are not trees, as one might think when considering only the purely functional fragment, but DAGs, that is, trees with sharing.
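A tiny illustration with plain lists (my own example; again `Sys.opaque_identity` keeps the lists from becoming shared static constants):

```ocaml
let k  = [Sys.opaque_identity 1; 2]
let a  = 0 :: k                           (* a and b share the suffix k *)
let b  = 9 :: k
let b' = 9 :: [Sys.opaque_identity 1; 2]  (* a structurally equal copy *)

let () = assert (List.tl a == List.tl b)         (* shared: same address *)
let () = assert (not (List.tl b == List.tl b'))  (* copy: different address *)
let () = assert (List.tl b = List.tl b')         (* yet structurally equal *)
```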

I think that not many beginner/intermediate OCaml programmers realize the power of this, so I’d like to show a cool application of this remark. We will code a small compiler from an arithmetic language to a stack machine. Bear with me, it’s going to be fun!

The input language of expressions is:

type expr =
  | Int of int
  | Plus of expr * expr
  | If of expr * expr * expr

Its semantics should be clear, except for the fact that `If` behaves like in C: if its condition is different from 0, the first branch is taken; if it is 0, the second one is. Because we have these conditionals, the stack machine will need instructions to jump around in the code. The instructions of this stack machine are:

- `Push i` pushes `i` on the stack;
- `Add` pops two values off the stack and pushes their sum;
- `Halt` stops the machine, returning the (supposedly unique) stack value;
- `Branch o` skips the next `o` instructions in the code;
- `Branchif o` pops the top of the stack, skips the next `o` instructions *if* that value is not `0`, and has no further effect otherwise.

For instance, the expression *1 + (if 0 then 2 else (3+3))* is compiled into:

[Push 1; Push 0; Branchif 4; Push 3; Push 3; Add; Branch 1; Push 2; Add; Halt]

and evaluates of course to `7`. Notice how the two branches of the `If` are turned around in the code? First, we’ve got the code of expression *3+3*, then the code of *2*. In general, expression *if e1 then e2 else e3* will be compiled to [*c1*; `Branchif` (|*c3*|+1); *c3*; `Branch` |*c2*|; *c2*; …] where *ci* is the compiled code of *ei*, and |*l*| is the size of code *l*. But I’m getting ahead of myself.

Now, compiling an `expr` to a list of instructions in one pass would be a little bit messy, because we would have to compute these integer offsets for jumps. Let’s instead follow common practice and first compile expressions to an assembly language where some suffixes of the code have *labels*, which are the names referred to by the instructions `Branch` and `Branchif`. This assembly language `asm` will then be, well… assembled into actual `code`, where jumps are translated to integer offsets. But instead of generating label names by side effect as is customary, let’s use our trick: we will refer to a label by a unique *pointer* to the code attached to it. In other words, the arguments to `Branch` and `Branchif` will actually be pointers to `asm` programs, comparable with `(==)`.

To represent the `code` and `asm` data structures, we generalize over the notion of label:

type 'label instr =
  | Push of int
  | Add
  | Branchif of 'label
  | Branch of 'label
  | Halt

An assembly program is a list of instructions where labels are themselves assembly programs (the `-rectypes` option of OCaml is required here):

type asm = asm instr list
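If you would rather not enable `-rectypes`, a standard workaround (my addition, not from the post) is to tie the recursive knot through an ordinary variant, at the cost of writing a constructor at every label:

```ocaml
type 'label instr =
  | Push of int
  | Add
  | Branchif of 'label
  | Branch of 'label
  | Halt

(* No -rectypes needed: the recursion goes through the constructor Asm. *)
type asm = Asm of asm instr list

let example = Asm [Push 0; Branchif (Asm [Push 2; Halt]); Halt]

let () = match example with
  | Asm (Push 0 :: _) -> ()
  | _ -> assert false
```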

For instance, taking our previous example,

Plus (Int 1, If (Int 0, Int 2, Plus (Int 3, Int 3)))

is compiled to the (shared) value:

Push 1 :: Push 0 ::
  let k = [Add; Halt] in
  Branchif (Push 2 :: k) :: Push 3 :: Push 3 :: Add :: k

See how the suffix `k` (the continuation of the `If`) is shared between the `Branchif` target and the main branch? In call-by-value, this is a value: if you reduce it any further by inlining `k`, you get a different value, one that can be told apart from the first by using `(==)`. So don’t let OCaml’s pretty-printing of values fool you: this is not a tree, the sharing of `k` *is* important! What you get is the DAG of all possible execution traces of your program; they eventually all merge in one point, the code suffix `k = [Add; Halt]`.

The compilation function is relatively straightforward; it’s an accumulator-based function:

let rec compile e k = match e with
  | Int i -> Push i :: k
  | Plus (e1, e2) -> compile e1 (compile e2 (Add :: k))
  | If (e1, e2, e3) -> compile e1 (Branchif (compile e2 k) :: compile e3 k)

let compile e = compile e [Halt]

The sharing discussed above is realized here in the `If` case, by compiling the two branches against the same accumulator (continuation) `k`. Again, many people erroneously think of this as *duplicating* a piece of value. Actually, it only mentions twice a pointer to an already-allocated, unique piece of value; and since we can compare pointers, we have a way to know that they are the same. Note also that this compilation function is purely compositional: to each subexpression corresponds a contiguous span of assembly code.

Now, real code for our machine is simply a list of instructions where labels are represented by (positive) integers:

type code = int instr list

Why positive? Well, since we have no way to make a loop, code can be arranged such that all jumps are made *forward* in the code.

The assembly function took me a while to figure out. It “linearizes” the assembly, a DAG, into a list by traversing it depth-first. The tricky part is that we don’t want to repeat the common suffixes of the branches; that’s where we use the fact that they are at the same memory address, which we can check with `(==)`. If a piece of input code has already been assembled *n* instructions ahead in the output code, instead of repeating it we just emit a `Branch` *n*.

So practically, we must keep as an argument an association list `k`, mapping already-assembled suffixes of the input to the corresponding output instruction; think of it as a kind of “cache” for the function. It also doubles as the *result* of the process: it is what is eventually returned by `assemble`. For each input `is`, we first traverse the list `k` looking for the pointer `is`; if we find it, then we have our `Branch` instruction; otherwise, we assemble the next instruction. This first part of the job corresponds to the `assemble` function:

let rec assemble is k =
  try (is, Branch (List.index (fun (is', _) -> is == is') k)) :: k
  with Not_found -> assem is k

(`List.index p xs` returns the index of the first element `x` of `xs` such that `p x` is `true`; it is not part of the standard library.)
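Here is one possible definition matching that specification (my own helper, to be put in scope before `assemble`):

```ocaml
(* index p xs: position of the first element of xs satisfying p,
   raising Not_found if there is none *)
let index p xs =
  let rec go i = function
    | [] -> raise Not_found
    | x :: _ when p x -> i
    | _ :: rest -> go (i + 1) rest
  in
  go 0 xs

let () = assert (index (fun x -> x > 2) [1; 2; 3; 4] = 2)
```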

Now, the auxiliary function `assem` actually assembles instructions into a list of pairs of a source program and a target instruction:

and assem asm k = match asm with
  | (Push _ | Add | Halt as i) :: is -> (asm, i) :: assemble is k
  | Branchif is :: js ->
    let k = assemble is k in
    let k' = assemble js k in
    (asm, Branchif (List.length k' - List.length k)) :: k'
  | Branch _ :: _ -> assert false
  | [] -> k

Think of the arguments `asm` and `k` as one unique list `asm @ k` that is “open” for insertion in two places: at the top, as usual, and in the middle, between `asm` and `k`. The `k` part is the already-processed suffix, and `asm` is what remains to be processed. The first case inserts the non-branching instructions `Push`, `Add` and `Halt` at the top of the output (together with their corresponding assembly suffix, of course). The second one, `Branchif`, begins by inserting the branch `is` at the top, and then inserts the remainder `js` in front of it. Note that when assembling this remainder, we can discover sharing that was recorded in `k` when assembling the branch. Note also that there can’t be any `Branch` in the assembly, since it would not make much sense (everything after a `Branch` instruction would be dead code), hence the `assert false`.

Finally, we can strip off the “cached” information in the returned list, keeping only the target instructions:

let assemble is = snd (List.split (assemble is []))

That’s it, we have a complete compilation chain for our expression language! We can execute the target code on this machine:

let rec exec = function
  | s, Push i :: c -> exec (i :: s, c)
  | i :: j :: s, Add :: c -> exec (i + j :: s, c)
  | s, Branch n :: c -> exec (s, List.drop n c)
  | i :: s, Branchif n :: c -> exec (s, List.drop (if i <> 0 then n else 0) c)
  | [i], Halt :: _ -> i
  | _ -> failwith "error"

let exec c = exec ([], c)
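Like `List.index`, `List.drop` may be missing from your standard library; the intended behavior, dropping the first `n` elements, can be defined as follows (my own helper):

```ocaml
(* drop n xs: xs without its first n elements *)
let rec drop n = function
  | xs when n <= 0 -> xs
  | [] -> []
  | _ :: rest -> drop (n - 1) rest

let () = assert (drop 2 [1; 2; 3] = [3])
let () = assert (drop 5 [1; 2] = [])
```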

The idea of using labels that are actual pointers to the code seems quite natural, and it seems to scale well (I implemented a compiler from a mini-ML to a virtual machine close to OCaml’s bytecode). In terms of performance, however, `assemble` is quadratic: before assembling each instruction, we look up whether we already assembled it. When we have real (string) labels, we can represent the “cache” as a data structure with faster lookup; unfortunately, if labels are pointers, we can’t really do this, because we don’t have a total order on pointers, only the equality `(==)`.

This is only one example of how we can exploit pointer equality in OCaml to mimic a name generator. I’m sure there are lots of other applications to be discovered, or that I don’t know of (off the top of my head: representing variables in the λ-calculus). The big unknown for me is the nature of the language we’ve been working in: functional OCaml plus pointer equality. Can we still consider it a functional language? How do we reason about its programs? The comment section is right below!
