### Reverse natural deduction and get sequent calculus

#### by Matthias Puech

This is a follow-up on my previous post. It should be readable by itself if you just take a quick peek at *herds*.

Today, we are going to write type-checkers. And rewrite them. Again and again. Eventually, I’ll show that what looked like a mere programming hack in the last post is in fact the difference between two well-known, equivalent formulations of first-order logic: your good ol’ natural deduction and sequent calculus.

### Bi-directional type-checking

We shall here start by writing a type-checker for the usual simply typed lambda-calculus, natural deduction-style. Types are:

```ocaml
type tp =
  | Nat
  | Arr of tp * tp
```

Let us make a *huge* simplification right away, one that will probably make more than one reader jump out of their seat: our language will be *normal* (canonical). Yes, no redexes. What’s the use of checking well-typing (that is, termination) if you can’t write redexes? The quick answer is to look at any reference on the modern metatheory of LF (here for instance). The still-very-quick answer is that it is a simplification that will eventually need to be lifted, but is already useful: you can’t write redexes, but you can implement the substitution function as an *admissible* process. Having redexes in the syntax and eliminating them on one side, versus having no redexes but instead a function to compute their result on the other, is the difference between *cut-elimination* and *cut-admissibility*. In the first case, you will want to prove, as usual, that iterating the cut-elimination procedure terminates on well-typed terms; in the second, that the admissible cut procedure (sometimes called hereditary substitution) terminates given a well-typed term with a free variable and a substituend.

The normal syntax looks like this:

`M ::= λx. M | R` (canonical terms)

`R ::= x | R M` (atomic terms)

or in OCaml, adopting de Bruijn notation:

```ocaml
type m =
  | Lam of m
  | At of r
and r =
  | Var of int
  | App of r * m
```
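To see what this syntax can and cannot express, here are a couple of example terms (my own, not from the post). The crucial point is that the head of an `App` must be atomic, so a β-redex is unrepresentable by construction:

```ocaml
(* Normal-form syntax: the head of an application is always atomic,
   so a beta-redex (a Lam in head position) cannot be written. *)
type m = Lam of m | At of r
and r = Var of int | App of r * m

(* de Bruijn examples *)
let id = Lam (At (Var 0))                 (* fun x -> x *)
let f_id = At (App (Var 0, id))           (* f id, f being the nearest binder *)
(* (fun x -> x) y is not representable: App requires an atomic head *)
```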

One of the advantages of this restriction is that there is a correspondence between these two syntactic categories and the two judgments of type-checking (hence the name, bi-directional):

`Γ ⊢ M ⇐ T` (check that term `M` has type `T`)

`Γ ⊢ R ⇒ T` (infer type `T` for `R`)

or in OCaml:

```ocaml
val check : tp list -> m * tp -> unit
val infer : tp list -> r -> tp
```

Instead of showing the rules, I’ll give the code directly:

```ocaml
let rec check env : m * tp -> unit = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t ->
    if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env : r -> tp = function
  | Var i -> List.nth env i
  | App (a, n) ->
    (match infer env a with
     | Arr (t, u) -> check env (n, t); u
     | Nat -> failwith "too many arguments")
```

This should be pretty self-explanatory.
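To see it run, here is the checker packaged as a self-contained sketch, exercised on a made-up example (the test term is mine, not from the post):

```ocaml
type tp = Nat | Arr of tp * tp
type m = Lam of m | At of r
and r = Var of int | App of r * m

let rec check env : m * tp -> unit = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t -> if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env : r -> tp = function
  | Var i -> List.nth env i
  | App (a, n) ->
    (match infer env a with
     | Arr (t, u) -> check env (n, t); u
     | Nat -> failwith "too many arguments")

(* fun f -> fun x -> f x : (Nat -> Nat) -> Nat -> Nat *)
let () =
  check []
    (Lam (Lam (At (App (Var 1, At (Var 0))))),
     Arr (Arr (Nat, Nat), Arr (Nat, Nat)))
```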

### Reversing atomic terms

Let’s now have a closer look at the type of atomic terms, and recognize the exact structure of a `Herd.t` from last post: it is precisely an `(m, int) Herd.t`. Looking at function `infer`, it is a simple bottom-up traversal of that structure. So let us refactor our previous code as:

```ocaml
type m = Lam of m | At of r
and r = (m, int) Herd.t

let rec check env : m * tp -> unit = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t -> if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env : r -> tp =
  Herd.fold_right
    (fun n t ->
       match t with
       | Arr (t, u) -> check env (n, t); u
       | Nat -> failwith "too many arguments")
    (List.nth env)
```
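For readers who skipped the previous post, here is a minimal guess at the `Herd` interface used above: a snoc-like sequence with a distinguished element at the bottom, `t'` being its reversed form. This is my reconstruction from the unfolded code later in this post; the real module surely differs in details:

```ocaml
module Herd = struct
  (* ('a, 'b) t: a 'b at the bottom, then a snoc-list of 'a's on top --
     exactly the shape of atomic terms: Var of int | App of r * m *)
  type ('a, 'b) t = Nil of 'b | Snoc of ('a, 'b) t * 'a

  (* ('a, 'b) t': the reversed form -- the bottom element first,
     then the 'a's in left-to-right order *)
  type ('a, 'b) t' = 'b * 'a list

  (* not tail-recursive: the recursive call sits under f *)
  let rec fold_right f g = function
    | Nil b -> g b
    | Snoc (h, a) -> f a (fold_right f g h)

  (* tail-recursive fold over the reversed form *)
  let fold_left' f g ((b, l) : ('a, 'b) t') = List.fold_left f (g b) l

  (* reversing a herd, tail-recursively *)
  let rev h =
    let rec go acc = function
      | Nil b -> (b, acc)
      | Snoc (h, a) -> go (a :: acc) h
    in
    go [] h
end
```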

Function `check` doesn’t change, but `infer` is expressed as a `fold_right`. As we remarked last time, this function is not tail-recursive, but it is equivalent to a `fold_left'` on the reversed `Herd`. We’d better use the latter then, if we care about the size of our stack. We get the new, tail-recursive version:

```ocaml
type m = Lam of m | At of r
and r = (m, int) Herd.t'

let rec check env : m * tp -> unit = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t -> if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env : r -> tp =
  Herd.fold_left'
    (fun t n ->
       match t with
       | Arr (t, u) -> check env (n, t); u
       | Nat -> failwith "too many arguments")
    (List.nth env)
```

Here is our tail-recursive type-checker. What just happened here? Let’s unfold the `Herd` abstraction:

```ocaml
type m = Lam of m | At of r
and r = int * s
and s = Cons of m * s | Nil

let rec check env = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t -> if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env : r -> tp = function
  | i, l -> thread env (List.nth env i) l

and thread env t : s -> tp = function
  | Nil -> t
  | Cons (n, s) ->
    (match t with
     | Arr (t, u) -> check env (n, t); thread env u s
     | Nat -> failwith "too many arguments")
```
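Packaged with the declarations it needs and a small test of my own, the unfolded spine version runs like this:

```ocaml
type tp = Nat | Arr of tp * tp

(* spine form: a head variable together with its arguments, in order *)
type m = Lam of m | At of r
and r = int * s
and s = Cons of m * s | Nil

let rec check env = function
  | Lam n, Arr (t, u) -> check (t :: env) (n, u)
  | At a, t -> if infer env a = t then () else failwith "not the awaited type"
  | _ -> failwith "type mismatch"

and infer env ((i, l) : r) : tp = thread env (List.nth env i) l

and thread env t : s -> tp = function
  | Nil -> t
  | Cons (n, s) ->
    (match t with
     | Arr (t, u) -> check env (n, t); thread env u s
     | Nat -> failwith "too many arguments")

(* fun f -> fun x -> f x, with f's argument now living in a spine *)
let () =
  check []
    (Lam (Lam (At (1, Cons (At (0, Nil), Nil)))),
     Arr (Arr (Nat, Nat), Arr (Nat, Nat)))
```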

What is this language of terms?

It is a lambda calculus where application is n-ary: this way, you get direct access to the functional part of the application, and arguments appear in “natural” order (the nearest to the function on top). This trick is called the *spine calculus* by the Twelf people, and it was chosen as their term representation (read this) because it is more efficient when proof-searching or unifying. But we’ll give it another name. Let’s reconstitute the typing rules from the code of the checker. A new form of judgment appears with the function `thread` used to parse these n-ary applications, which we will write `Γ; T ⊢ S ⇒ U`. It has a distinguished formula on the left of the sequent, corresponding to the current functional’s type. We get:
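Reading the three functions back as inference rules, we obtain something like the following (my reconstruction from the code, with the spine judgment written `Γ; T ⊢ S ⇒ U`; the original notation may differ):

```latex
% Checking canonical terms:  Γ ⊢ M ⇐ T
\frac{\Gamma, T \vdash N \Leftarrow U}
     {\Gamma \vdash \lambda N \Leftarrow T \to U}
\qquad
\frac{\Gamma \vdash R \Rightarrow T}
     {\Gamma \vdash R \Leftarrow T}

% Inferring atomic terms:  Γ ⊢ R ⇒ T, with R = x · S a variable and its spine
\frac{\Gamma(x) = T \qquad \Gamma; T \vdash S \Rightarrow U}
     {\Gamma \vdash x \cdot S \Rightarrow U}

% Threading the spine:  Γ; T ⊢ S ⇒ U
\frac{}{\Gamma; T \vdash \mathsf{nil} \Rightarrow T}
\qquad
\frac{\Gamma \vdash N \Leftarrow T \qquad \Gamma; U \vdash S \Rightarrow V}
     {\Gamma; T \to U \vdash N \cdot S \Rightarrow V}
```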

This system (it is given a name in this paper) can be viewed as a restriction to normal forms of… a restriction of the usual sequent calculus: erase the terms from these rules, and you will recognize both rules for implication! The second restriction comes from the new judgment: it is not allowed, for instance, to use a right rule as the right premise of a left rule: once you focus on a premise, you have to treat it until the end (of the spine). This calculus is called LJT, and it nonetheless has the same expressive power as the traditional sequent calculus.

### Conclusion

So, rewriting the history of logic, I hope I convinced you that (intuitionistic) sequent calculus could have been invented by a hacker, as a tail-recursive version of natural deduction.

The same optimization extends to conjunctions, but unfortunately it fails to recover the disjunction rule of LJT. This should be addressed…

It also remains to show how this influences non-normal terms. Does it even make sense when we don’t have the nice correspondence between syntax and judgments? If we get past this difficulty, the next step would be to write an interpreter. What will the most convenient term representation be then? Reversed or not reversed?

A last remark: the same reversal trick can be done on canonical terms too (which are an `(a, m) Herd.t`), so as to see lambdas in an n-ary way. What kind of logic do we get? What do we optimize?

Thanks for reading!
