### LFMTP 2013 and logical frameworks

8th International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice (LFMTP 2013)

Robert Harper, Furio Honsell, and Gordon Plotkin. 1993. A framework for defining logics.

*J. ACM* 40, 1 (January 1993), 143–184.

I'm gratuitously attending ICFP 2013 (i.e., I'm not presenting or helping organize, though I was on the LFMTP PC).

This year's LFMTP celebrated the 20th anniversary of the publication of Harper, Honsell and Plotkin's "A framework for defining logics". I want to say a few words about that work and draw a line connecting it to currently exciting activities in the functional PL world, many of which were presented yesterday at ICFP.

Harper, Honsell and Plotkin introduced LF, which is a dependent type theory (essentially the same as what is often also called $\lambda$P, one of the corners of the lambda cube). Expressions are classified into kinds, types, and objects; kinds classify types; types classify objects. Types can depend on objects and kinds on types.

The insight underlying LF is that these features conspire to provide an extremely elegant and straightforward way of defining (a wide class of reasonable) logics. Types can be used to define the syntax of the language, using function types to represent variable binding, following the approach of higher-order abstract syntax, which was introduced earlier in work by Miller and Nadathur (though the basic idea goes back to Church). That is (using a concrete syntax typical of implementations), you can define the syntax of classical first-order logic like this:

```
prop : type.
exp : type.

s : exp -> exp.
z : exp.

and : prop -> prop -> prop.
forall : (exp -> prop) -> prop.
false : prop.
eq : exp -> exp -> prop.
```
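To illustrate the higher-order abstract syntax encoding concretely (these example definitions are mine, not part of the original signature): an object-level binder such as the ∀ in "∀x. x = x" is represented by applying `forall` to an LF function, so binding, α-conversion, and substitution are inherited from LF itself.

```
%% Illustrative definitions using the signature above.
one : exp = s z.

%% "forall x. x = x": the bound variable becomes an LF lambda [x],
%% so the object logic gets substitution for free from the framework.
refl_stmt : prop = forall ([x] eq x x).
```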

Since types can depend on terms, you can also define provability as a (dependent) type:

```
pf : prop -> type.

andI : pf A -> pf B -> pf (and A B).
andE1 : pf (and A B) -> pf A.
andE2 : pf (and A B) -> pf B.

allI : ({x:exp} pf (A x)) -> pf (forall A).
allE : pf (forall A) -> pf (A M).

falseE : pf false -> pf A.

eqI : pf (eq A A).
eqE : pf (eq A B) -> pf (P A) -> pf (P B).

...
```

These proofs correspond roughly to Prawitz's natural deduction style, but other styles (Hilbert, Gentzen) are possible too. Implementations, such as Twelf, can interpret these specifications as logic programs or even carry out metatheoretic reasoning about them (in fact, much of that reasoning can be codified explicitly as additional dependently typed constants).
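For instance (a sketch of my own, not from the post), a derived rule such as commutativity of conjunction can be written directly as a proof-term definition over the constants above, with the propositions A and B left implicit for the type checker to infer:

```
%% Commutativity of conjunction as a dependently typed definition;
%% the proof term swaps the two projections and re-pairs them.
and_comm : pf (and A B) -> pf (and B A)
         = [p : pf (and A B)] andI (andE2 p) (andE1 p).
```

Type checking this definition amounts to checking the proof, which is exactly the judgments-as-types methodology at work.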

It's hard to overstate how influential this work has been on programming languages and verification, and on my career path: for example, LF was one of the basic ingredients in Necula and Lee's proof-carrying code system. I was studying at Carnegie Mellon at the time this work was being done and was fortunate to get a paid summer project to work on proof compression for LF, which in turn probably helped my case when applying to graduate schools. Later, when I ran across Gabbay and Pitts' work on nominal techniques for abstract syntax, I recognized that it had some potential for overcoming some limitations of encodings in LF and pursued this for my PhD research. Though this development hasn't proved popular with the "mainstream" LF community, it has at least led some researchers to consider expressiveness issues. LF lives on in tools based on it, such as Twelf, Beluga and Delphin; techniques developed for LF have also influenced other tools such as Coq, (Nominal) Isabelle and Abella.

LFMTP included two invited talks, by Dale Miller (concerning proof certificates via focusing) and Furio Honsell (giving a perspective on logical frameworks generally and discussing a particular approach to making LF extensible that he and his colleagues have been working on). There were several more talks, three on recent developments in traditional LF-like settings and two concerning focusing and proof search.

At the same time, dependently typed programming seems to (continue to) be a popular topic at ICFP. There was one session devoted solely to dependent types, and another in which dependent typing or related encodings of program invariants in types played an important role, and I believe many papers are also based on formalizations in Twelf, Coq, Isabelle or other tools (though this hasn't been mentioned in many talks so far). I'll write separate posts about ICFP later (this one is getting too long).

Labels: logical frameworks, mechanized metatheory, trip report
