Wednesday, September 18, 2013

Expressive power of programming languages

On the expressive power of programming languages
Matthias Felleisen
Science of Computer Programming 17:1-3(35-75), Dec. 1991

On abstraction and the expressive power of programming languages
John C. Mitchell
Science of Computer Programming 21:2(141-163), Oct. 1993

It is a typical observation in an introductory programming language class that language constructs such as "let" are dispensable in a language with first-class lambda-abstraction, or that different forms of loops, such as while, for, or do-while in C/C++/Java, are interdefinable.  However, the formal content of these observations is not always made clear.
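For concreteness, here is a minimal sketch of both observations (OCaml is used here purely as a convenient vehicle, not the notation of either paper):

```ocaml
(* let-binding is definable from lambda-abstraction and application:
   "let x = e1 in e2" behaves like "(fun x -> e2) e1". *)
let with_let = let x = 1 + 2 in x * x        (* 9 *)
let with_lambda = (fun x -> x * x) (1 + 2)   (* also 9 *)

(* A while loop is definable by recursion over thunks. *)
let rec while_ (cond : unit -> bool) (body : unit -> unit) : unit =
  if cond () then begin body (); while_ cond body end

let () =
  let i = ref 0 in
  while_ (fun () -> !i < 3) (fun () -> incr i);
  assert (!i = 3)
```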

These two papers appeared in the early 90s and seem to describe independently-developed notions of expressiveness and "meaning-preserving translation" among programming languages.  Both notions seek to draw finer distinctions than Turing-completeness - that is, we want to identify a sense in which one Turing-complete language is "more expressive" than another, even though both languages can express the same computable functions over natural numbers (and by Gödel numbering, each can define an evaluator for the other).



Felleisen considers two notions of simulation.  Both start from two languages $L_1$ and $L_2$, each equipped with a termination predicate $eval$, where $L_1$ is a conservative extension of $L_2$.  We then say $L_2$ can express $L_1$ if there is a function from $L_1$ programs to $L_2$ programs that commutes with the constructs common to $L_1$ and $L_2$ and preserves termination behavior.  We say that $L_2$ can macro-express $L_1$ if the translation has a specific form: each construct $F(-,\ldots,-)$ present in $L_1$ but not $L_2$ is translated by replacing $F(e_1,\ldots,e_n)$ with a fixed $L_2$-context $E_F$ whose holes are filled by the (recursively translated) subexpressions $e_1,\ldots,e_n$.
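To make the "fixed context" restriction concrete, here is a minimal sketch of a macro translation over a toy abstract syntax (the construct names are illustrative, not from the paper): the extension construct `Or` is eliminated by wrapping its translated subexpressions in one fixed context, while the translation commutes with all the common constructs.

```ocaml
(* Core language plus one extension construct Or, eliminated by the
   fixed syntactic context  Or(e1, e2) ~> let t = e1 in if t then t else e2 *)
type expr =
  | Var of string
  | Bool of bool
  | If of expr * expr * expr
  | Let of string * expr * expr
  | Or of expr * expr                     (* the construct F missing from L2 *)

let rec translate (e : expr) : expr =
  match e with
  | Var _ | Bool _ -> e
  | If (a, b, c) -> If (translate a, translate b, translate c)
  | Let (x, a, b) -> Let (x, translate a, translate b)
  | Or (a, b) ->
      (* The fixed L2-context E_Or, its holes filled by the recursively
         translated subexpressions.  (We blithely assume "t" does not
         occur free in b; a careful treatment needs fresh names.) *)
      Let ("t", translate a, If (Var "t", Var "t", translate b))
```

The key restriction is that $E_F$ is fixed: the translation may wrap the subexpressions but not inspect them.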

One possible drawback of both notions is the need to frame one language as a conservative extension of the other: this makes it difficult to relate two different ways of implementing the same feature unless we can identify a single common language subsuming both.  A nice aspect of Felleisen's model is that it takes a fairly abstract view of the languages, although macro-expressiveness retreats from this position somewhat by requiring the translation to have a specific form (defined in terms of the contexts of the target language).  One consequence of this abstract view is that, while observational equivalence is introduced as a useful technique for proving expressiveness relationships, it is not integral to the definition of expressiveness itself.

Felleisen goes on to give examples, ranging from the classical desugaring of let-bindings and the interdefinability of various forms of loops to ML-style let-bound polymorphism, conditionals, control operators, and references in Scheme.
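The let-polymorphism case is a nice illustration of how the classical desugaring can fail in a typed setting; in OCaml:

```ocaml
(* Under ML let-polymorphism this typechecks: id generalizes to
   'a . 'a -> 'a and can be used at both int and bool. *)
let ok = let id = fun x -> x in (id 1, id true)

(* The "desugared" version is rejected: a lambda-bound variable stays
   monomorphic, so id cannot be used at two different types.

   let bad = (fun id -> (id 1, id true)) (fun x -> x)
*)
```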

Mitchell is motivated by a different problem: the main goal of his paper seems to be to demonstrate the issues that can arise from including features such as reflection in a language.  Allowing arbitrary reflection on code means that abstraction boundaries cannot necessarily be respected, which in turn means that reasoning we might perform about a source language may not remain valid after translation into the target language.  Mitchell's notion of expressiveness and reductions among languages is intended to capture this distinction, so that we do not consider a powerful language such as Lisp-with-reflection "more expressive" than a source language providing stronger abstractions, which Lisp-with-reflection fails to respect.
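To make this concrete, here is a sketch of the phenomenon (mine, not Mitchell's) in OCaml: the two modules below are indistinguishable to any client of the sealed interface, but a hypothetical reflective primitive that could inspect the underlying representation would tell them apart, so a translation into such a reflective language could not preserve their equivalence.

```ocaml
module type COUNTER = sig
  type t
  val zero : t
  val incr : t -> t
  val value : t -> int
end

(* Counts upward internally. *)
module Up : COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n + 1
  let value n = n
end

(* Counts downward internally; observably identical through COUNTER. *)
module Down : COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n - 1
  let value n = -n
end
```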

At a technical level, the definitions take an abstract view of a language $L = (E, Ct, O, \to)$ as a set of expressions $E$, a set of contexts $Ct$, a set of observables $O \subseteq E$, and a reduction relation $\to \subseteq E \times E$.  A function $C[e]$ mapping a context $C$ and expression $e$ to the expression obtained by "plugging" $e$ into $C$ is assumed.  Observational equivalence is defined as usual (writing $\to^*$ for the reflexive-transitive closure of $\to$):
\[e \equiv e' \iff \forall C \in Ct,\ o \in O.\ C[e] \to^* o \iff C[e'] \to^* o\]

An abstraction-preserving reduction $\theta$ from $L_1$ to $L_2$ maps  $L_1$ expressions to $L_2$ expressions and $L_1$ contexts to $L_2$ contexts.  It must be compositional modulo observational equivalence:
\[\theta(C[e]) \equiv \theta(C)[\theta(e)]\]
and it must map $L_1$-observationally equivalent expressions to $L_2$-observationally equivalent expressions (and vice versa):
\[e \equiv e' \iff \theta(e) \equiv \theta(e')\]

Note that this is an "if and only if"!  In particular, the left-to-right direction requires the translation to preserve equivalences: if the target language's contexts can make finer observations than the source's (as with reflection), observationally equivalent source terms may be mapped to distinguishable target terms, and no abstraction-preserving reduction exists.

Mitchell goes on to give the standard example of desugaring "let" bindings, then considers relating strong and weak sums (showing that weak sums cannot be simulated by strong sums without violating abstraction), and finally the aforementioned LISP example, showing that any language with a nontrivial abstraction mechanism cannot be simulated by a LISP-like language with reflection.
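Very roughly, the sums example can be glossed in OCaml module terms (my analogy, not Mitchell's formulation): a sealed module behaves like a weak (existential) sum, hiding its witness type, while an unsealed module behaves like a strong sum, exposing it to the context.

```ocaml
(* Sealing hides the witness type, as a weak sum does. *)
module Weak : sig type t val x : t val show : t -> string end = struct
  type t = int
  let x = 42
  let show = string_of_int
end
(* Weak.x + 1 would be a type error: Weak.t is abstract. *)

(* Without sealing the witness type is transparent, as with a strong sum,
   so the context can observe the representation. *)
module Strong = struct
  type t = int
  let x = 42
  let show = string_of_int
end
let _ = Strong.x + 1   (* typechecks: Strong.t is int *)
```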

Of the two papers, Felleisen's seems to have been more widely cited (particularly in subsequent work on control operators), and its notions of expressiveness and macro-expressiveness seem more immediate.  Mitchell's approach seems more abstract but also more difficult to use for establishing positive results, due to the need to reason explicitly about observational equivalence.  Perhaps advances in fully abstract models for various languages developed since then (using, e.g., game semantics) would help with this.  Mitchell also refers to work by Shapiro (which I haven't yet read) on comparing the expressiveness of concurrent languages, so perhaps there has been more work on these issues in the concurrency community.
