Tuesday, June 11, 2013

Run your research: on the effectiveness of lightweight mechanization

C. Klein, J. Clements, C. Dimoulas, C. Eastlund, M. Felleisen, M. Flatt, J. McCarthy, J. Rafkind, S. Tobin-Hochstadt, and R. Findler
POPL 2012

This paper gives an overview of the use of Redex, a domain-specific language embedded in Racket, as a lightweight framework for formalizing programming-language semantics and properties, with the goal of using automated/randomized testing to weed out shallow bugs.
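
To give a flavor of the workflow (this is my own toy sketch, not an example from the paper): one defines a grammar and metafunctions, states a property, and asks redex-check to attack it with randomly generated terms.

```racket
#lang racket
(require redex)

;; A toy language of addition expressions
(define-language L
  (e ::= (+ e e) number))

;; A metafunction computing the value of an expression
(define-metafunction L
  eval-e : e -> number
  [(eval-e number) number]
  [(eval-e (+ e_1 e_2)) ,(+ (term (eval-e e_1)) (term (eval-e e_2)))])

;; Generate random terms matching the pattern `e` and check that
;; eval-e always produces a number; any counterexample is reported
(redex-check L e (number? (term (eval-e e))))
```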

The idea of randomized testing for language properties has been around for a long time (the authors note Hanford's work in the 1970s, which I wasn't aware of before).  However, most interest over the last 10 years has been in full formalization and verification.  The problem with this is that the applicable verification tools (Isabelle, Coq, Twelf) have a high learning curve, and do not necessarily apply in certain circumstances, for example when trying to formalize the behavior of a given, unknown system (e.g. relaxed memory models).  Moreover, it can be unclear whether a verification attempt is unsuccessful because the system is wrong or because the proof techniques/representations being used are unsuitable, and solving the latter kind of problem can require new research orthogonal to the system being studied.  Failure to prove a result does not always give insight into counterexamples.  Finally, even having a complete formalization leaves open the possibility that the specification is wrong or that errors are introduced in the process of writing up the paper.

So there is strong motivation for complementary techniques that may not guarantee correctness but are applicable and helpful in the common case where the system is incorrect (or "almost" correct).  Other researchers have proposed systems that automate parts of the associated tasks: Ott maps concrete syntax to abstract syntax and generates LaTeX and Coq/Isabelle definitions, and the authors also mention αML, αProlog, K, and Ruler.  This paper focuses on Redex, which the authors have developed over nine years and which includes many convenient features, such as visualization of reduction steps and rendering of definitions as LaTeX.
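
As an illustration of the visualization support (again my own toy sketch, not the paper's example): given a small-step reduction relation, Redex's traces opens a window showing the reduction graph of a term.

```racket
#lang racket
(require redex)

(define-language L
  (e ::= (+ e e) number)
  (E ::= (+ E e) (+ number E) hole)) ; evaluation contexts

;; Left-to-right reduction of additions of two literal numbers
(define red
  (reduction-relation
   L
   (--> (in-hole E (+ number_1 number_2))
        (in-hole E ,(+ (term number_1) (term number_2)))
        "plus")))

;; Opens a GUI window showing all reduction sequences from this term
(traces red (term (+ (+ 1 2) (+ 3 4))))
```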

Wednesday, June 05, 2013

Macroscopes: Models for collective decision making

S. Ramamoorthy, A. Z. Salamon, and R. Santhanam
arXiv preprint arXiv:1204.3860, 2012

This paper is by colleagues in the University of Edinburgh School of Informatics.  I heard about some aspects of it at a recent workshop on computer-mediated social sense-making.

The paper describes some motivating applications where decisions need to be made by multiple agents with access to different subsets of the needed information.  Examples include electronic markets, sensor networks, and program committees (each PC member reviews a subset of the papers).

The paper reviews communication complexity, where (roughly) the idea is for multiple agents to compute some common function while minimizing the amount of communication needed.  Classical communication complexity focuses on uniform situations where $f : \{0,1\}^{kn} \to \{0,1\}$ and there are $k$ agents, each of which has access to a different contiguous block of $n$ bits (and, implicitly, each agent knows which bits the other agents hold, but not their values).
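
To make this concrete, here is a standard two-party example (mine, from the textbook literature, not the paper's): for the equality function $\mathrm{EQ}_n$, where Alice holds $x \in \{0,1\}^n$, Bob holds $y \in \{0,1\}^n$, and $\mathrm{EQ}_n(x,y) = 1$ iff $x = y$, the trivial protocol in which Alice sends all of $x$ and Bob announces the answer costs $n + 1$ bits, and a fooling-set argument over the $2^n$ pairs $(x,x)$ shows that no deterministic protocol can do better, so $D(\mathrm{EQ}_n) = n + 1$.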

In this paper the authors consider two variations of this situation: first, they allow arbitrary distributions of data to agents; second, they distinguish single-blind from double-blind settings.  Single-blind means that each agent knows what information the other agents have, but not its content, while double-blind means each agent knows only what information it itself has.  (These terms do not necessarily mean anything related to what they would mean in, say, the PC context.)