As part of the MetaConc project startup, the Software Languages Lab organized a Languages Implementers seminar to discuss concurrency and transactional memory for garbage collectors, tooling for complex concurrent systems, transactional task parallelism, and trace-based JIT compilation.

Talk titles and abstracts follow below.

Exploring Garbage Collection with Haswell Hardware Transactional Memory,
Richard Jones, University of Kent


Intel’s latest processor microarchitecture, Haswell, adds support for a restricted form of transactional memory to the x86 programming model. We explore how this can be applied to three garbage collection scenarios in Jikes RVM: parallel copying, concurrent copying, and bitmap marking. We demonstrate gains of 48–101% in concurrent copying speed over traditional synchronization mechanisms. We also show how similar but portable performance gains can be achieved through software transactional memory techniques. We identify the architectural overhead of capturing sufficient work for transactional execution as a major stumbling block to the effective use of transactions in the other scenarios.
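The "capturing sufficient work" problem can be illustrated with a toy model of concurrent bitmap marking. This is a hypothetical Python sketch, not the Jikes RVM code: a hardware transaction is stood in for by a lock, since the point is the cost model (how much marking work each transaction captures), not the HTM mechanism itself.

```python
import threading

# Toy model of concurrent bitmap marking in a garbage collector.
# A "transaction" is emulated with a lock; the point is how much
# work each transaction captures, not the HTM mechanism.

WORD_BITS = 64

def mark(bitmap, obj_id):
    # Set the mark bit for obj_id in a word-indexed bitmap.
    word, bit = divmod(obj_id, WORD_BITS)
    bitmap[word] |= 1 << bit

def mark_one_per_txn(bitmap, obj_ids, lock):
    # One "transaction" per mark: entry/exit overhead dominates.
    for oid in obj_ids:
        with lock:
            mark(bitmap, oid)

def mark_batched(bitmap, obj_ids, lock, batch=32):
    # Batch many marks per "transaction" to amortize the overhead --
    # the "capturing sufficient work" issue from the abstract.
    for i in range(0, len(obj_ids), batch):
        with lock:
            for oid in obj_ids[i:i + batch]:
                mark(bitmap, oid)
```

Both variants produce the same bitmap; batching lowers per-mark synchronization cost but coarsens abort granularity, since a conflict would roll back a whole batch rather than a single mark.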

Towards Meta-Level Engineering and Tooling for Complex Concurrent Systems,
Stefan Marr, Johannes Kepler University Linz


Today’s software utilizes the omnipresent parallel architectures in widely varying ways. To address all application requirements, each with the appropriate abstraction, developers mix and match various concurrency abstractions made available to them via libraries and frameworks. Unfortunately, today’s tools such as debuggers and profilers do not support the diversity of these abstractions.

To enable developers to reason about their programs on the higher level of the concurrency abstractions they used for the implementation, we investigate meta-level interfaces for tooling. The goal is to provide support for a wide range of abstractions and to find implementation techniques that allow for efficient debugging or profiling without interfering with a program's execution, which could, for instance, hide data races. In this presentation, we give a brief overview of this newly started research project and demonstrate first prototypes for debugging actor languages on top of the Truffle Language Framework.
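The idea of a meta-level interface for tooling can be sketched as follows. This is a hypothetical Python illustration, not the Truffle-based implementation: an actor exposes a hook that is invoked for every message receive, so a tool such as a debugger or profiler observes the program at the actor abstraction level rather than instrumenting threads and locks.

```python
import queue

# Hypothetical meta-level tooling hook for an actor runtime (an
# illustration of the idea, not the talk's Truffle implementation).

class Actor:
    def __init__(self, name, behavior, meta_handler=None):
        self.name = name
        self.behavior = behavior
        self.mailbox = queue.Queue()
        self.meta_handler = meta_handler  # tooling hook, e.g. a debugger

    def send(self, msg):
        self.mailbox.put(msg)

    def process_one(self):
        msg = self.mailbox.get()
        if self.meta_handler:
            # Tools observe (actor, message) pairs -- the abstraction
            # the developer programmed with -- not raw thread events.
            self.meta_handler(self, msg)
        self.behavior(self, msg)

# A trivial "tool": trace every message receive at the actor level.
trace = []

def message_tracer(actor, msg):
    trace.append((actor.name, msg))
```

A breakpoint tool would use the same hook but pause the actor's message processing instead of appending to a log; the runtime stays unchanged either way.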

Transactional Tasks: Parallelism in Software Transactions,
Janwillem Swalens, Vrije Universiteit Brussel


Many programming languages, such as Clojure, Scala, and Haskell, support different concurrency models. In practice these models are often combined; however, the semantics of the combinations are not always well defined. We studied the combination of futures and Software Transactional Memory. Currently, futures created within a transaction cannot access the transactional state safely, violating the serializability of the transactions and leading to unexpected behavior.

We define transactional tasks: a construct that allows futures to be created in transactions. Transactional tasks allow the parallelism in a transaction to be exploited, while providing safe access to the state of their encapsulating transaction. We show that transactional tasks have several useful properties: they are coordinated, they maintain serializability, and they do not introduce non-determinism. As such, transactional tasks combine futures and Software Transactional Memory, allowing the potential parallelism of a program to be fully exploited, while preserving the properties of the separate models where possible.
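The shape of the construct can be sketched in a toy model. This is a hypothetical Python sketch, not the paper's implementation: tasks forked inside a transaction share the transaction's private store, and the transaction joins all of its tasks before publishing its effects, so the tasks' writes stay inside the transaction until commit.

```python
import threading

# Toy sketch of "transactional tasks" (hypothetical API, not the
# paper's implementation): futures forked inside a transaction share
# the transaction's private store, and the transaction commits to the
# global store only after all of its tasks have joined.

store = {"counter": 0}  # committed, globally visible state

class Transaction:
    def __init__(self):
        self.local = dict(store)      # snapshot at transaction start
        self.lock = threading.Lock()  # guards the transactional store
        self.tasks = []

    def ref_get(self, key):
        with self.lock:
            return self.local[key]

    def ref_update(self, key, fn):
        # Atomic read-modify-write on the *transaction's* state.
        with self.lock:
            self.local[key] = fn(self.local[key])

    def fork(self, fn, *args):
        # A transactional task: runs in parallel, but its effects are
        # confined to this transaction until commit.
        t = threading.Thread(target=fn, args=(self,) + args)
        t.start()
        self.tasks.append(t)
        return t

    def commit(self):
        for t in self.tasks:      # join all tasks first...
            t.join()
        store.update(self.local)  # ...then publish the effects
```

Joining before commit is what keeps the tasks coordinated: either the whole transaction, tasks included, becomes visible, or none of it does.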

STRAF: A Scala Framework for Experiments in Trace-based JIT Compilation,
Maarten Vandercammen, Vrije Universiteit Brussel


We introduce STRAF, a Scala framework for recording and optimizing execution traces of an interpreter it is composed with. For interpreters that satisfy the requirements detailed in this paper, this composition requires but a small effort from the implementer to result in a trace-based JIT compiler. We describe the framework, and illustrate its composition with a Scheme interpreter that satisfies the aforementioned requirements. We benchmark the resulting trace-based JIT compiler on a set of Scheme programs. Finally, we implement an optimization to demonstrate that STRAF enables further experimentation in the domain.
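The general structure of a trace-based JIT can be illustrated with a toy interpreter. This is a hypothetical Python sketch, not STRAF's Scala API: when a loop header becomes hot, the interpreter records the operations of one iteration and turns that trace into a closure that replaces interpretation of subsequent iterations.

```python
# Toy sketch of trace-based JIT structure (hypothetical; not STRAF's
# Scala API): detect a hot loop, record one iteration's operations,
# and execute the recorded trace instead of re-interpreting.

HOT_THRESHOLD = 3

def execute(env, op):
    # Interpret a single operation of a tiny op language.
    if op[0] == "add":
        env[op[1]] += env[op[2]]
    elif op[0] == "inc":
        env[op[1]] += 1

def make_trace(ops):
    # "Compile" the trace: close over the recorded ops so subsequent
    # iterations skip dispatch through the recording machinery.
    def trace_fn(env):
        for op in ops:
            execute(env, op)
    return trace_fn

def run(n):
    # Sums 0 .. n-1 with a loop whose body is (acc += i; i += 1).
    env = {"i": 0, "acc": 0}
    loop_count = 0
    compiled = None
    while env["i"] < n:
        if compiled:
            compiled(env)  # execute the compiled trace
            continue
        loop_count += 1
        recording = loop_count == HOT_THRESHOLD  # loop became hot
        ops = []
        for op in (("add", "acc", "i"), ("inc", "i")):
            if recording:
                ops.append(op)  # record while interpreting
            execute(env, op)
        if recording:
            compiled = make_trace(ops)
    return env["acc"]
```

A real tracing JIT additionally inserts guards so the trace bails back to the interpreter when control flow diverges from the recorded path; the sketch omits guards since the toy loop has a single path.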