Computer Science Thesis Proposal

  • Remote Access - Zoom
  • Virtual Presentation - ET

Efficient and Scalable Parallel Functional Programming Through Disentanglement

Researchers have argued for decades that functional programming simplifies parallel programming, in particular by helping programmers avoid difficult concurrency bugs arising from destructive in-place updates. However, parallel functional programs continue to underperform in comparison to parallel programs written in lower-level languages. The difficulty is that functional languages place heavy demands on memory, and these demands only grow with parallelism, causing traditional parallel memory management techniques to buckle under the increased pressure.

In this thesis, we identify a memory property called disentanglement and develop automatic memory management techniques which exploit disentanglement for improved efficiency and scalability. Disentanglement has broad applicability: (1) it can be guaranteed by construction in functional programs; (2) it is implied by a classic correctness condition, determinacy-race-freedom; and (3) it allows for general reads and writes to memory, as long as concurrent threads do not acquire references to each other's allocated data. Additionally, disentanglement can be checked "on-the-fly" (i.e., dynamically during program execution) with low overhead. To exploit disentanglement for improved efficiency, we partition memory into a tree of heaps that mirrors the dynamic nesting of parallel tasks, which allows allocations and garbage collections to proceed independently and in parallel. We develop multiple garbage collection algorithms in this setting.
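
To make the disentanglement condition concrete, the following is a minimal sketch in Standard ML. It assumes a fork-join primitive such as MPL's ForkJoin.par, of type (unit -> 'a) * (unit -> 'b) -> 'a * 'b; the function names splitParity and leakFresh are hypothetical, chosen only for illustration. In splitParity, each parallel task reads the shared input and allocates only its own result, so neither task acquires a reference to the other's fresh allocations; the program is disentangled. In leakFresh, one task publishes a reference to data it has just allocated through a shared mutable cell that its sibling reads concurrently; this is exactly the kind of cross-task reference acquisition (and determinacy race) that disentanglement rules out.

  (* Disentangled: each task allocates privately; results are shared
     only at the join. *)
  fun splitParity (xs : int list) =
    ForkJoin.par
      (fn () => List.filter (fn x => x mod 2 = 0) xs,
       fn () => List.filter (fn x => x mod 2 <> 0) xs)

  (* Entangled: the first task writes a reference to its freshly allocated
     list into a shared cell, which the second task may read while both
     tasks are still running. *)
  fun leakFresh (xs : int list) =
    let
      val shared : int list ref = ref []
    in
      ForkJoin.par
        (fn () => shared := List.map (fn x => x + 1) xs,
         fn () => length (!shared));
      !shared
    end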

There are two significant implementation efforts presented in this thesis. The first is the MPL ("maple") compiler, which extends the Standard ML functional programming language with support for nested parallelism. In MPL, we implement all of our proposed memory management techniques based on disentanglement. The second is a parallel ML benchmark suite containing sophisticated parallel algorithms for a variety of problem domains, as well as a library of key parallel algorithms and data structures. All of this code is disentangled, which further evidences the broad applicability of our approach: many of the benchmarks were ported directly from state-of-the-art C/C++ benchmark suites. In empirical evaluations, we show that MPL outperforms prior implementations of parallel ML and is competitive with modern procedural and imperative languages. These results support our claim that, through disentanglement, parallel functional programming can be efficient and scalable.
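
As a small illustration of the programming model targeted by MPL and the benchmark suite, the sketch below is written in the same nested fork-join style, again assuming MPL's ForkJoin.par primitive; the program is a standard microbenchmark shape written here for illustration, not an excerpt from the suite.

  (* Naive parallel Fibonacci: each call forks its two recursive
     subcalls in parallel and joins their results. *)
  fun fib (n : int) : int =
    if n < 2 then n
    else
      let
        val (a, b) = ForkJoin.par (fn () => fib (n - 1), fn () => fib (n - 2))
      in
        a + b
      end

  val _ = print (Int.toString (fib 30) ^ "\n")

Assuming MPL's usual conventions, such a program would be compiled with the mpl compiler and run with a processor count supplied as a runtime argument (e.g., something like ./fib @mpl procs 4 --); in practice, a sequential cutoff would also be added to control task granularity.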

Thesis Committee:
Umut Acar (Chair)
Guy Blelloch
Jan Hoffmann
Matthew Fluet (Rochester Institute of Technology)
Alex Aiken (Stanford University)

Zoom Participation. See announcement.
