Publications


Featured research published by Leaf Petersen.


Symposium on Principles of Programming Languages | 2003

A type theory for memory allocation and data layout

Leaf Petersen; Robert Harper; Karl Crary; Frank Pfenning

Ordered type theory is an extension of linear type theory in which variables in the context may be neither dropped nor re-ordered. This restriction gives rise to a natural notion of adjacency. We show that a language based on ordered types can use this property to give an exact account of the layout of data in memory. The fuse constructor from ordered logic describes adjacency of values in memory, and the mobility modality describes pointers into the heap. We choose a particular allocation model based on a common implementation scheme for copying garbage collection and show how this permits us to separate out the allocation and initialization of memory locations in such a way as to account for optimizations such as the coalescing of multiple calls to the allocator.
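
The separation of allocation from initialization described above is concrete enough to sketch. Below is a minimal Haskell model, entirely our own toy rather than the paper's ordered type theory, of a bump-pointer heap in which a single reservation covers two adjacent pairs (the coalescing the abstract mentions); the names Heap, reserve, and initWord are hypothetical.

    import Data.IORef

    -- Hypothetical bump-pointer heap, standing in for the copying-GC
    -- allocation model the paper builds on.
    newtype Heap = Heap (IORef Int)

    -- Reservation: one pointer bump (and, in a real collector, a single
    -- heap-limit check) covers any number of words.
    reserve :: Heap -> Int -> IO Int
    reserve (Heap r) n = do
      p <- readIORef r
      writeIORef r (p + n)
      return p

    -- Initialization is a separate step, writing at a known offset
    -- from the reserved base address.
    initWord :: Int -> Int -> Int -> IO ()
    initWord base off v =
      putStrLn ("mem[" ++ show (base + off) ++ "] := " ++ show v)

    -- Coalesced allocation of two adjacent pairs: one reserve, four inits.
    allocTwoPairs :: Heap -> IO (Int, Int)
    allocTwoPairs h = do
      base <- reserve h 4
      initWord base 0 1
      initWord base 1 2   -- first pair occupies words 0 and 1
      initWord base 2 3
      initWord base 3 4   -- second pair is adjacent, at words 2 and 3
      return (base, base + 2)

    main :: IO ()
    main = do
      h <- Heap <$> newIORef 0
      _ <- allocTwoPairs h
      return ()

The point of the ordered types is to make the adjacency in allocTwoPairs a checkable property of the program rather than an unstated invariant.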


Symposium on Principles of Programming Languages | 2006

A verifiable SSA program representation for aggressive compiler optimization

Vijay Menon; Neal Glew; Brian R. Murphy; Andrew McCreight; Tatiana Shpeisman; Ali-Reza Adl-Tabatabai; Leaf Petersen

We present a verifiable low-level program representation to embed, propagate, and preserve safety information in high-performance compilers for safe languages such as Java and C#. Our representation precisely encodes safety information via static single-assignment (SSA) [11, 3] proof variables that are first-class constructs in the program. We argue that our representation allows a compiler to both (1) express aggressively optimized machine-independent code and (2) leverage existing compiler infrastructure to preserve safety information during optimization. We demonstrate that this approach supports standard compiler optimizations, requires minimal changes to the implementation of those optimizations, and does not artificially impede those optimizations to preserve safety. We also describe a simple type system that formalizes type safety in an SSA-style control-flow graph program representation. Through the types of proof variables, our system enables compositional verification of memory safety in optimized code. Finally, we discuss experiences integrating this representation into the machine-independent global optimizer of STARJIT, a high-performance just-in-time compiler that performs aggressive control-flow, data-flow, and algebraic optimizations and is competitive with top production systems.
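
The role of proof variables can be suggested with a small source-level analogy in Haskell (the paper works in a low-level SSA representation; this sketch is ours, not the paper's formalism): a bounds check produces an evidence value that the unchecked access consumes, so the safety fact flows through the program as ordinary data.

    -- Abstract evidence that an index is in bounds; only checkBounds
    -- can produce it.
    data InBounds = InBounds

    checkBounds :: [Int] -> Int -> Maybe InBounds
    checkBounds a i
      | 0 <= i && i < length a = Just InBounds
      | otherwise              = Nothing

    -- The unchecked access demands the evidence as an ordinary argument,
    -- so optimizations that preserve data flow preserve the safety proof.
    loadUnchecked :: [Int] -> Int -> InBounds -> Int
    loadUnchecked a i InBounds = a !! i   -- no bounds check here

    readOrZero :: [Int] -> Int -> Int
    readOrZero a i = case checkBounds a i of
      Just p  -> loadUnchecked a i p
      Nothing -> 0

Because the proof is first-class, a verifier can check optimized code compositionally: wherever loadUnchecked appears, an InBounds value must reach it.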


Types in Languages Design and Implementation | 2003

Typed compilation of recursive datatypes

Joseph C. Vanderwaart; Derek Dreyer; Leaf Petersen; Karl Crary; Robert Harper; Perry Cheng

Standard ML employs an opaque (or generative) semantics of datatypes, in which every datatype declaration produces a new type that is different from any other type, including other identically defined datatypes. A natural way of accounting for this is to consider datatypes to be abstract. When this interpretation is applied to type-preserving compilation, however, it has the unfortunate consequence that datatype constructors cannot be inlined, substantially increasing the run-time cost of constructor invocation compared to a traditional compiler. In this paper we examine two approaches to eliminating function call overhead from datatype constructors. First, we consider a transparent interpretation of datatypes that does away with generativity, altering the semantics of SML; and second, we propose an interpretation of datatype constructors as coercions, which have no run-time effect or cost and faithfully implement SML semantics.
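
Haskell's newtype mechanism gives a rough feel for the coercion interpretation, though the paper's setting is Standard ML and a type-preserving compiler's intermediate language, not source Haskell:

    import Data.Coerce (coerce)

    -- A generative name with a transparent representation: the
    -- constructor compiles to the identity, so applying it has no
    -- run-time cost, which is the property the paper wants for SML
    -- datatype constructors.
    newtype Age = Age Int

    mkAge :: Int -> Age
    mkAge = coerce

    unAge :: Age -> Int
    unAge = coerce

The coercion interpretation aims for exactly this combination: generativity at the type level with zero-cost constructors at run time.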


Types in Languages Design and Implementation | 2005

Strict bidirectional type checking

Adam Chlipala; Leaf Petersen; Robert Harper

Completely annotated lambda terms (such as those arrived at via the straightforward encodings of various types from System F) contain much redundant type information. Consequently, the completely annotated forms are almost never used in practice, since partially annotated forms can be defined which still allow syntax-directed type checking. An additional optimization used in some proof and type systems is to take advantage of the context of occurrence of terms to further elide type information using bidirectional type checking rules. While this technique is generally effective, we show that there exist bidirectional terms which exhibit asymptotic increases in the size of their type decorations when sequentialized into a named-form calculus (a common first step in compilation). In this paper, we introduce a refinement of the bidirectional type system based on strict logic which allows additional type decorations to be eliminated, and show that it is well-behaved under sequentialization.
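
The division of labor in bidirectional checking can be seen in a textbook-style checker (standard rules only; the paper's strict refinement goes beyond this sketch): lambdas are checked against a known type and so need no annotation, while applications synthesize types from their heads.

    data Ty = TInt | TArr Ty Ty deriving Eq
    data Tm = Var String | Lam String Tm | App Tm Tm | Lit Int | Ann Tm Ty

    type Ctx = [(String, Ty)]

    -- Synthesis ("infer") mode: the term determines its type.
    infer :: Ctx -> Tm -> Maybe Ty
    infer g (Var x)   = lookup x g
    infer _ (Lit _)   = Just TInt
    infer g (Ann e t) = if check g e t then Just t else Nothing
    infer g (App f a) = case infer g f of
      Just (TArr t1 t2) | check g a t1 -> Just t2
      _                                -> Nothing
    infer _ (Lam _ _) = Nothing   -- lambdas do not synthesize; annotate

    -- Checking mode: the expected type flows inward, eliding annotations.
    check :: Ctx -> Tm -> Ty -> Bool
    check g (Lam x b) (TArr t1 t2) = check ((x, t1) : g) b t2
    check g e t                    = infer g e == Just t

The pathology the paper identifies arises under sequentialization: naming intermediate results forces terms into synthesis position, where elided annotations must be reinstated, and this can grow the decorations asymptotically.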


Languages and Compilers for Parallel Computing | 2007

Pillar: A Parallel Implementation Language

Todd A. Anderson; Neal Glew; Peng Guo; Brian T. Lewis; Wei Liu; Zhanglin Liu; Leaf Petersen; Mohan Rajagopalan; James M. Stichnoth; Gansha Wu; Dan Zhang

As parallelism in microprocessors becomes mainstream, new programming languages and environments are emerging to meet the challenges of parallel programming. To support research on these languages, we are developing a low-level language infrastructure called Pillar (derived from Parallel Implementation Language). Although Pillar programs are intended to be automatically generated from source programs in each parallel language, Pillar programs can also be written by expert programmers. The language is defined as a small set of extensions to C. As a result, Pillar is familiar to C programmers, but more importantly, it is practical to reuse an existing optimizing compiler like gcc [1] or Open64 [2] to implement a Pillar compiler. Pillar's concurrency features include constructs for threading, synchronization, and explicit data-parallel operations. The threading constructs focus on creating new threads only when hardware resources are idle, and otherwise executing parallel work within existing threads, thus minimizing thread creation overhead. In addition to the usual synchronization constructs, Pillar includes transactional memory. Its sequential features include stack walking, second-class continuations, support for precise garbage collection, tail calls, and seamless integration of Pillar and legacy code. This paper describes the design and implementation of the Pillar software stack, including the language, compiler, runtime, and high-level converters (that translate high-level language programs into Pillar programs). It also reports on early experience with three high-level languages that target Pillar.
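
The thread-creation policy described above is simple enough to model. The sketch below is our own toy rendering of the policy in Haskell, not Pillar's actual API (Pillar itself is a set of C extensions): fork a new thread only while idle capacity remains, otherwise run the work inline.

    import Control.Concurrent

    -- spawnOrInline and the idle-worker counter are hypothetical names.
    spawnOrInline :: MVar Int -> IO () -> IO ()
    spawnOrInline idle task = do
      claimed <- modifyMVar idle (\n ->
        return (if n > 0 then (n - 1, True) else (n, False)))
      if claimed
        then () <$ forkIO (task >> modifyMVar_ idle (return . (+ 1)))
        else task   -- no idle hardware: run in the current thread,
                    -- paying no thread-creation overhead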


Haskell Symposium | 2013

The Intel Labs Haskell research compiler

Hai Liu; Neal Glew; Leaf Petersen; Todd A. Anderson

The Glasgow Haskell Compiler (GHC) is a well-supported optimizing compiler for the Haskell programming language, along with its own extensions to the language and libraries. Haskell's lazy semantics imposes a runtime model which is in general difficult to implement efficiently. GHC achieves good performance across a wide variety of programs via aggressive optimization taking advantage of the lack of side effects, and by targeting a carefully tuned virtual machine. The Intel Labs Haskell Research Compiler uses GHC as a frontend, but provides a new whole-program optimizing backend by compiling the GHC intermediate representation to a relatively generic functional language compilation platform. We found that GHC's external Core language was relatively easy to use, but reusing GHC's libraries and achieving full compatibility were harder. For certain classes of programs, our platform provides substantial performance benefits over GHC alone, performing 2x faster than GHC with the LLVM backend on selected modern performance-oriented benchmarks; for other classes of programs, the benefits of GHC's tuned virtual machine continue to outweigh the benefits of more aggressive whole-program optimization. Overall we achieve parity with GHC with the LLVM backend. In this paper, we describe our Haskell compiler stack, its implementation and optimization approach, and present benchmark results comparing it to GHC.


International Conference on Functional Programming | 2013

Automatic SIMD vectorization for Haskell

Leaf Petersen; Dominic A. Orchard; Neal Glew

Expressing algorithms using immutable arrays greatly simplifies the challenges of automatic SIMD vectorization, since several important classes of dependency violations cannot occur. The Haskell programming language provides libraries for programming with immutable arrays, and compiler support for optimizing them to eliminate the overhead of intermediate temporary arrays. We describe an implementation of automatic SIMD vectorization in a Haskell compiler which gives substantial vector speedups for a range of programs written in a natural programming style. We compare performance with that of programs compiled by the Glasgow Haskell Compiler.
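
The programming style in question looks like this (written against the standard vector package for illustration; the paper's compiler discovers the SIMD opportunity automatically rather than through any special library):

    import qualified Data.Vector.Unboxed as V

    -- Element-wise arithmetic over immutable arrays: no element depends
    -- on another, so a vectorizer may safely emit SIMD instructions,
    -- and fusion removes the intermediate array zipWith would imply.
    saxpy :: Double -> V.Vector Double -> V.Vector Double -> V.Vector Double
    saxpy a xs ys = V.zipWith (\x y -> a * x + y) xs ys

Immutability is doing the real work here: the dependency violations that make imperative loops hard to vectorize simply cannot be expressed.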


Compiler Construction | 2012

GC-safe interprocedural unboxing

Leaf Petersen; Neal Glew

Modern approaches to garbage collection (GC) require information about which variables and fields contain GC-managed pointers. Interprocedural flow analysis can be used to eliminate otherwise unnecessary heap-allocated objects (unboxing), but must maintain the necessary GC information. We define a core language which models compiler correctness with respect to the GC, and develop a correctness specification for interprocedural unboxing optimizations. We prove that any optimization which satisfies our specification will preserve GC safety properties and program semantics, and give a practical unboxing algorithm satisfying this specification.
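
A before-and-after picture of unboxing, in source-level Haskell terms (illustration only; the paper works over a core intermediate language and a formal GC-safety specification):

    -- Before: the integer flows through the heap in a box, and the GC
    -- must trace a managed pointer to reach it.
    data Box = Box Int

    sumBoxed :: [Box] -> Int
    sumBoxed = sum . map (\(Box n) -> n)

    -- After interprocedural unboxing: the box is removed everywhere the
    -- value flows. Crucially, the GC metadata must change in step, since
    -- the field is no longer a managed pointer; the paper's specification
    -- is precisely this consistency requirement.
    sumUnboxed :: [Int] -> Int
    sumUnboxed = sum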


Proceedings of the 2013 ACM SIGPLAN Workshop on Dependently-Typed Programming | 2013

A multivalued language with a dependent type system

Neal Glew; Tim Sweeney; Leaf Petersen

Type systems are used to eliminate certain classes of errors at compile time. One of the goals of type system research is to allow more classes of errors (such as array subscript errors) to be eliminated. Dependent type systems have played a key role in this effort, and much research has been done on them. In this paper, we describe a new dependently-typed functional programming language based on two key ideas. First, it makes no distinction between expressions, types, kinds, and sorts: everything is a term. The same integer values are used to compute with and to index types, such as specifying the length of an array. Second, the term language has a multivalued semantics: a term can evaluate to zero, one, multiple, or even an infinite number of values. Since types are characterised by their members, they are equivalent to terms whose possible values are the members of the type, and we exploit this to express type information in our language. In order to type check such terms, we give up on decidability. We consider this a good tradeoff to get an expressive language without the pain of some dependent type systems. This paper describes the core ideas of the language, gives an intuitive description of the semantics in terms of set theory, explains how to implement the language by restricting what programs are considered valid, and sketches the core of the type system.
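
A loose analogy, ours and not the paper's language, is Haskell's list monad: a multivalued term denotes the list of its possible values, and a type is then just a term enumerating its members.

    -- "Types" as terms denoting their sets of values.
    nat :: [Integer]
    nat = [0 ..]

    evenNat :: [Integer]
    evenNat = [n | n <- nat, even n]

    -- Membership in a type-as-term, valid only for ascending enumerations;
    -- in general such checks are undecidable, matching the tradeoff the
    -- paper accepts.
    hasType :: Integer -> [Integer] -> Bool
    hasType v ty = v `elem` takeWhile (<= v) ty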


European Conference on Computer Systems | 2007

Enabling scalability and performance in a large scale CMP environment

Bratin Saha; Ali-Reza Adl-Tabatabai; Anwar M. Ghuloum; Mohan Rajagopalan; Richard L. Hudson; Leaf Petersen; Vijay Menon; Brian R. Murphy; Tatiana Shpeisman; Eric Sprangle; Anwar Rohillah; Doug Carmean; Jesse Fang
