Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Neal Glew is active.

Publication


Featured research published by Neal Glew.


symposium on principles of programming languages | 1998

From system F to typed assembly language

J. Gregory Morrisett; David Walker; Karl Crary; Neal Glew

We motivate the design of a statically typed assembly language (TAL) and present a type-preserving translation from System F to TAL. The TAL we present is based on a conventional RISC assembly language, but its static type system provides support for enforcing high-level language abstractions, such as closures, tuples, and objects, as well as user-defined abstract data types. The type system ensures that well-typed programs cannot violate these abstractions. In addition, the typing constructs place almost no restrictions on low-level optimizations such as register allocation, instruction selection, or instruction scheduling. Our translation to TAL is specified as a sequence of type-preserving transformations, including CPS and closure conversion phases; type-correct source programs are mapped to type-correct assembly language. A key contribution is an approach to polymorphic closure conversion that is considerably simpler than previous work. The compiler and typed assembly language provide a fully automatic way to produce proof-carrying code, suitable for use in systems where untrusted and potentially malicious code must be checked for safety before execution.
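The translation above proceeds through a continuation-passing style (CPS) phase before reaching assembly. The following is a minimal illustration of the CPS idea in Python, not the paper's formal translation: every operation takes an explicit continuation and passes its result forward, so control flow becomes explicit data, as it is at the assembly level. All function names here are invented for illustration.

```python
# CPS-style primitives: instead of returning, each passes its result
# to an explicit continuation `k`.
def cps_add(x, y, k):
    return k(x + y)

def cps_mul(x, y, k):
    return k(x * y)

# The direct-style expression (2 + 3) * 4 becomes a chain of
# continuations: add first, feed the sum `s` into the multiply,
# and hand the product to the final continuation.
def program(k):
    return cps_add(2, 3, lambda s: cps_mul(s, 4, k))

# Running with the identity continuation yields the ordinary result.
result = program(lambda v: v)   # 20
```

In the paper's setting each such transformation is additionally type-preserving, so a well-typed source term produces a well-typed CPS term.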


Journal of Functional Programming | 2002

Stack-based typed assembly language

J. Gregory Morrisett; Karl Crary; Neal Glew; David Walker

This paper presents STAL, a variant of Typed Assembly Language with constructs and types to support a limited form of stack allocation. As with other statically-typed low-level languages, the type system of STAL ensures that a wide class of errors cannot occur at run time, and therefore the language can be adapted for use in certifying compilers where security is a concern. Like the Java Virtual Machine Language (JVML), STAL supports stack allocation of local variables and procedure activation records, but unlike the JVML, STAL does not presuppose fixed notions of procedures, exceptions, or calling conventions. Rather, compiler writers can choose encodings for these high-level constructs using the more primitive RISC-like mechanisms of STAL. Consequently, some important optimizations that are impossible to perform within the JVML, such as tail call elimination or callee-saves registers, can be easily expressed within STAL.


symposium on principles of programming languages | 1999

Type-safe linking and modular assembly language

Neal Glew; J. Gregory Morrisett

Linking is a low-level task that is usually vaguely specified, if at all, by language definitions. However, the security of web browsers and other extensible systems depends crucially upon a set of checks that must be performed at link time. Building upon the simple but elegant ideas of Cardelli, and module constructs from high-level languages, we present a formal model of typed object files and a set of inference rules that are sufficient to guarantee that type safety is preserved by the linking process. Whereas Cardelli's link calculus is built on top of the simply-typed lambda calculus, our object files are based upon typed assembly language so that we may model important low-level implementation issues. Furthermore, unlike Cardelli, we provide support for abstract types and higher-order type constructors, features critical for building extensible systems or modern programming languages such as ML.
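The core link-time check can be sketched very simply: each object file declares the types of the symbols it imports and exports, and linking succeeds only when every import is resolved by an export of the right type. The dictionary representation below is an illustrative toy, not the paper's formal system of typed object files.

```python
# Toy model of type-safe linking: an "object file" is a dict with
# `imports` and `exports`, each mapping symbol names to type strings.
def link(*objs):
    # Collect all exports, rejecting duplicate definitions.
    exports = {}
    for o in objs:
        for name, ty in o["exports"].items():
            if name in exports:
                raise TypeError(f"duplicate export: {name}")
            exports[name] = ty
    # Every import must be resolved by an export of the same type.
    for o in objs:
        for name, ty in o["imports"].items():
            if name not in exports:
                raise TypeError(f"unresolved import: {name}")
            if exports[name] != ty:
                raise TypeError(
                    f"type mismatch on {name}: {exports[name]} vs {ty}")
    return exports

provider = {"exports": {"f": "int -> int"}, "imports": {}}
client   = {"exports": {}, "imports": {"f": "int -> int"}}
```

Here `link(provider, client)` succeeds, while a client importing `f` at a different type is rejected at link time rather than failing at run time, which is the property the paper's inference rules guarantee for real typed object files.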


Lecture Notes in Computer Science | 1998

Stack-Based Typed Assembly Language

J. Gregory Morrisett; Karl Crary; Neal Glew; David Walker

In previous work, we presented a Typed Assembly Language (TAL). TAL is sufficiently expressive to serve as a target language for compilers of high-level languages such as ML. This work assumed such a compiler would perform a continuation-passing style transform and eliminate the control stack by heap-allocating activation records. However, most compilers are based on stack allocation. This paper presents STAL, an extension of TAL with stack constructs and stack types to support the stack allocation style. We show that STAL is sufficiently expressive to support languages such as Java, Pascal, and ML; constructs such as exceptions and displays; and optimizations such as tail call elimination and callee-saves registers. This paper also formalizes the typing connection between CPS-based compilation and stack-based compilation and illustrates how STAL can formally model calling conventions by specifying them as formal translations of source function types to STAL types.


Concurrency and Computation: Practice and Experience | 2005

The Open Runtime Platform: a flexible high‐performance managed runtime environment

Michal Cierniak; Marsha Eng; Neal Glew; Brian T. Lewis; James M. Stichnoth

The Open Runtime Platform (ORP) is a high‐performance managed runtime environment (MRTE) that features exact generational garbage collection, fast thread synchronization, and multiple coexisting just‐in‐time compilers (JITs). ORP was designed for flexibility in order to support experiments in dynamic compilation, garbage collection, synchronization, and other technologies. It can be built to run either Java or Common Language Infrastructure (CLI) applications, to run under the Windows or Linux operating systems, and to run on the IA‐32 or Itanium processor family (IPF) architectures. Achieving high performance in an MRTE presents many challenges, particularly when flexibility is a major goal. First, to enable the use of different garbage collectors and JITs, each component must be isolated from the rest of the environment through a well‐defined software interface. Without careful attention, this isolation could easily harm performance. Second, MRTEs have correctness and safety requirements that traditional languages such as C++ lack. These requirements, including null pointer checks, array bounds checks, and type checks, impose additional runtime overhead. Finally, the dynamic nature of MRTEs makes some traditional compiler optimizations, such as devirtualization of method calls, more difficult to implement or more limited in applicability. To get full performance, JITs and the core virtual machine (VM) must cooperate to reduce or eliminate (where possible) these MRTE‐specific overheads. In this paper, we describe the structure of ORP in detail, paying particular attention to how it supports flexibility while preserving high performance. We describe the interfaces between the garbage collector, the JIT, and the core VM; how these interfaces enable multiple garbage collectors and JITs without sacrificing performance; and how they allow the JIT and the core VM to reduce or eliminate MRTE‐specific performance issues.


international conference on functional programming | 1999

Type dispatch for named hierarchical types

Neal Glew

Type dispatch constructs are an important feature of many programming languages. Scheme has predicates for testing the runtime type of a value. Java has a class cast expression and a try statement for switching on an exception's class. Crucial to these mechanisms, in typed languages, is type refinement: the static type system will use type dispatch to refine types in successful branches. Considerable previous work has addressed type case constructs for structural type systems without subtyping, but these do not extend to named type systems with subtyping, as is common in object oriented languages. Previous work on type dispatch in named type systems with subtyping has not addressed its implementation formally. This paper describes a number of type dispatch constructs that share a common theme: class cast and class case constructs in object oriented languages, ML style exceptions, hierarchical extensible sums, and multimethods. I describe a unifying mechanism, tagging, that abstracts the operation of these constructs, and I formalise a small tagging language. After discussing how to implement the tagging language, I present a typed language without type dispatch primitives, and give a formal translation from the tagging language.
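The shared mechanism can be pictured as follows: tags form a named hierarchy (like an exception or class hierarchy), and dispatch asks whether a value's tag descends from the tag being tested. This Python sketch is an informal analogy to the paper's tagging language, with all class and tag names invented for illustration.

```python
# Tags form a named hierarchy via parent links, mirroring class or
# exception hierarchies in the source language.
class Tag:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def is_subtag_of(self, other):
        # Walk up the hierarchy looking for `other`.
        t = self
        while t is not None:
            if t is other:
                return True
            t = t.parent
        return False

def tagcase(value_tag, value, branches, default):
    """Run the first branch whose tag covers value_tag (cf. a class
    case or exception handler chain); in a typed setting, the value's
    type would be refined inside the matching branch."""
    for tag, handler in branches:
        if value_tag.is_subtag_of(tag):
            return handler(value)
    return default(value)

exn      = Tag("Exn")
io_error = Tag("IOError", parent=exn)   # IOError is a subtag of Exn
```

With this, `tagcase(io_error, e, [(exn, handle)], reraise)` selects `handle`, just as a handler for a superclass catches a subclass exception.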


symposium on principles of programming languages | 2006

A verifiable SSA program representation for aggressive compiler optimization

Vijay Menon; Neal Glew; Brian R. Murphy; Andrew McCreight; Tatiana Shpeisman; Ali-Reza Adl-Tabatabai; Leaf Petersen

We present a verifiable low-level program representation to embed, propagate, and preserve safety information in high-performance compilers for safe languages such as Java and C#. Our representation precisely encodes safety information via static single-assignment (SSA) [11, 3] proof variables that are first-class constructs in the program. We argue that our representation allows a compiler to both (1) express aggressively optimized machine-independent code and (2) leverage existing compiler infrastructure to preserve safety information during optimization. We demonstrate that this approach supports standard compiler optimizations, requires minimal changes to the implementation of those optimizations, and does not artificially impede those optimizations to preserve safety. We also describe a simple type system that formalizes type safety in an SSA-style control-flow graph program representation. Through the types of proof variables, our system enables compositional verification of memory safety in optimized code. Finally, we discuss experiences integrating this representation into the machine-independent global optimizer of STARJIT, a high-performance just-in-time compiler that performs aggressive control-flow, data-flow, and algebraic optimizations and is competitive with top production systems.
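The intuition behind proof variables can be conveyed with a small analogy: a safety check produces a token, and the unsafe operation demands that token, so an optimizer may hoist or share checks but cannot silently drop them. The encoding below is an illustrative analogy in Python, not the paper's SSA type system; all names are invented.

```python
# A "proof" that index i is in bounds for array a. In the paper's
# representation the analogue is a first-class SSA proof variable
# whose type records the established fact.
class InBounds:
    def __init__(self, a, i):
        self.a = a
        self.i = i

def bounds_check(a, i):
    """The only way to obtain an InBounds proof: perform the check."""
    if not 0 <= i < len(a):
        raise IndexError(i)
    return InBounds(a, i)

def load(a, proof):
    """The load demands a proof about this very array, so eliminating
    the check without justification would be a (type) error."""
    assert proof.a is a
    return a[proof.i]

arr = [10, 20, 30]
p = bounds_check(arr, 1)    # check once ...
value = load(arr, p)        # ... the proof licenses the access
```

Because the proof is an explicit value with ordinary data-flow, standard optimizations such as common-subexpression elimination can merge redundant checks while a verifier can still confirm every load is justified.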


conference on object-oriented programming systems, languages, and applications | 2000

An efficient class and object encoding

Neal Glew

An object encoding translates a language with object primitives to one without. Similarly, a class encoding translates classes into other primitives. Both are important theoretically for comparing the expressive power of languages and for transferring results from traditional languages to those with objects and classes. Both are also important foundations for the implementation of object-oriented languages as compilers typically include a phase that performs these translations. This paper describes a language with a primitive notion of classes and objects and presents an encoding of this language into one with records and functions. The encoding uses two techniques often used in compilers for single-inheritance class-based object-oriented languages: the self-application semantics and the method-table technique. To type the output of the encoding, the encoding uses a new formulation of self quantifiers that is more powerful than previous approaches.
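The two implementation techniques named above can be sketched with records and functions alone. In this minimal Python sketch (the `point` example and all names are invented for illustration), an object is a record pairing a shared method table with its fields, and the self-application semantics passes the whole object as the method's first argument:

```python
# Method-table technique: one table of closed functions shared by all
# instances, each taking the receiver explicitly (self-application).
def make_point(x):
    table = {
        "get":  lambda self: self["fields"]["x"],
        "bump": lambda self, d: make_point(self["fields"]["x"] + d),
    }
    return {"methods": table, "fields": {"x": x}}

def send(obj, name, *args):
    """Self-application semantics: look up the method in the object's
    table and apply it to the object itself plus the arguments."""
    return obj["methods"][name](obj, *args)

p = make_point(3)
moved = send(p, "bump", 4)   # returns a new point with x = 7
```

Typing `send` is exactly where the difficulty lies: the method expects a receiver of the object's own type, which is what the paper's new formulation of self quantifiers is designed to express.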


Electronic Notes in Theoretical Computer Science | 1999

Object Closure Conversion

Neal Glew

An integral part of implementing functional languages is closure conversion—the process of converting code with free variables into closed code and auxiliary data structures. Closure conversion has been extensively studied in this context, but also arises in languages with first-class objects. In fact, one variant of Java's inner classes is an example of objects that need to be closure converted, and the transformation for converting these inner classes into Java Virtual Machine classes is an example of closure conversion. This paper argues that a direct formulation of object closure conversion is interesting and gives further insight into general closure conversion. It presents a formal closure-conversion translation for a second-order object language and proves it correct. The translation and proof generalise to other object-oriented languages, and the paper gives some examples to support this statement. Finally, the paper discusses the well-known connection between function closures and single-method objects. This connection is formalised by showing that an encoding of functions into objects, object closure conversion, and various object encodings compose to give various closure-conversion translations for functions.
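Closure conversion itself can be shown in a few lines. In this Python sketch (the `adder` example is invented for illustration), a function with a free variable becomes a closed function taking an explicit environment, paired with that environment, which mirrors how an inner class capturing locals is turned into a top-level class holding those locals as fields:

```python
# Before conversion: `n` is free in the returned function's body.
def make_adder_open(n):
    return lambda x: x + n

# After conversion: the code is closed (no free variables) and the
# captured variables live in an explicit environment record.
def adder_code(env, x):
    return x + env["n"]

def make_adder_closed(n):
    return {"code": adder_code, "env": {"n": n}}

def apply_closure(clo, x):
    """Application unpacks the pair: pass the environment alongside
    the ordinary argument."""
    return clo["code"](clo["env"], x)
```

The paper's point is that the same move applies directly to objects: an object whose methods mention enclosing variables becomes a closed object plus an environment, and this formulation composes with function-to-object encodings to recover the familiar function case.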


languages and compilers for parallel computing | 2007

Pillar: A Parallel Implementation Language

Todd A. Anderson; Neal Glew; Peng Guo; Brian T. Lewis; Wei Liu; Zhanglin Liu; Leaf Petersen; Mohan Rajagopalan; James M. Stichnoth; Gansha Wu; Dan Zhang

As parallelism in microprocessors becomes mainstream, new programming languages and environments are emerging to meet the challenges of parallel programming. To support research on these languages, we are developing a low-level language infrastructure called Pillar (derived from Parallel Implementation Language). Although Pillar programs are intended to be automatically generated from source programs in each parallel language, Pillar programs can also be written by expert programmers. The language is defined as a small set of extensions to C. As a result, Pillar is familiar to C programmers, but more importantly, it is practical to reuse an existing optimizing compiler like gcc [1] or Open64 [2] to implement a Pillar compiler. Pillar's concurrency features include constructs for threading, synchronization, and explicit data-parallel operations. The threading constructs focus on creating new threads only when hardware resources are idle, and otherwise executing parallel work within existing threads, thus minimizing thread creation overhead. In addition to the usual synchronization constructs, Pillar includes transactional memory. Its sequential features include stack walking, second-class continuations, support for precise garbage collection, tail calls, and seamless integration of Pillar and legacy code. This paper describes the design and implementation of the Pillar software stack, including the language, compiler, runtime, and high-level converters (that translate high-level language programs into Pillar programs). It also reports on early experience with three high-level languages that target Pillar.

Collaboration


Dive into Neal Glew's collaborations.

Top Co-Authors

Karl Crary
Carnegie Mellon University

Jens Palsberg
University of California