Publication


Featured research published by Bowen Alpern.


Distributed Computing | 1986

Recognizing Safety and Liveness

Bowen Alpern; Fred B. Schneider

A formal characterization for safety properties and liveness properties is given in terms of the structure of the Büchi automaton that specifies the property. The characterizations permit a property to be decomposed into a safety property and a liveness property whose conjunction is the original. The characterizations also give insight into techniques required to prove a large class of safety and liveness properties.
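
For reference, the safety/liveness distinction that this characterization rests on can be sketched in set form. The notation below is ours, paraphrasing the standard formulation over infinite execution sequences rather than quoting the paper:

% A property P is a set of infinite executions: P \subseteq \Sigma^{\omega}.
\text{Safety: } \forall \sigma \in \Sigma^{\omega}:\ \sigma \notin P \Rightarrow \exists i \ge 0\ \forall \beta \in \Sigma^{\omega}:\ \sigma[0..i]\,\beta \notin P
\text{Liveness: } \forall \alpha \in \Sigma^{*}\ \exists \beta \in \Sigma^{\omega}:\ \alpha\beta \in P
\text{Decomposition: } P = P_{\text{safe}} \cap P_{\text{live}} \text{ for some safety property } P_{\text{safe}} \text{ and liveness property } P_{\text{live}}.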


symposium on principles of programming languages | 1988

Detecting equality of variables in programs

Bowen Alpern; Mark N. Wegman; F. K. Zadeck

This paper presents an algorithm for detecting when two computations produce equivalent values. The equivalence of programs, and hence the equivalence of values, is in general undecidable. Thus, the best one can hope to do is to give an efficient algorithm that detects a large subclass of all the possible equivalences in a program. Two variables are said to be equivalent at a point p if those variables contain the same values whenever control reaches p during any possible execution of the program. We will not examine all possible executions of the program. Instead, we will develop a static property called congruence. Congruence implies, but is not implied by, equivalence. Our approach is conservative in that any variables detected to be equivalent will in fact be equivalent, but not all equivalences are detected. Previous work has shown how to apply a technique called value numbering in basic blocks [CS70]. Value numbering is essentially symbolic execution on straight-line programs (basic blocks). Symbolic execution implies that two expressions are assumed to be equal only when they consist of the same functions and the corresponding arguments of these functions are equal. An expression DAG is associated with each assignment statement. A hashing algorithm assigns a unique integer, the value number, to each different expression tree. Two variables that are assigned the same integer are guaranteed to be equivalent. After the code …
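
To make the value-numbering starting point concrete, here is a small, self-contained Java toy (our own illustration of hash-based value numbering on a straight-line block, not the paper's algorithm for general control flow). Two variables get the same value number exactly when they are built from the same operator applied to congruent operands, so the analysis is conservative in the same sense the abstract describes.

import java.util.HashMap;
import java.util.Map;

// Toy hash-based value numbering for a straight-line block (illustrative only).
public class ValueNumbering {
    private final Map<String, Integer> numberOfKey = new HashMap<>(); // expression key -> value number
    private final Map<String, Integer> numberOfVar = new HashMap<>(); // variable -> value number
    private int next = 0;

    // Record "target = literal"; each distinct literal gets its own value number.
    public int define(String target, String literal) {
        int vn = numberOfKey.computeIfAbsent("lit:" + literal, k -> next++);
        numberOfVar.put(target, vn);
        return vn;
    }

    // Record "target = op(leftVar, rightVar)"; hashes the expression tree by operand value numbers.
    public int assign(String target, String op, String leftVar, String rightVar) {
        String key = op + "(" + numberOfVar.get(leftVar) + "," + numberOfVar.get(rightVar) + ")";
        int vn = numberOfKey.computeIfAbsent(key, k -> next++);
        numberOfVar.put(target, vn);
        return vn;
    }

    // Congruent variables are guaranteed equivalent; the converse need not hold.
    public boolean congruent(String a, String b) {
        return numberOfVar.get(a).equals(numberOfVar.get(b));
    }

    public static void main(String[] args) {
        ValueNumbering vn = new ValueNumbering();
        vn.define("x", "1");
        vn.define("y", "2");
        vn.assign("a", "+", "x", "y");
        vn.assign("b", "+", "y", "x");   // commutativity is not exploited, so b is not congruent to a
        vn.assign("c", "+", "x", "y");
        System.out.println(vn.congruent("a", "c")); // true: same operator, congruent operands
        System.out.println(vn.congruent("a", "b")); // false: conservative, like the paper's congruence
    }
}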


symposium on the theory of computing | 1987

A model for hierarchical memory

Alok Aggarwal; Bowen Alpern; Ashok K. Chandra; Marc Snir

In this paper we introduce the Hierarchical Memory Model (HMM) of computation. It is intended to model computers with multiple levels in the memory hierarchy. Access to memory location x is assumed to take time ⌈ log x ⌉. Tight lower and upper bounds are given in this model for the time complexity of searching, sorting, matrix multiplication and FFT. Efficient algorithms in this model utilize locality of reference by bringing data into fast memory and using them several times before returning them to slower memory. It is shown that the circuit simulation problem has inherently poor locality of reference. The results are extended to HMMs where memory access time is given by an arbitrary (nondecreasing) function. Tight upper and lower bounds are obtained for HMMs with polynomial memory access time; the algorithms for searching, FFT and matrix multiplication are shown to be optimal for arbitrary memory access time. On-line memory management algorithms for the HMM model are also considered. An algorithm that uses LRU policy at the successive “levels” of the memory hierarchy is shown to be optimal.
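
The access-cost assumption above can be illustrated with a short Java sketch (our illustration, not code from the paper; we assume the logarithm is base 2, as is usual for this model). It shows the locality effect the HMM algorithms exploit: repeatedly touching a datum deep in the hierarchy is far more expensive than copying it into fast memory once and reusing it there.

// Illustrative cost accounting for the Hierarchical Memory Model (HMM):
// accessing memory location x is charged ceil(log2 x) time units.
public class HmmCost {

    // ceil(log2 x) for x >= 1 (location 1 costs 0), mirroring the abstract's access charge.
    static long accessCost(long x) {
        if (x < 1) throw new IllegalArgumentException("addresses start at 1");
        return 64 - Long.numberOfLeadingZeros(x - 1);
    }

    public static void main(String[] args) {
        long far  = 1L << 30;  // a datum living deep in the hierarchy
        long near = 8;         // a slot in fast memory
        int reuses = 1000;

        long inPlace = (long) reuses * accessCost(far);     // touch it where it lives, every time
        long cached  = accessCost(far) + accessCost(near)   // copy it into fast memory once...
                     + (long) reuses * accessCost(near);    // ...then reuse it cheaply
        System.out.println("in place: " + inPlace + "  cached: " + cached);
    }
}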


conference on object-oriented programming systems, languages, and applications | 1999

Implementing jalapeño in Java

Bowen Alpern; Clement Richard Attanasio; Anthony Cocchi; Derek Lieber; Stephen Edwin Smith; Ton Ngo; John J. Barton; Susan Flynn Hummel; Janice C. Sheperd; Mark F. Mergen

Jalapeño is a virtual machine for Java™ servers written in Java. A running Java program involves four layers of functionality: the user code, the virtual machine, the operating system, and the hardware. By drawing the Java / non-Java boundary below the virtual machine rather than above it, Jalapeño reduces the boundary-crossing overhead and opens up more opportunities for optimization. To get Jalapeño started, a boot image of a working Jalapeño virtual machine is concocted and written to a file. Later, this file can be loaded into memory and executed. Because the boot image consists entirely of Java objects, it can be concocted by a Java program that runs in any JVM. This program uses reflection to convert the boot image into Jalapeño's object format. A special MAGIC class allows unsafe casts and direct access to the hardware. Methods of this class are recognized by Jalapeño's three compilers, which ignore their bytecodes and emit special-purpose machine code. User code will not be allowed to call MAGIC methods, so Java's integrity is preserved. A small non-Java program is used to start up a boot image and as an interface to the operating system. Java's programming features (object orientation, type safety, automatic memory management) greatly facilitated development of Jalapeño. However, we also discovered some of the language's limitations.
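
The reflection idea behind the boot-image step can be pictured with a deliberately hypothetical Java sketch. This is not Jalapeño's boot-image writer or object format; it only shows how an ordinary Java program, running on any JVM, can walk a graph of Java objects reflectively and assign each one a location in an image file.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch: lay out a graph of plain Java objects into an "image"
// by reflective traversal. Handles only simple user-defined objects.
public class BootImageSketch {
    private final Map<Object, Integer> addressOf = new IdentityHashMap<>();
    private int nextAddress = 0;

    // Assign obj (and everything it reaches) an image address; return obj's address.
    int write(Object obj) throws IllegalAccessException {
        Integer existing = addressOf.get(obj);
        if (existing != null) return existing;      // already laid out: share it
        int addr = nextAddress;
        addressOf.put(obj, addr);
        nextAddress += 16;                          // pretend every object occupies 16 bytes
        for (Field f : obj.getClass().getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) || f.getType().isPrimitive()) continue;
            f.setAccessible(true);
            Object value = f.get(obj);
            if (value != null) write(value);        // recursively lay out reachable objects
        }
        return addr;
    }
}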


Ibm Systems Journal | 2005

The Jikes research virtual machine project: building an open-source research community

Bowen Alpern; S. Augart; Stephen M. Blackburn; Maria A. Butrico; A. Cocchi; Pau-Chen Cheng; Julian Dolby; Stephen J. Fink; David Grove; Michael Hind; Kathryn S. McKinley; Mark F. Mergen; J. E. B. Moss; Ton Ngo; Vivek Sarkar

This paper describes the evolution of the Jikes™ Research Virtual Machine project from an IBM internal research project, called Jalapeño, into an open-source project. After summarizing the original goals of the project, we discuss the motivation for releasing it as an open-source project and the activities performed to ensure the success of the project. Throughout, we highlight the unique challenges of developing and maintaining an open-source project designed specifically to support a research community.


Algorithmica | 1993

The Uniform Memory Hierarchy Model of Computation

Bowen Alpern; Larry Carter; Ephraim Feig; Ted Selker

The Uniform Memory Hierarchy (UMH) model introduced in this paper captures performance-relevant aspects of the hierarchical nature of computer memory. It is used to quantify architectural requirements of several algorithms and to ratify the faster speeds achieved by tuned implementations that use improved data-movement strategies.

A sequential computer's memory is modeled as a sequence 〈M0, M1, ...〉 of increasingly large memory modules. Computation takes place in M0. Thus, M0 might model a computer's central processor, while M1 might be cache memory, M2 main memory, and so on. For each module Mu, a bus Bu connects it with the next larger module Mu+1. All buses may be active simultaneously. Data is transferred along a bus in fixed-size blocks. The size of these blocks, the time required to transfer a block, and the number of blocks that fit in a module are larger for modules farther from the processor. The UMH model is parametrized by the rate at which the block sizes increase and by the ratio of the block count to the block size. A third parameter, the transfer-cost (inverse bandwidth) function, determines the time to transfer blocks at the different levels of the hierarchy.

UMH analysis refines traditional methods of algorithm analysis by including the cost of data movement throughout the memory hierarchy. The communication efficiency of a program is a ratio measuring the portion of UMH running time during which M0 is active. An algorithm that can be implemented by a program whose communication efficiency is nonzero in the limit is said to be communication-efficient. The communication efficiency of a program depends on the parameters of the UMH model, most importantly on the transfer-cost function. A threshold function separates those transfer-cost functions for which an algorithm is communication-efficient from those that are too costly. Threshold functions for matrix transpose, standard matrix multiplication, and Fast Fourier Transform algorithms are established by exhibiting communication-efficient programs at the threshold and showing that more expensive transfer-cost functions are too costly.

A parallel computer can be modeled as a tree of memory modules with computation occurring at the leaves. Threshold functions are established for multiplication of N×N matrices using up to N² processors in a tree with constant branching factor.
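
Stated in symbols (our notation, paraphrasing the abstract's definition rather than the paper's), the communication efficiency of a program p on inputs of size n is the fraction of UMH running time during which M0 is computing, and p is communication-efficient when that fraction stays bounded away from zero:

\mathrm{eff}(p, n) = \frac{\text{time during which } M_0 \text{ is active}}{\text{total UMH running time of } p \text{ on size-} n \text{ inputs}}
\qquad
p \text{ is communication-efficient} \iff \liminf_{n \to \infty} \mathrm{eff}(p, n) > 0.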


ACM Transactions on Programming Languages and Systems | 1989

Verifying temporal properties without temporal logic

Bowen Alpern; Fred B. Schneider

An approach to proving temporal properties of concurrent programs that does not use temporal logic as an inference system is presented. The approach is based on using Büchi automata to specify properties. To show that a program satisfies a given property, proof obligations are derived from the Büchi automata specifying that property. These obligations are discharged by devising suitable invariant assertions and variant functions for the program. The approach is shown to be sound and relatively complete. A mutual exclusion protocol illustrates its application.


conference on object-oriented programming systems, languages, and applications | 2001

Efficient implementation of Java interfaces: Invokeinterface considered harmless

Bowen Alpern; Anthony Cocchi; Stephen J. Fink; David Grove

Single superclass inheritance enables simple and efficient table-driven virtual method dispatch. However, virtual method table dispatch does not handle multiple inheritance and interfaces. This complication has led to a widespread misimpression that interface method dispatch is inherently inefficient. This paper argues that with proper implementation techniques, Java interfaces need not be a source of significant performance degradation.
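
The dispatch problem can be made concrete with a toy Java sketch (ours, not the paper's mechanism): under single inheritance, a virtual call loads a fixed slot of a per-class table, whereas a naive interface call must first find the right entry by a keyed lookup. That extra search is the overhead the paper's techniques are designed to avoid.

import java.util.HashMap;
import java.util.Map;

// Toy dispatch tables (illustrative only): a virtual method table indexed by a
// fixed slot vs. a naive per-class interface table that must be searched per call.
public class DispatchSketch {
    interface Target { void invoke(); }

    static class ClassInfo {
        Target[] vtable;                                   // virtual dispatch: slot known at compile time
        Map<String, Target> itable = new HashMap<>();      // naive interface dispatch: keyed lookup

        Target virtualDispatch(int slot)     { return vtable[slot]; }       // one indexed load
        Target interfaceDispatch(String sig) { return itable.get(sig); }    // hash/search before the call
    }

    public static void main(String[] args) {
        ClassInfo c = new ClassInfo();
        Target hello = () -> System.out.println("hello");
        c.vtable = new Target[] { hello };
        c.itable.put("Greeter.greet()", hello);

        c.virtualDispatch(0).invoke();                     // cheap: slot 0 is fixed for this method
        c.interfaceDispatch("Greeter.greet()").invoke();   // costlier without better techniques
    }
}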


Proceedings of Workshop on Programming Models for Massively Parallel Computers | 1993

Modeling parallel computers as memory hierarchies

Bowen Alpern; Larry Carter; Jeanne Ferrante

A parameterized generic model that captures the features of diverse computer architectures would facilitate the development of portable programs. Specific models appropriate to particular computers are obtained by specifying parameters of the generic model. A generic model should be simple, and for each machine that it is intended to represent, it should have a reasonably accurate specific model. The Parallel Memory Hierarchy (PMH) model of computation uses a single mechanism to model the costs of both interprocessor communication and memory hierarchy traffic. A computer is modeled as a tree of memory modules with processors at the leaves. All data movement takes the form of block transfers between children and their parents. The paper assesses the strengths and weaknesses of the PMH model as a generic model.
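
A minimal data-structure sketch of the PMH view described above (parameter names are ours, chosen for illustration): a computer is a tree of memory modules, processors sit at the leaves, and every data movement is a block transfer between a child and its parent.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Parallel Memory Hierarchy (PMH) model's shape:
// a tree of memory modules, processors at the leaves, and all traffic counted
// as parent<->child block transfers. Parameter names are invented for illustration.
public class PmhModule {
    final String name;
    final long blockSize;        // size of a block transferred to/from the parent
    final long blockCount;       // how many blocks this module can hold
    final double transferTime;   // time to move one block across the bus to the parent
    final List<PmhModule> children = new ArrayList<>();

    PmhModule(String name, long blockSize, long blockCount, double transferTime) {
        this.name = name;
        this.blockSize = blockSize;
        this.blockCount = blockCount;
        this.transferTime = transferTime;
    }

    boolean isProcessor() { return children.isEmpty(); }   // computation happens only at the leaves

    PmhModule addChild(PmhModule child) { children.add(child); return child; }

    // Cost of moving `bytes` between this module and one child, counted in whole
    // blocks, following the model's block-transfer rule.
    double transferCost(long bytes) {
        long blocks = (bytes + blockSize - 1) / blockSize;
        return blocks * transferTime;
    }
}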


international parallel and distributed processing symposium | 2001

A perturbation-free replay platform for cross-optimized multithreaded applications

Jong-Deok Choi; Bowen Alpern; Ton Ngo; Manu Sridharan; John Vlissides

Development of multithreaded applications is particularly tricky because of their non-deterministic execution behaviors. Tools that support the debugging and performance tuning of such applications are needed. Key to the construction of such tools is the ability to repeat the nondeterministic execution behavior of a multithreaded application. A clean separation between the application and the system that runs it facilitates supporting that ability. This paper presents a platform for constructing such tools in a context in which any separation between the application and the underlying system (and between both and the platform's own instrumentation code) has been obscured. DejaVu supports deterministic replay of nondeterministic executions of multithreaded Java programs on the Jalapeño virtual machine (running on a uniprocessor). Jalapeño is written in Java and its optimizing compiler regularly integrates application, virtual machine, and DejaVu instrumentation code into unified machine-code sequences. DejaVu ensures deterministic replay through symmetric instrumentation (side-effect-identical instrumentation in both record and replay modes) and remote reflection, which exposes the state of an application without perturbing it.
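
The "symmetric instrumentation" idea can be pictured with a small, hypothetical Java sketch (not DejaVu's code): the instrumentation executed in record mode and in replay mode performs the same side effects, here advancing a logical clock, so the two runs stay aligned; only the source of nondeterministic values differs between the modes.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of symmetric record/replay instrumentation: both modes
// advance the same logical clock (identical side effects), but record mode logs
// each nondeterministic value while replay mode feeds the log back in.
public class ReplaySketch {
    enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log;
    private long logicalClock = 0;                 // identical bookkeeping in both modes

    ReplaySketch(Mode mode, Deque<Long> log) { this.mode = mode; this.log = log; }

    // Wrap a nondeterministic observation (e.g. a timestamp or a scheduling decision).
    long observe(long liveValue) {
        logicalClock++;                            // symmetric side effect
        if (mode == Mode.RECORD) { log.addLast(liveValue); return liveValue; }
        return log.removeFirst();                  // replay the recorded value deterministically
    }

    public static void main(String[] args) {
        Deque<Long> log = new ArrayDeque<>();
        ReplaySketch rec = new ReplaySketch(Mode.RECORD, log);
        long t1 = rec.observe(System.nanoTime());

        ReplaySketch rep = new ReplaySketch(Mode.REPLAY, log);
        long t2 = rep.observe(System.nanoTime());  // returns the recorded t1, not a new reading
        System.out.println(t1 == t2);              // true
    }
}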

