Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laurie J. Hendren is active.

Publication


Featured research published by Laurie J. Hendren.


conference of the centre for advanced studies on collaborative research | 2010

Soot: a Java bytecode optimization framework

Raja Vallée-Rai; Phong Co; Etienne M. Gagnon; Laurie J. Hendren; Patrick Lam; Vijay Sundaresan

This paper presents Soot, a framework for optimizing Java bytecode. The framework is implemented in Java and supports three intermediate representations for representing Java bytecode: Baf, a streamlined representation of bytecode which is simple to manipulate; Jimple, a typed 3-address intermediate representation suitable for optimization; and Grimp, an aggregated version of Jimple suitable for decompilation. We describe the motivation for each representation, and the salient points in translating from one representation to another. In order to demonstrate the usefulness of the framework, we have implemented intraprocedural and whole program optimizations. To show that whole program bytecode optimization can give performance improvements, we provide experimental results for 12 large benchmarks, including 8 SPECjvm98 benchmarks running on JDK 1.2 for GNU/Linux. These results show up to 8% improvement when the optimized bytecode is run using the interpreter and up to 21% when run using the JIT compiler.
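For orientation, the short driver below shows roughly how the framework is used to obtain Jimple: it loads a class and prints each concrete method body in the typed 3-address form described above. This is a hedged sketch rather than the paper's own tooling; the class name HelloWorld is a placeholder, and the option calls shown follow recent Soot releases, so details may differ across versions.

    import soot.Body;
    import soot.Scene;
    import soot.SootClass;
    import soot.SootMethod;
    import soot.options.Options;

    // Minimal sketch: load a class with Soot and print each concrete method body
    // in Jimple, the typed 3-address representation described above.
    // "HelloWorld" is a placeholder class name; options vary between Soot versions.
    public class PrintJimple {
        public static void main(String[] args) {
            Options.v().set_prepend_classpath(true);   // also use the JVM's classpath
            Options.v().set_allow_phantom_refs(true);  // tolerate missing library classes
            SootClass c = Scene.v().loadClassAndSupport("HelloWorld");
            c.setApplicationClass();
            Scene.v().loadNecessaryClasses();

            for (SootMethod m : c.getMethods()) {
                if (!m.isConcrete()) continue;
                Body b = m.retrieveActiveBody();       // bodies are built in Jimple by default
                System.out.println(b);                 // textual Jimple for the method
            }
        }
    }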


programming language design and implementation | 1994

Context-sensitive interprocedural points-to analysis in the presence of function pointers

Maryam Emami; Rakesh Ghiya; Laurie J. Hendren

This paper reports on the design, implementation, and empirical results of a new method for dealing with the aliasing problem in C. The method is based on approximating the points-to relationships between accessible stack locations, and can be used to generate alias pairs, or used directly for other analyses and transformations. Our method provides context-sensitive interprocedural information based on analysis over invocation graphs that capture all calling contexts including recursive and mutually-recursive calling contexts. Furthermore, the method smoothly integrates the handling of general function pointers in C. We illustrate the effectiveness of the method with empirical results from an implementation in the McCAT optimizing/parallelizing C compiler.
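The paper's analysis targets stack locations and function pointers in C; the small Java sketch below is only meant to illustrate, with an assumed identity-style method, why context sensitivity matters for points-to precision. It is not the invocation-graph method described in the abstract.

    // Hedged illustration (in Java rather than C) of why calling context matters:
    // a context-INsensitive analysis merges both calls to id(), so p and q appear
    // to alias; analyzing each call site in its own context keeps them apart.
    class ContextSensitivityDemo {
        static Object id(Object x) { return x; }

        public static void main(String[] args) {
            Object a = new Object();   // allocation site A
            Object b = new Object();   // allocation site B
            Object p = id(a);          // call site 1: context-sensitively, p points to {A}
            Object q = id(b);          // call site 2: context-sensitively, q points to {B}
            // Merging the two contexts gives x -> {A, B}, hence p -> {A, B} and
            // q -> {A, B}: p and q would be conservatively reported as possible aliases.
            System.out.println(p != q);  // true at run time; the question is what a static analysis can prove
        }
    }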


conference on object-oriented programming systems, languages, and applications | 2005

Adding trace matching with free variables to AspectJ

Chris Allan; Pavel Avgustinov; Aske Simon Christensen; Laurie J. Hendren; Sascha Kuzins; Ondrej Lhoták; Oege de Moor; Damien Sereni; Ganesh Sittampalam; Julian Tibble

An aspect observes the execution of a base program; when certain actions occur, the aspect runs some extra code of its own. In the AspectJ language, the observations that an aspect can make are confined to the current action: it is not possible to directly observe the history of a computation. Recently, there have been several interesting proposals for new history-based language features, most notably by Douence et al. and by Walker and Viggers. In this paper, we present a new history-based language feature called tracematches that enables the programmer to trigger the execution of extra code by specifying a regular pattern of events in a computation trace. We have fully designed and implemented tracematches as a seamless extension of AspectJ. A key innovation in our tracematch approach is the introduction of free variables in the matching patterns. This enhancement enables a whole new class of applications in which events can be matched not only by the event kind, but also by the values associated with the free variables. We provide several examples of applications enabled by this feature. After introducing and motivating the idea of tracematches via examples, we present a detailed semantics of our language design, and we derive an implementation from that semantics. The implementation has been realised as an extension of the abc compiler for AspectJ.
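As a concrete flavour of what tracematches automate, the plain-Java sketch below hand-codes the monitoring for one assumed pattern: two calls to next() on the same iterator with no intervening hasNext(), the iterator playing the role of the free variable. A tracematch expresses the same thing declaratively (symbols bound to pointcuts plus a regular pattern over them); this sketch shows only the matching semantics, not abc's generated code or the actual tracematch syntax.

    import java.util.Iterator;
    import java.util.Map;
    import java.util.WeakHashMap;

    // Hand-rolled sketch of the per-object monitoring a tracematch would generate
    // for the pattern "next next" (two next() calls with no hasNext() in between),
    // with the iterator as the free variable. In a real tracematch this bookkeeping
    // is produced by the compiler from a declarative pattern.
    class NextNextMonitor {
        // true = a next() has been seen since the last hasNext() for this iterator
        private static final Map<Iterator<?>, Boolean> state = new WeakHashMap<>();

        static void onHasNext(Iterator<?> i) {
            state.put(i, Boolean.FALSE);            // reset: the pattern cannot complete here
        }

        static void onNext(Iterator<?> i) {
            if (Boolean.TRUE.equals(state.get(i))) {
                // pattern "next next" matched for this particular iterator binding
                System.err.println("warning: next() called twice without hasNext()");
            }
            state.put(i, Boolean.TRUE);
        }
    }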


compiler construction | 2003

Scaling Java points-to analysis using SPARK

Ondrej Lhoták; Laurie J. Hendren

Most points-to analysis research has been done on different systems by different groups, making it difficult to compare results, and to understand interactions between individual factors each group studied. Furthermore, points-to analysis for Java has been studied much less thoroughly than for C, and the tradeoffs appear very different. We introduce SPARK, a flexible framework for experimenting with points-to analyses for Java. SPARK supports equality- and subset-based analyses, variations in field sensitivity, respect for declared types, variations in call graph construction, off-line simplification, and several solving algorithms. SPARK is composed of building blocks on which new analyses can be based. We demonstrate SPARK in a substantial study of factors affecting precision and efficiency of subset-based points-to analyses, including interactions between these factors. Our results show that SPARK is not only flexible and modular, but also offers superior time/space performance when compared to other points-to analysis implementations.
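For background on the term "subset-based", the sketch below is a deliberately tiny Andersen-style solver handling only allocation and copy statements. It is a textbook simplification over assumed inputs, not SPARK's representation or algorithms, which additionally handle fields, declared types, call-graph construction, and several solver variants.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Tiny subset-based (Andersen-style) points-to sketch over two statement kinds:
    //   x = new A()   (allocation: the allocation site flows into x)
    //   x = y         (copy: pt(y) must be a subset of pt(x))
    // A textbook simplification, not SPARK's implementation.
    class TinyPointsTo {
        final Map<String, Set<String>> pointsTo = new HashMap<>();   // variable -> allocation sites
        final Map<String, Set<String>> copyEdges = new HashMap<>();  // y -> {x | x = y}

        void alloc(String x, String site) {
            pointsTo.computeIfAbsent(x, k -> new HashSet<>()).add(site);
        }

        void copy(String x, String y) {  // x = y
            copyEdges.computeIfAbsent(y, k -> new HashSet<>()).add(x);
        }

        void solve() {
            Deque<String> worklist = new ArrayDeque<>(pointsTo.keySet());
            while (!worklist.isEmpty()) {
                String y = worklist.poll();
                Set<String> pts = pointsTo.getOrDefault(y, Set.of());
                for (String x : copyEdges.getOrDefault(y, Set.of())) {
                    Set<String> target = pointsTo.computeIfAbsent(x, k -> new HashSet<>());
                    if (target.addAll(pts)) {
                        worklist.add(x);   // x's set grew, so keep propagating
                    }
                }
            }
        }

        public static void main(String[] args) {
            TinyPointsTo pta = new TinyPointsTo();
            pta.alloc("a", "new@1");
            pta.copy("b", "a");   // b = a
            pta.copy("c", "b");   // c = b
            pta.solve();
            System.out.println(pta.pointsTo);  // a, b and c each point to {new@1}
        }
    }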


aspect-oriented software development | 2005

abc: an extensible AspectJ compiler

Pavel Avgustinov; Aske Simon Christensen; Laurie J. Hendren; Sascha Kuzins; Jennifer Lhoták; Ondrej Lhoták; Oege de Moor; Damien Sereni; Ganesh Sittampalam; Julian Tibble

Research in the design of aspect-oriented programming languages requires a workbench that facilitates easy experimentation with new language features and implementation techniques. In particular, new features for AspectJ have been proposed that require extensions in many dimensions: syntax, type checking and code generation, as well as data flow and control flow analyses. The AspectBench Compiler (abc) is an implementation of such a workbench. The base version of abc implements the full AspectJ language. Its frontend is built, using the Polyglot framework, as a modular extension of the Java language. The use of Polyglot gives flexibility of syntax and type checking. The backend is built using the Soot framework, to give modular code generation and analyses. In this paper, we outline the design of abc, focusing mostly on how the design supports extensibility. We then provide a general overview of how to use abc to implement an extension. Finally, we illustrate the extension mechanisms of abc through a number of small, but non-trivial, examples. abc is freely available under the GNU LGPL.
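abc's backend analyses sit on top of Soot. As a rough indication of the shape such a pass takes, the sketch below registers a do-nothing body transformer using Soot's generic pack/transform API; this is not abc's own extension mechanism, the phase name jtp.noop is an invented placeholder, and the internalTransform signature shown matches recent Soot releases (older releases use a raw Map).

    import java.util.Map;

    import soot.Body;
    import soot.BodyTransformer;
    import soot.PackManager;
    import soot.Transform;

    // Sketch of a Soot body transformer of the kind a backend analysis extension
    // would typically build on. Uses Soot's generic pack/transform API, not abc's
    // own extension hooks; "jtp.noop" is an illustrative phase name.
    public class NoopTransformer extends BodyTransformer {
        @Override
        protected void internalTransform(Body body, String phaseName, Map<String, String> options) {
            // A real pass would inspect or rewrite body.getUnits() here.
            System.out.println("visited " + body.getMethod().getSignature());
        }

        public static void main(String[] args) {
            // Register the pass in the "jimple transformation pack" and hand control to Soot.
            PackManager.v().getPack("jtp").add(new Transform("jtp.noop", new NoopTransformer()));
            soot.Main.main(args);
        }
    }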


compiler construction | 2000

Optimizing Java Bytecode Using the Soot Framework: Is It Feasible?

Raja Vallée-Rai; Etienne M. Gagnon; Laurie J. Hendren; Patrick Lam; Patrice Pominville; Vijay Sundaresan

This paper presents Soot, a framework for optimizing Java bytecode. The framework is implemented in Java and supports three intermediate representations for representing Java bytecode: Baf, a streamlined representation of Java's stack-based bytecode; Jimple, a typed three-address intermediate representation suitable for optimization; and Grimp, an aggregated version of Jimple. Our approach to class file optimization is to first convert the stack-based bytecode into Jimple, a three-address form more amenable to traditional program optimization, and then convert the optimized Jimple back to bytecode. In order to demonstrate that our approach is feasible, we present experimental results showing the effects of processing class files through our framework. In particular, we study the techniques necessary to effectively translate Jimple back to bytecode, without losing performance. Finally, we demonstrate that class file optimization can be quite effective by showing the results of some basic optimizations using our framework. Our experiments were done on ten benchmarks, including seven SPECjvm98 benchmarks, and were executed on five different Java virtual machine implementations.
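To make the round trip concrete, the comments in the sketch below show, approximately, how one source statement looks as stack-based bytecode and as Jimple three-address code. Both listings are hand-written approximations for illustration, not the framework's exact output.

    // One statement shown (approximately) in the two forms discussed above.
    class RoundTripExample {
        int scale;

        int area(int w, int h) {
            int a = w * h + scale;
            // Stack-based bytecode (roughly, as javap would show it):
            //   iload_1          // push w
            //   iload_2          // push h
            //   imul             // w * h
            //   aload_0          // push this
            //   getfield scale   // this.scale
            //   iadd
            //   istore_3         // a = ...
            //
            // Jimple three-address form (roughly):
            //   r0 := @this: RoundTripExample;
            //   i0 := @parameter0: int;
            //   i1 := @parameter1: int;
            //   $i2 = i0 * i1;
            //   $i3 = r0.<RoundTripExample: int scale>;
            //   i4 = $i2 + $i3;
            //   return i4;
            return a;
        }
    }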


symposium on principles of programming languages | 1996

Is it a tree, a DAG, or a cyclic graph? A shape analysis for heap-directed pointers in C

Rakesh Ghiya; Laurie J. Hendren

This paper reports on the design and implementation of a practical shape analysis for C. The purpose of the analysis is to aid in the disambiguation of heap-allocated data structures by estimating the shape (Tree, DAG, or Cyclic Graph) of the data structure accessible from each heap-directed pointer. This shape information can be used to improve dependence testing, to aid parallelization, and to guide the choice of more complex heap analyses. The method has been implemented as a context-sensitive interprocedural analysis in the McCAT compiler. Experimental results and observations are given for 16 benchmark programs. These results show that the analysis gives accurate and useful results for an important group of applications.
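Although the analysis targets C, the three shape estimates are easy to picture in any pointer language. The Java sketch below builds a tree, a DAG, and a cyclic structure from the same node type and notes why the distinction matters to a parallelizer; it is purely illustrative and not the paper's abstraction.

    // Illustration of the three shape estimates (Tree, DAG, Cyclic Graph) for heap
    // structures built from the same node type. The original analysis works on C;
    // this Java sketch only shows why the distinction matters for parallelization.
    class ShapeExamples {
        static class Node { int value; Node left, right; }

        static Node tree() {
            Node root = new Node();
            root.left = new Node();
            root.right = new Node();
            // Tree: no sharing, so root.left and root.right reach disjoint nodes;
            // traversals of the two subtrees can safely run in parallel.
            return root;
        }

        static Node dag() {
            Node root = new Node();
            Node shared = new Node();
            root.left = shared;
            root.right = shared;   // DAG: two paths reach the same node (sharing, no cycles),
            return root;           // so parallel subtree updates may conflict on 'shared'.
        }

        static Node cyclic() {
            Node a = new Node();
            Node b = new Node();
            a.left = b;
            b.left = a;            // Cyclic graph: naive traversals may not terminate and
            return a;              // dependence testing must be fully conservative.
        }
    }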


conference on object-oriented programming systems, languages, and applications | 2000

Practical virtual method call resolution for Java

Vijay Sundaresan; Laurie J. Hendren; Chrislain Razafimahefa; Raja Vallée-Rai; Patrick Lam; Etienne M. Gagnon; Charles Godin

This paper addresses the problem of resolving virtual method and interface calls in Java bytecode. The main focus is on a new practical technique that can be used to analyze large applications. Our fundamental design goal was to develop a technique that can be solved with only one iteration, and thus scales linearly with the size of the program, while at the same time providing more accurate results than two popular existing linear techniques, class hierarchy analysis and rapid type analysis. We present two variations of our new technique, variable-type analysis and a coarser-grain version called declared-type analysis. Both of these analyses are inexpensive, easy to implement, and our experimental results show that they scale linearly in the size of the program. We have implemented our new analyses using the Soot framework, and we report on empirical results for seven benchmarks. We have used our techniques to build accurate call graphs for complete applications (including libraries) and we show that compared to a conservative call graph built using class hierarchy analysis, our new variable-type analysis can remove a significant number of nodes (methods) and call edges. Further, our results show that we can improve upon the compression obtained using rapid type analysis. We also provide dynamic measurements of monomorphic call sites, focusing on the benchmark code excluding libraries. We demonstrate that when considering only the benchmark code, both rapid type analysis and our new declared-type analysis do not add much precision over class hierarchy analysis. However, our finer-grained variable-type analysis does resolve significantly more call sites, particularly for programs with more complex uses of objects.
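The gap between class hierarchy analysis and the finer variable-type analysis already shows up in a three-class example. The sketch below is a generic textbook illustration of that gap, not the Soot-based implementation reported in the paper.

    // Why variable-type analysis can resolve calls that class hierarchy analysis
    // cannot. Generic illustration, not the paper's implementation.
    interface Shape { double area(); }

    class Circle implements Shape {
        public double area() { return 1.0; }
    }

    class Square implements Shape {
        public double area() { return 2.0; }
    }

    class CallResolutionDemo {
        public static void main(String[] args) {
            Shape s = new Circle();      // the only allocation that ever reaches s
            System.out.println(s.area());
            // Class hierarchy analysis looks only at the declared type Shape and keeps
            // both Circle.area() and Square.area() as possible targets. Rapid type
            // analysis keeps only classes instantiated somewhere in the program, so the
            // call stays polymorphic if Square is instantiated elsewhere. Variable-type
            // analysis tracks which allocations reach the variable s, resolves the call
            // to Circle.area() alone, and makes the site a devirtualization candidate.
        }
    }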


ACM Transactions on Programming Languages and Systems | 1995

Supporting dynamic data structures on distributed-memory machines

Anne Rogers; Martin C. Carlisle; John H. Reppy; Laurie J. Hendren

Compiling for distributed-memory machines has been a very active research area in recent years. Much of this work has concentrated on programs that use arrays as their primary data structures. To date, little work has been done to address the problem of supporting programs that use pointer-based dynamic data structures. The techniques developed for supporting SPMD execution of array-based programs rely on the fact that arrays are statically defined and directly addressable. Recursive data structures do not have these properties, so new techniques must be developed. In this article, we describe an execution model for supporting programs that use pointer-based dynamic data structures. This model uses a simple mechanism for migrating a thread of control based on the layout of heap-allocated data and introduces parallelism using a technique based on futures and lazy task creation. We intend to exploit this execution model using compiler analyses and automatic parallelization techniques. We have implemented a prototype system, which we call Olden, that runs on the Intel iPSC/860 and the Thinking Machines CM-5. We discuss our implementation and report on experiments with five benchmarks.
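Olden itself targets C on distributed-memory machines, but the futures/lazy-task-creation idea can be pictured with Java's fork/join pool: one recursive call is forked as a potential parallel task while the current thread keeps working on the other subtree and later forces the result. The sketch below is only that analogy, not Olden's execution model or its thread-migration mechanism.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Future-style parallelism over a recursive, pointer-based data structure,
    // in the spirit of the execution model described above. Olden targets C on
    // distributed-memory machines; this is only a shared-memory Java analogy.
    class TreeSum extends RecursiveTask<Integer> {
        static class Node {
            int value;
            Node left, right;
            Node(int v, Node l, Node r) { value = v; left = l; right = r; }
        }

        private final Node node;
        TreeSum(Node node) { this.node = node; }

        @Override
        protected Integer compute() {
            if (node == null) return 0;
            TreeSum leftTask = new TreeSum(node.left);
            leftTask.fork();                                    // "future": may run in parallel
            int rightSum = new TreeSum(node.right).compute();   // keep working locally
            return node.value + rightSum + leftTask.join();     // force the future
        }

        public static void main(String[] args) {
            Node tree = new Node(1, new Node(2, null, null), new Node(3, null, null));
            System.out.println(new ForkJoinPool().invoke(new TreeSum(tree)));  // prints 6
        }
    }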


IEEE Transactions on Parallel and Distributed Systems | 1990

Parallelizing programs with recursive data structures

Laurie J. Hendren; Alexandru Nicolau

A study is made of the problem of estimating interference in an imperative language with dynamic data structures. The authors focus on developing efficient and implementable methods for recursive data structures. In particular, they present interference analysis tools and parallelization techniques for imperative programs that contain dynamically updatable trees and directed acyclic graphs. The analysis methods are based on a regular-expression-like representation of the relationship between accessible nodes in the data structure. The authors have implemented their analysis, and they present some concrete examples that have been processed by this system.
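The interference question the analysis answers can be spelled out on a small example: two updates that reach nodes only through p.left... and p.right... paths, respectively, cannot touch the same node when the structure rooted at p is a tree. The Java sketch below (the paper itself treats an imperative, C-like setting) just makes that reasoning explicit.

    // Two updates whose access paths start with p.left... and p.right... respectively.
    // If the structure rooted at p is a tree, the node sets reachable along
    // left-prefixed and right-prefixed paths are disjoint, so the updates do not
    // interfere and could be executed in parallel.
    class InterferenceDemo {
        static class Node { int data; Node left, right; }

        static void touch(Node p) {
            if (p == null || p.left == null || p.right == null) return;
            p.left.data += 1;    // touches only nodes reached via p.left
            p.right.data += 1;   // touches only nodes reached via p.right
            // A path-based interference analysis represents the two access sets as
            // (roughly) p.left(.left|.right)* and p.right(.left|.right)* and, given a
            // tree estimate for p, concludes that they are disjoint.
        }
    }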

Collaboration


Dive into Laurie J. Hendren's collaborations.

Top Co-Authors

Patrick Lam

University of Waterloo

Eric Bodden

Technische Universität Darmstadt