
Publication


Featured research published by Jeffrey Dean.


Conference on Object-Oriented Programming Systems, Languages, and Applications | 1997

Call graph construction in object-oriented languages

David Grove; Greg DeFouw; Jeffrey Dean; Craig Chambers

Interprocedural analyses enable optimizing compilers to more precisely model the effects of non-inlined procedure calls, potentially resulting in substantial increases in application performance. Applying interprocedural analysis to programs written in object-oriented or functional languages is complicated by the difficulty of constructing an accurate program call graph. This paper presents a parameterized algorithmic framework for call graph construction in the presence of message sends and/or first-class functions. We use this framework to describe and to implement a number of well-known and new algorithms. We then empirically assess these algorithms by applying them to a suite of medium-sized programs written in Cecil and Java, reporting on the relative cost of the analyses, the relative precision of the constructed call graphs, and the impact of this precision on the effectiveness of a number of interprocedural optimizations.
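As an illustrative sketch only (the classes, methods, and hierarchy below are invented, and the paper's framework is parameterized over far richer analyses than this), one simple point in the design space is a class-hierarchy-analysis-style construction: the possible targets of a message send are the definitions of the selector in the receiver's static class and its subclasses.

```python
# Invented toy hierarchy: class -> direct subclasses, and the selectors
# each class defines itself (overrides included, inherited methods not).
subclasses = {"Shape": ["Circle", "Square"], "Circle": [], "Square": []}
defines = {"Shape": {"area", "name"}, "Circle": {"area"}, "Square": {"area"}}

def cone(cls):
    """cls together with all of its transitive subclasses."""
    out = [cls]
    for sub in subclasses.get(cls, []):
        out.extend(cone(sub))
    return out

def call_targets(static_class, selector):
    """Conservative call-graph edges for `recv.selector()` when recv's
    static class is known: every class in the cone that provides its own
    definition of the selector contributes one possible target."""
    return {(c, selector) for c in cone(static_class)
            if selector in defines.get(c, set())}
```

More precise receiver information shrinks the graph: `call_targets("Shape", "area")` yields three edges, while `call_targets("Circle", "area")` yields one, which is the kind of precision difference the paper measures across algorithms.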


Programming Language Design and Implementation | 1995

Selective specialization for object-oriented languages

Jeffrey Dean; Craig Chambers; David Grove

Dynamic dispatching is a major source of run-time overhead in object-oriented languages, due both to the direct cost of method lookup and to the indirect effect of preventing other optimizations. To reduce this overhead, optimizing compilers for object-oriented languages analyze the classes of objects stored in program variables, with the goal of bounding the possible classes of message receivers enough so that the compiler can uniquely determine the target of a message send at compile time and replace the message send with a direct procedure call. Specialization is one important technique for improving the precision of this static class information: by compiling multiple versions of a method, each applicable to a subset of the possible argument classes of the method, more precise static information about the classes of the method's arguments is obtained. Previous specialization strategies have not been selective about where this technique is applied, and therefore tended to significantly increase compile time and code space usage, particularly for large applications. In this paper, we present a more general framework for specialization in object-oriented languages and describe a goal-directed specialization algorithm that makes selective decisions to apply specialization to those cases where it provides the highest benefit. Our results show that our algorithm improves the performance of a group of sizeable programs by 65% to 275% while increasing compiled code space requirements by only 4% to 10%. Moreover, when compared to the previous state-of-the-art specialization scheme, our algorithm improves performance by 11% to 67% while simultaneously reducing code space requirements by 65% to 73%.
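The selectivity idea can be sketched in miniature, assuming a hypothetical cost model (the candidate tuples, per-call savings, and code-space budget below are all invented, not the paper's actual algorithm): specialize a method for an argument class only when the profiled payoff beats the code-space cost of emitting another compiled version.

```python
def select_specializations(candidates, cost_per_version, budget):
    """candidates: (method, arg_class, call_count, saving_per_call) tuples.
    Greedily choose the specializations with the highest estimated payoff,
    skipping any whose payoff does not cover the cost of an extra compiled
    version, and stopping when the code-space budget is exhausted."""
    ranked = sorted(candidates, key=lambda c: c[2] * c[3], reverse=True)
    chosen, spent = [], 0
    for method, arg_class, calls, saving in ranked:
        payoff = calls * saving
        if payoff > cost_per_version and spent + cost_per_version <= budget:
            chosen.append((method, arg_class))
            spent += cost_per_version
    return chosen
```

With invented numbers, a hot `("draw", "Circle")` site is specialized while a cold `("draw", "Square")` site is not, which mirrors the paper's point that unselective strategies pay the code-space cost everywhere for benefit that is concentrated in a few places.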


Conference on Object-Oriented Programming Systems, Languages, and Applications | 1995

Profile-guided receiver class prediction

David Grove; Jeffrey Dean; Charles Garrett; Craig Chambers

The use of dynamically-dispatched procedure calls is a key mechanism for writing extensible and flexible code in object-oriented languages. Unfortunately, dynamic dispatching imposes a runtime performance penalty. Some recent implementations of pure object-oriented languages have utilized profile-guided receiver class prediction to reduce this performance penalty, and some researchers have argued for applying receiver class prediction in hybrid languages like C++. We performed a detailed examination of the dynamic profiles of eight large object-oriented applications written in C++ and Cecil, determining that the receiver class distributions are strongly peaked and stable across both inputs and program versions through time. We describe techniques for gathering and manipulating profile information at varying degrees of precision, particularly in the presence of optimizations such as inlining. Our implementation of profile-guided receiver class prediction improves the performance of large Cecil applications by more than a factor of two over solely static optimizations.
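A minimal sketch of the core transformation, with invented names and an assumed peakedness threshold (this is not the paper's implementation): when the profiled receiver class distribution at a call site is strongly peaked, the compiler can guard a direct, inlinable call with a single class test and fall back to full dynamic dispatch otherwise.

```python
def predicted_class(profile, threshold=0.5):
    """profile: receiver class name -> observed call count at one call site.
    Return the class worth testing for when the distribution is strongly
    peaked; None means fall back to a plain dynamic dispatch."""
    total = sum(profile.values())
    cls = max(profile, key=profile.get)
    return cls if profile[cls] / total >= threshold else None

def dispatch(recv, profile, fast_path, slow_path):
    """Shape of the code a compiler would emit: one class test guarding
    a direct call, with ordinary dynamic dispatch as the fallback."""
    if type(recv).__name__ == predicted_class(profile):
        return fast_path(recv)   # direct call, candidate for inlining
    return slow_path(recv)       # ordinary message send
```

A distribution like `{"Circle": 90, "Square": 10}` selects `Circle`; a flat distribution selects nothing, so no class test is inserted.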


International Conference on Functional Programming | 1994

Towards better inlining decisions using inlining trials

Jeffrey Dean; Craig Chambers

Inlining trials are a general mechanism for making better automatic decisions about whether a routine is profitable to inline. Unlike standard source-level inlining heuristics, an inlining trial captures the effects of optimizations applied to the body of the inlined routine when calculating the costs and benefits of inlining. The results of inlining trials are stored in a persistent database to be reused when making future inlining decisions at similar call sites. Type group analysis can determine the amount of available static information exploited during compilation, and the results of analyzing the compilation of an inlined routine help decide when a future call site would lead to substantially the same generated code as a given inlining trial. We have implemented inlining trials and type group analysis in an optimizing compiler for SELF, and by making wiser inlining decisions we were able to cut compilation time and compiled code space with virtually no loss of execution speed. We believe that inlining trials and type group analysis could be applied effectively to many high-level languages where procedural or functional abstraction is used heavily.
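The database-reuse idea can be sketched as follows, assuming a made-up trial interface (the key structure, `run_trial` callback, and cost/benefit comparison are placeholders, not the paper's actual representation): trial results are memoized on the routine plus a summary of the static information available at the call site, so a future call site with the same summary reuses the stored verdict instead of recompiling.

```python
trial_db = {}  # (routine, static_info_summary) -> (code_space_cost, benefit)

def should_inline(routine, static_info, run_trial):
    """Consult the persistent trial database; perform a trial compilation
    only for a (routine, static-information) combination not seen before."""
    key = (routine, static_info)
    if key not in trial_db:
        trial_db[key] = run_trial(routine, static_info)
    cost, benefit = trial_db[key]
    return benefit >= cost
```

Here `static_info` stands in for the paper's type-group summary: two call sites that type group analysis deems equivalent hash to the same key and share one trial.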


International Conference on Software Engineering | 1995

A framework for selective recompilation in the presence of complex intermodule dependencies

Craig Chambers; Jeffrey Dean; David Grove

Compilers and other programming environment tools derive information from the source code of programs; derived information includes compiled code, interprocedural summary information, and call graph views. If the source program changes, the derived information needs to be updated. We present a simple framework for maintaining intermodule dependencies, embodying different tradeoffs in terms of space usage, speed of processing, and selectivity of invalidation, that eases the implementation of incremental update of derived information. Our framework augments a directed acyclic graph representation of dependencies with factoring nodes (to save space) and filtering nodes (to increase selectivity), and it includes an algorithm for efficient invalidation processing. We show how several schemes for selective recompilation, such as smart recompilation, filter sets for interprocedural summary information, and dependencies for whole-program optimization of object-oriented languages, map naturally onto our framework. For this latter application, by exploiting the facilities of our framework, we are able to reduce the number of lines of source code recompiled by a factor of seven over a header-file-based scheme, and by a factor of two over the previous state-of-the-art selective dependency mechanism, without consuming additional space.
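A toy sketch of invalidation with filtering nodes, under invented node names and a simplified change representation (the real framework also has factoring nodes and a more careful propagation algorithm): invalidation flows down the dependency DAG, but a filtering node prunes its whole downstream subtree when its predicate says the change is irrelevant.

```python
class Node:
    def __init__(self, name, filter_fn=None):
        self.name = name
        self.filter_fn = filter_fn  # filtering node: passes a change through
        self.dependents = []        # only when the predicate says it matters

def invalidate(node, change, dirty):
    """Mark node and everything downstream as out of date, pruning subtrees
    whose filtering node decides the change cannot affect them."""
    if node.name in dirty:
        return  # already visited (nodes may be shared in the DAG)
    if node.filter_fn is not None and not node.filter_fn(change):
        return  # change is irrelevant below this filter
    dirty.add(node.name)
    for dep in node.dependents:
        invalidate(dep, change, dirty)
```

For example, putting a filter for signature changes between a source file and its interprocedural summary means a body-only edit dirties the source but leaves the summary, and everything compiled against it, valid.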


European Conference on Object-Oriented Programming | 1995

Optimization of Object-Oriented Programs Using Static Class Hierarchy Analysis

Jeffrey Dean; David Grove; Craig Chambers


Conference on Object-Oriented Programming Systems, Languages, and Applications | 1996

Vortex: an optimizing compiler for object-oriented languages

Jeffrey Dean; Greg DeFouw; David Grove; Vassily Litvinov; Craig Chambers


Archive | 1996

Whole-program optimization of object-oriented languages

Jeffrey Dean; Craig Chambers


Archive | 1994

Measurement and Application of Dynamic Receiver Class Distributions

Charles Garrett; Jeffrey Dean; David Grove; Craig Chambers


Partial Evaluation and Semantics-Based Program Manipulation | 1994

Identifying Profitable Specialization in Object-Oriented Languages

Jeffrey Dean; Craig Chambers; David Grove

Collaboration


Dive into Jeffrey Dean's collaborations.

Top Co-Authors

Craig Chambers (University of Washington)
Greg DeFouw (University of Washington)