Publication


Featured research published by Paul N. Hilfinger.


Concurrency and Computation: Practice and Experience | 1998

Titanium: a high-performance Java dialect

Katherine A. Yelick; Luigi Semenzato; Geoff Pike; Carleton Miyamoto; Ben Liblit; Arvind Krishnamurthy; Paul N. Hilfinger; Susan L. Graham; Phillip Colella; Alex Aiken

Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus attention on parallel computing issues. The main additions to Java are immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management. We discuss these features and our design approach, and report progress on the development of Titanium, including our current driving application: a three-dimensional adaptive mesh refinement parallel Poisson solver.
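
Titanium extends Java, but the serial heart of the driving application above, Poisson relaxation on a regular grid, can be sketched in plain Python. The following is an illustrative sketch, not code from the paper; the function names and the one-dimensional discretization are ours.

```python
# Hypothetical sketch: Jacobi sweeps for the 1-D Poisson equation
# -u'' = f on a uniform grid with spacing h -- the kind of stencil
# kernel Titanium's multidimensional arrays and SPMD model target.

def jacobi_sweep(u, f, h):
    """Return a new iterate; interior points average their neighbors."""
    new = u[:]                       # fresh array; boundaries unchanged
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return new

def solve(u, f, h, sweeps):
    """Iterate the sweep a fixed number of times."""
    for _ in range(sweeps):
        u = jacobi_sweep(u, f, h)
    return u
```

With zero right-hand side and boundary values 0 and 1, the iterates converge to the linear interpolant, which makes the kernel easy to check.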


programming language design and implementation | 1988

Detecting conflicts between structure accesses

James R. Larus; Paul N. Hilfinger

Two references to a record structure conflict if they access the same field and at least one modifies the location. Because structures can be connected by pointers, deciding if two statements conflict requires knowledge of the possible aliases for the locations that they access. This paper describes a dataflow computation that produces a conservative description of the aliases visible at any point in a program. The data structure that records aliases is an alias graph. It also labels instances of structures so that the objects referenced at different points in a program can be compared. This paper shows how alias graphs can be used to detect potential conflicts.
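
The conflict test itself is compact once a may-alias relation is available. The Python below is our own illustration of the rule stated above, using a simple union-find in place of the paper's alias graphs; all names are hypothetical.

```python
# Hypothetical sketch: a conservative may-alias relation over abstract
# locations (a union-find here, standing in for the paper's alias
# graphs), plus the conflict rule: two structure accesses conflict only
# if they may touch the same field and at least one is a write.

class AliasSets:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def merge(self, a, b):
        """Record that locations a and b may alias."""
        self.parent[self._find(a)] = self._find(b)

    def may_alias(self, a, b):
        return self._find(a) == self._find(b)

def conflicts(aliases, acc1, acc2):
    """Each access is (base_location, field, is_write)."""
    (b1, f1, w1), (b2, f2, w2) = acc1, acc2
    return f1 == f2 and (w1 or w2) and aliases.may_alias(b1, b2)
```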


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1991

An integrated CAD system for algorithm-specific IC design

C. B. Shung; Rajeev Jain; K. Rimey; E. Wang; Mani B. Srivastava; B. C. Richards; E. Lettang; S. Khalid Azim; L. Thon; Paul N. Hilfinger; J.M. Rabaey; Robert W. Brodersen

LAGER is an integrated computer-aided design system for algorithm-specific integrated circuit design, targeted at applications such as speech processing, image processing, telecommunications, and robot control. LAGER provides user interfaces at behavioral, structural, and physical levels and allows easy integration of novel CAD tools. LAGER consists of a behavioral mapper and a silicon assembler. The behavioral mapper maps the behavior onto a parameterized structure to produce microcode and parameter values. The silicon assembler then translates the filled-out structural description into a physical layout, and, with the aid of simulation tools, the user can fine tune the data path by iterating this process. The silicon assembler can also be used without the behavioral mapper for high-sample-rate applications. A number of algorithm-specific ICs designed with LAGER have been fabricated and tested, and as examples, a robot arm controller chip and a real-time image segmentation chip are described.


design automation conference | 1998

Design and specification of embedded systems in Java using successive, formal refinement

James Shin Young; Josh MacDonald; Michael Shilman; Abdallah Tabbara; Paul N. Hilfinger; A.R. Newton

Successive, formal refinement is a new approach for specification of embedded systems using a general-purpose programming language. Systems are formally modeled as abstractable synchronous reactive systems, and Java is used as the design input language. A policy of use is applied to Java, in the form of language usage restrictions and class-library extensions to ensure consistency with the formal model. A process of incremental, user-guided program transformation is used to refine a Java program until it is consistent with the policy of use. The final product is a system specification possessing the properties of the formal model, including deterministic behavior, bounded memory usage and bounded execution time. This approach allows systems design to begin with the flexibility of a general-purpose language, followed by gradual refinement into a more restricted form necessary for specification.
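
The formal model can be illustrated with a toy example. The Python sketch below is our own rendering of a synchronous reactive step loop with a fixed, deterministic schedule and bounded state; it is not the paper's Java policy of use.

```python
# Hypothetical sketch of the synchronous reactive model: on every tick,
# each component takes one step in a fixed, deterministic order, with
# bounded state (a modular counter here) and no unbounded allocation
# per tick -- the properties the refinement process guarantees.

class Counter:
    """Bounded-state component: counts ticks modulo a limit."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0

    def step(self):
        self.value = (self.value + 1) % self.limit
        return self.value

def run(system, ticks):
    """Deterministic scheduler: same schedule, same trace, every run."""
    trace = []
    for _ in range(ticks):
        trace.append(tuple(c.step() for c in system))
    return trace
```

Because the schedule is fixed and each component's state is bounded, the trace is fully determined by the initial state, mirroring the deterministic-behavior property cited above.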


international symposium on computer architecture | 1986

Evaluation of the SPUR Lisp architecture

George S. Taylor; Paul N. Hilfinger; James R. Larus; David A. Patterson; Benjamin G. Zorn

The SPUR microprocessor has a 40-bit tagged architecture designed to improve its performance for Lisp programs. Although SPUR includes just a small set of enhancements to the Berkeley RISC-II architecture, simulation results show that with a 150-ns cycle time SPUR will run Common Lisp programs at least as fast as a Symbolics 3600 or a DEC VAX 8600. This paper explains SPUR's instruction set architecture and provides measurements of how certain components of the architecture perform.
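
The key mechanism, hardware type tags, can be mimicked in software. The sketch below packs a hypothetical 8-bit tag above 32 bits of data into a 40-bit word; the split, the tag values, and the helper names are illustrative, not SPUR's actual encoding.

```python
# Hypothetical sketch of a tagged word: an 8-bit type tag above 32 bits
# of data, so type dispatch for Lisp primitives can check tags directly
# instead of calling a runtime routine. Tag values are illustrative.

TAG_FIXNUM  = 0x00
TAG_POINTER = 0x01

def pack(tag, data):
    assert 0 <= tag < (1 << 8) and 0 <= data < (1 << 32)
    return (tag << 32) | data

def tag_of(word):
    return word >> 32

def data_of(word):
    return word & 0xFFFFFFFF

def add_fixnums(w1, w2):
    """Tag-check both operands before the untagged 32-bit add."""
    if tag_of(w1) != TAG_FIXNUM or tag_of(w2) != TAG_FIXNUM:
        raise TypeError("operand is not a fixnum")
    return pack(TAG_FIXNUM, (data_of(w1) + data_of(w2)) & 0xFFFFFFFF)
```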


ACM Transactions on Programming Languages and Systems | 1988

An Ada package for dimensional analysis

Paul N. Hilfinger

This paper illustrates the use of Ada's abstraction facilities—notably, operator overloading and type parameterization—to define an oft-requested feature: a way to attribute units of measure to variables and values. The definition given allows the programmer to specify units of measure for variables, constants, and parameters; checks uses of these entities for dimensional consistency; allows arithmetic between them, where legal; and provides scale conversions between commensurate units. It is not constrained to a particular system of measurement (such as the metric or English systems). Although the definition is in standard Ada and requires nothing special of the compiler, certain reasonable design choices in the compiler, discussed here at some length, can make its implementation particularly efficient.
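
Ada's compile-time checks have no direct Python analogue, but the same interface can be sketched with Python operator overloading, moving the dimensional checks to run time. The class and unit names below are ours, not the package's.

```python
# Hypothetical sketch of dimensional analysis via operator overloading:
# each quantity carries a tuple of exponents over base units
# (metre, kilogram, second). Addition demands identical dimensions;
# multiplication and division add or subtract the exponents.

class Quantity:
    def __init__(self, value, dims):
        self.value, self.dims = value, dims   # dims: (m, kg, s)

    def __add__(self, other):
        if self.dims != other.dims:           # dimensional consistency
            raise TypeError("adding incommensurate quantities")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = tuple(a + b for a, b in zip(self.dims, other.dims))
        return Quantity(self.value * other.value, dims)

    def __truediv__(self, other):
        dims = tuple(a - b for a, b in zip(self.dims, other.dims))
        return Quantity(self.value / other.value, dims)

METRE  = Quantity(1.0, (1, 0, 0))
SECOND = Quantity(1.0, (0, 0, 1))
```

Dividing a length by a time yields dimensions (1, 0, -1), a velocity; adding a length to a time raises an error, the runtime counterpart of the package's static check.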


compiler construction | 1986

Register allocation in the SPUR Lisp compiler

James R. Larus; Paul N. Hilfinger

Register allocation is an important component of most compilers, particularly those for RISC machines. The SPUR Lisp compiler uses a sophisticated, graph-coloring algorithm developed by Fredrick Chow [Chow84]. This paper describes the algorithm and the techniques used to implement it efficiently and evaluates its performance on several large programs. The allocator successfully assigned most temporaries and local variables to registers in a wide variety of functions. Its execution cost is moderate.
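
The underlying graph-coloring idea can be sketched briefly. The Python below implements the classic simplify/select scheme on an interference graph; it illustrates the general technique, not a reconstruction of Chow's priority-based allocator.

```python
# Hypothetical sketch of graph-coloring register allocation: nodes are
# live ranges, edges mean "live at the same time", and a k-coloring
# assigns k registers. Classic simplify/select: repeatedly remove a
# node with fewer than k neighbors (it is trivially colorable), spill
# pessimistically when none exists, then color in reverse order.

def color(interference, k):
    """interference: {node: set_of_neighbors}. Returns node -> register
    index, or None for nodes spilled to memory."""
    graph = {n: set(ns) for n, ns in interference.items()}
    stack, spilled = [], set()
    while graph:
        low = next((n for n in graph if len(graph[n]) < k), None)
        if low is None:                       # no easy node: pick a spill
            low = max(graph, key=lambda n: len(graph[n]))
            spilled.add(low)
        stack.append((low, graph.pop(low)))
        for ns in graph.values():
            ns.discard(low)
    colors = {}
    for node, neighbors in reversed(stack):
        if node in spilled:
            colors[node] = None
            continue
        used = {colors[n] for n in neighbors if colors.get(n) is not None}
        colors[node] = min(c for c in range(k) if c not in used)
    return colors
```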


conference on high performance computing (supercomputing) | 2002

Better Tiling and Array Contraction for Compiling Scientific Programs

Geoff Pike; Paul N. Hilfinger

Scientific programs often include multiple loops over the same data; interleaving parts of different loops may greatly improve performance. We exploit this in a compiler for Titanium, a dialect of Java. Our compiler combines reordering optimizations such as loop fusion and tiling with storage optimizations such as array contraction (eliminating or reducing the size of temporary arrays). The programmers we have in mind are willing to spend some time tuning their code and their compiler parameters. Given that, and the difficulty in statically selecting parameters such as tile sizes, it makes sense to provide automatic parameter searching alongside the compiler. Our strategy is to optimize aggressively but to expose the compiler’s decisions to external control. We double or triple the performance of Gauss-Seidel relaxation and multi-grid (versus an optimizing compiler without tiling and array contraction), and we argue that ours is the best compiler for that kind of program.
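
The two storage-related transformations can be shown on a toy example. In the hypothetical Python below, the naive version materializes a temporary array between two loops; the fused version interleaves them and contracts the temporary to a scalar.

```python
# Hypothetical sketch of loop fusion plus array contraction: the naive
# version writes a whole temporary array in one loop and reads it back
# in a second; the fused version interleaves the two loops, so the
# temporary contracts to a single scalar that stays in a register.

def naive(a):
    tmp = [x * 2 for x in a]          # loop 1: materializes tmp
    return [x + 1 for x in tmp]       # loop 2: re-reads all of tmp

def fused(a):
    out = []
    for x in a:                       # fused loop
        t = x * 2                     # tmp contracted to scalar t
        out.append(t + 1)
    return out
```

Both versions compute the same result; the fused one touches each element once and allocates no intermediate array, which is the source of the speedups reported above.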


software configuration management workshop | 1998

PRCS: The Project Revision Control System

Josh MacDonald; Paul N. Hilfinger; Luigi Semenzato

PRCS is an attempt to provide a version-control system for collections of files with a simple operational model, a clean user interface, and high performance. PRCS is characterized by the use of project description files to input most commands, instead of a point-and-click or a line-oriented interface. It employs optimistic concurrency control and encourages operations on the entire project rather than individual files. Although its current implementation uses RCS in the back-end, the interface completely hides its presence. PRCS is free. This paper describes the advantages and disadvantages of our approach, and discusses implementation issues.
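
Optimistic concurrency control is the one mechanism above that fits in a few lines. The Python sketch below is our own illustration: no locks are taken, and a checkin succeeds only if the working copy was derived from the current head. Class and method names are hypothetical, not PRCS's interface.

```python
# Hypothetical sketch of optimistic concurrency control over whole-
# project versions: checkout records which version the working copy
# came from; checkin succeeds only if that version is still the head,
# otherwise the user must merge and retry. No locks are ever held.

class Repository:
    def __init__(self):
        self.versions = []            # whole-project snapshots

    def head(self):
        return len(self.versions) - 1

    def checkout(self):
        """Return (parent_version, snapshot of project files)."""
        return self.head(), (dict(self.versions[-1]) if self.versions else {})

    def checkin(self, parent, files):
        if parent != self.head():     # someone checked in since checkout
            raise RuntimeError("working copy out of date; merge first")
        self.versions.append(dict(files))
        return self.head()
```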


ieee international conference on high performance computing data and analytics | 2007

Parallel Languages and Compilers: Perspective From the Titanium Experience

Katherine A. Yelick; Paul N. Hilfinger; Susan L. Graham; Dan Bonachea; Jimmy Su; Amir Kamil; Kaushik Datta; Phillip Colella; Tong Wen

We describe the rationale behind the design of key features of Titanium—an explicitly parallel dialect of Java for high-performance scientific programming—and our experiences in building applications with the language. Specifically, we address Titanium's partitioned global address space model, single program multiple data parallelism support, multi-dimensional arrays and array-index calculus, memory management, immutable classes (class-like types that are value types rather than reference types), operator overloading, and generic programming. We provide an overview of the Titanium compiler implementation, covering various parallel analyses and optimizations, Titanium runtime technology and the GASNet network communication layer. We summarize results and lessons learned from implementing the NAS parallel benchmarks, elliptic and hyperbolic solvers using adaptive mesh refinement, and several applications of the immersed boundary method.

Collaboration


Dive into Paul N. Hilfinger's collaborations.

Top Co-Authors

James R. Larus
École Polytechnique Fédérale de Lausanne

Kinson Ho
University of California

Katherine A. Yelick
Lawrence Berkeley National Laboratory

Phillip Colella
Lawrence Berkeley National Laboratory

Geoff Pike
University of California

Ben Liblit
University of Wisconsin-Madison

Dan Bonachea
University of California