Publication


Featured research published by Richard Johnson.


Programming Language Design and Implementation | 1994

The program structure tree: computing control regions in linear time

Richard Johnson; David Pearson; Keshav Pingali

In this paper, we describe the program structure tree (PST), a hierarchical representation of program structure based on single entry single exit (SESE) regions of the control flow graph. We give a linear-time algorithm for finding SESE regions and for building the PST of arbitrary control flow graphs (including irreducible ones). Next, we establish a connection between SESE regions and control dependence equivalence classes, and show how to use the algorithm to find control regions in linear time. Finally, we discuss some applications of the PST. Many control flow algorithms, such as construction of Static Single Assignment form, can be sped up by applying the algorithms in a divide-and-conquer style to each SESE region on its own. The PST is also used to speed up data flow analysis by exploiting “sparsity”. Experimental results from the Perfect Club and SPEC89 benchmarks confirm that the PST approach finds and exploits program structure.
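
The divide-and-conquer use of the PST mentioned above is easy to picture with a small data structure. Below is a minimal Python sketch: the region boundaries, block names, and per-region analysis callback are hypothetical, and the paper's linear-time SESE-finding algorithm is not implemented here; regions are simply assumed to be given.

```python
# Minimal sketch of a program structure tree node and a divide-and-conquer
# traversal over it.  Regions are assumed given; this is NOT the paper's
# linear-time SESE-finding algorithm.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class SESERegion:
    entry_edge: Tuple[str, str]          # edge entering the region
    exit_edge: Tuple[str, str]           # edge leaving the region
    blocks: List[str]                    # blocks owned by this region alone
    children: List["SESERegion"] = field(default_factory=list)


def analyze_per_region(region: SESERegion,
                       analysis: Callable[[List[str]], None]) -> None:
    """Run `analysis` once per SESE region, innermost regions first."""
    for child in region.children:
        analyze_per_region(child, analysis)
    analysis(region.blocks)


# Toy PST: a loop-body region nested inside the whole-procedure region.
loop = SESERegion(("b1", "b2"), ("b3", "b4"), ["b2", "b3"])
root = SESERegion(("entry", "b1"), ("b4", "exit"), ["b1", "b4"], [loop])
analyze_per_region(root, lambda blocks: print("analyzing region:", blocks))
```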


Programming Language Design and Implementation | 1993

Dependence-based program analysis

Richard Johnson; Keshav Pingali

Program analysis and optimization can be sped up through the use of the dependence flow graph (DFG), a representation of program dependences which generalizes def-use chains and static single assignment (SSA) form. In this paper, we give a simple graph-theoretic description of the DFG and show how the DFG for a program can be constructed in O(EV) time. We then show how forward and backward dataflow analyses can be performed efficiently on the DFG, using constant propagation and elimination of partial redundancies as examples. These analyses can be framed as solutions of dataflow equations in the DFG. Our construction algorithm is of independent interest because it can be used to construct a program's control dependence graph in O(E) time and its SSA representation in O(EV) time, which are improvements over existing algorithms.
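
As a rough illustration of the kind of forward dataflow the abstract describes, here is a hedged Python sketch of constant propagation iterated to a fixed point over a tiny def-use style graph. The three-address operations, the "unanalyzed" variable u, and the flat lattice encoding are illustrative choices, not the DFG construction or the O(EV) algorithm from the paper.

```python
# Hedged sketch: constant propagation as a fixed-point dataflow computation
# over value-carrying edges.  Lattice: TOP (None) = not yet known,
# an int = known constant, NAC = not a constant.
TOP, NAC = None, "NAC"


def meet(a, b):
    """Lattice meet used at join points (phi nodes)."""
    if a is TOP:
        return b
    if b is TOP:
        return a
    return a if a == b else NAC


# Hypothetical three-address program: (destination, operation, operands).
ops = [
    ("x", "const", 3),
    ("y", "const", 4),
    ("z", "add", ("x", "y")),   # z = x + y
    ("w", "phi", ("z", "u")),   # w merges z with the unknown u
]

values = {"u": NAC}             # u is defined by code we did not analyze
changed = True
while changed:                  # iterate transfer functions to a fixed point
    changed = False
    for dest, op, arg in ops:
        if op == "const":
            new = arg
        elif op == "add":
            a, b = (values.get(v, TOP) for v in arg)
            if isinstance(a, int) and isinstance(b, int):
                new = a + b
            elif TOP in (a, b):
                new = TOP       # stay optimistic until both inputs are known
            else:
                new = NAC
        else:                   # "phi"
            a, b = (values.get(v, TOP) for v in arg)
            new = meet(a, b)
        if values.get(dest, TOP) != new:
            values[dest] = new
            changed = True

print(values)                   # x: 3, y: 4, z: 7, w: 'NAC'
```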


Symposium on Principles of Programming Languages | 1991

Dependence flow graphs: an algebraic approach to program dependencies

Keshav Pingali; Micah Beck; Richard Johnson; Mayan Moudgill; Paul Stodghill

The topic of intermediate languages for optimizing and parallelizing compilers has received much attention lately. In this paper, we argue that any good representation must have two crucial properties: first, the representation of a program must be a data structure that can be rapidly traversed to determine dependence information; second, the representation must be a program in its own right, with a parallel, local model of execution. We illustrate the importance of these points by examining algorithms for a standard optimization: global constant propagation. We discuss the problems in working with current representations. Then, we propose a novel representation called the dependence flow graph which has each of the properties mentioned above. In this representation, dependencies are part of the computational model, in that there is an algebra of operators over dependencies. We show that this representation leads to a simple algorithm, based on abstract interpretation, for solving the constant propagation problem. Our algorithm is simpler than, and as fast as, the best known algorithms for the problem. An interesting feature of our representation is that it naturally incorporates the best aspects of many other representations, including continuation-passing style, data and program dependence graphs, static single assignment form and dataflow program graphs.
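
The idea of routing operators over dependencies that also make sense under abstract interpretation can be given a very rough flavour in a few lines of Python. The switch/merge operators, the UNKNOWN abstract value, and the two tiny examples below are illustrative stand-ins, not the operator algebra or the algorithm defined in the paper.

```python
# Rough flavour only: routing operators that also work on an abstract value.
# switch sends a value down one arm of a branch (or both arms, if the
# predicate is unknown); merge joins the arms back together.
UNKNOWN = object()              # abstract "don't know" value


def switch(pred, value):
    """Route `value` to the (true_arm, false_arm) outputs."""
    if pred is UNKNOWN:
        return value, value     # abstractly, either arm may receive it
    return (value, None) if pred else (None, value)


def merge(true_val, false_val):
    """Join the two arms after the branch."""
    if true_val is None:
        return false_val
    if false_val is None:
        return true_val
    return true_val if true_val == false_val else UNKNOWN


# if (p) x = 5 else x = 5 : x is the constant 5 even though p is unknown.
t, f = switch(UNKNOWN, 5)
print(merge(t, f))              # 5

# if (p) x = 5 else x = 6 : x genuinely depends on p.
print(merge(5, 6) is UNKNOWN)   # True
```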


Journal of Parallel and Distributed Computing | 1991

From control flow to dataflow

Micah Beck; Richard Johnson; Keshav Pingali

Are imperative languages tied inseparably to the von Neumann model, or can they be implemented in some natural way on dataflow architectures? In this paper, we show how imperative language programs can be translated into dataflow graphs and executed on a dataflow machine like Monsoon. This translation can exploit both fine-grain and coarse-grain parallelism in imperative language programs. More importantly, we establish a close connection between our work and current research in the imperative languages community on data dependences, control dependences, program dependence graphs, and static single assignment form. These results suggest that dataflow graphs can serve as an executable intermediate representation in parallelizing compilers.
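
To make "executable dataflow graph" concrete, here is a toy token-driven evaluator in Python. The graph, the node names, and the firing loop are simplifications invented for illustration; they are not the translation scheme or the Monsoon execution model described in the paper.

```python
# Toy token-driven evaluation: a node fires as soon as tokens have arrived
# on all of its inputs, with no program counter.  The hypothetical graph
# below computes (a + b) * (a - b).
import operator

# node name -> (function, [input node names]); "a" and "b" are graph inputs.
graph = {
    "sum":  (operator.add, ["a", "b"]),
    "diff": (operator.sub, ["a", "b"]),
    "prod": (operator.mul, ["sum", "diff"]),
}


def run(graph, inputs):
    tokens = dict(inputs)                      # arrived values, by node name
    ready = list(graph)
    while ready:
        fired = False
        for name in list(ready):
            fn, srcs = graph[name]
            if all(s in tokens for s in srcs):  # all input tokens present
                tokens[name] = fn(*(tokens[s] for s in srcs))
                ready.remove(name)
                fired = True
        if not fired:
            raise RuntimeError("graph is stuck (missing input or a cycle)")
    return tokens


print(run(graph, {"a": 7, "b": 3})["prod"])    # (7+3) * (7-3) = 40
```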


Programming Language Design and Implementation | 1999

Control CPR: a branch height reduction optimization for EPIC architectures

Michael S. Schlansker; Scott A. Mahlke; Richard Johnson

Exploiting high degrees of instruction-level parallelism is often hampered by frequent branching. Both exposed branch latency and low branch throughput can restrict parallelism. Control critical path reduction (control CPR) is a compilation technique to address these problems. Control CPR can reduce the dependence height of critical paths through branch operations as well as decrease the number of executed branches. In this paper, we present an approach to control CPR that recognizes sequences of branches using profiling statistics. The control CPR transformation is applied to the predominant path through this sequence. Our approach, its implementation, and experimental results are presented. This work demonstrates that control CPR enhances instruction-level parallelism for a variety of application programs and improves their performance across a range of processors.
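
The core idea, trading a chain of rarely taken exit branches on the hot path for one combined test and re-checking the original branches only off-trace, can be mimicked in a few lines of Python. The conditions, the bypass structure, and the profile assumption (exits are rare) are invented for illustration; the real transformation operates on EPIC machine code, not Python.

```python
# Illustrative only: a chain of rarely taken exit branches versus a version
# with one combined bypass test on the profile-predominant path.
def original(conds):
    """Check each exit branch in turn (long branch chain on every run)."""
    for i, cond in enumerate(conds):
        if cond():
            return f"exit{i}"           # exit branch i taken
    return "fallthrough"                # the predominant (hot) path


def cpr_version(conds):
    """One combined test on the hot path; branch chain only when it fires."""
    if not any(cond() for cond in conds):
        return "fallthrough"            # common case: a single wide test
    for i, cond in enumerate(conds):    # rare case: find which exit fired
        if cond():
            return f"exit{i}"


rare_exit = [lambda: False, lambda: False, lambda: True]
assert original(rare_exit) == cpr_version(rare_exit) == "exit2"
print(cpr_version([lambda: False] * 3))   # "fallthrough"
```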


Languages and Compilers for Parallel Computing | 1991

An Executable Representation of Distance and Direction

Richard Johnson; Wei Li; Keshav Pingali

The dependence flow graph is a novel intermediate representation for optimizing and parallelizing compilers that can be viewed as an executable representation of program dependences. The execution model, called dependence-driven execution, is a generalization of the tagged-token dataflow model that permits imperative updates to memory. The dependence flow graph subsumes other representations such as continuation-passing style [12], data dependence graphs [13], and static single assignment form [8]. In this paper, we show how dependence distance and direction information can be represented in this model using dependence operators. From a functional perspective, these operators can be viewed as functions on streams [4].
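
As a hedged illustration of viewing dependence operators as functions on streams, the Python below models a distance-2 loop-carried dependence as a small FIFO that delays the stream of produced values by two iterations. The loop body, the initial values, and the FIFO encoding are invented for this sketch; they are not the dependence operators defined in the paper.

```python
# Illustrative sketch: a loop-carried dependence of distance d behaves like
# a FIFO of depth d on the stream of produced values -- each iteration
# consumes the value produced d iterations earlier.
from collections import deque


def run_loop(n, d, initial):
    """Models: for i in range(n): a[i] = a[i-d] + 1, with a[-d..-1] given."""
    assert len(initial) == d
    dependence = deque(initial)            # the distance-d dependence "stream"
    produced = []
    for _ in range(n):
        value = dependence.popleft() + 1   # value from iteration i - d
        dependence.append(value)           # value for iteration i + d
        produced.append(value)
    return produced


print(run_loop(6, 2, [0, 10]))             # [1, 11, 2, 12, 3, 13]
```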


International Asia Conference on Informatics in Control, Automation and Robotics | 1997

Achieving high levels of instruction-level parallelism with reduced hardware complexity

Michael S. Schlansker; B. Ramakrishna Rau; Scott A. Mahlke; Vinod Kathail; Richard Johnson; Sadun Anik; Santosh G. Abraham


Archive | 1995

Efficient program analysis using dependence flow graphs

Richard Johnson


Archive | 2000

Automatic design of VLIW and EPIC instruction formats

Shail Aditya; B. Ramakrishna Rau; Richard Johnson


Archive | 1993

Finding Regions Fast: Single Entry Single Exit and Control Regions in Linear Time

Richard Johnson; David Pearson; Keshav Pingali

Collaboration


Dive into Richard Johnson's collaborations.

Top Co-Authors

Keshav Pingali

University of Texas at Austin

Paul Stodghill

United States Department of Agriculture
