
Publication


Featured research published by Thomas W. Reps.


ACM Transactions on Programming Languages and Systems | 1990

Interprocedural slicing using dependence graphs

Susan Horwitz; Thomas W. Reps; David W. Binkley

The notion of a program slice, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point p and a variable x; the slice consists of all statements of the program that might affect the value of x at point p. This paper concerns the problem of interprocedural slicing: generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a system dependence graph, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point p and an arbitrary variable, a slice must be taken with respect to a variable that is defined or used at p.) The chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent transitive dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.
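
As an illustration of the basic slicing operation (not the paper's interprocedural algorithm), the sketch below treats a backward slice as backward reachability over a dependence graph; the statement names and dependences are hypothetical. The system-dependence-graph algorithm described above additionally introduces transitive-dependence edges and a two-pass traversal so that calling context is respected.

```python
# A minimal, intraprocedural sketch: a dependence graph is an adjacency map
# from a statement to the statements it depends on, and a backward slice is
# backward reachability from the slicing criterion.

def backward_slice(dependences, criterion):
    """dependences[s] = set of statements that s is data/control dependent on."""
    slice_set = set()
    worklist = [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt in slice_set:
            continue
        slice_set.add(stmt)
        worklist.extend(dependences.get(stmt, ()))
    return slice_set

# Hypothetical statements and dependences.
deps = {
    "print(x)": {"x = a + b"},
    "x = a + b": {"a = 1", "b = 2"},
    "y = 5": set(),
}
print(sorted(backward_slice(deps, "print(x)")))
# ['a = 1', 'b = 2', 'print(x)', 'x = a + b']
```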


Symposium on Principles of Programming Languages | 1995

Precise interprocedural dataflow analysis via graph reachability

Thomas W. Reps; Susan Horwitz; Mooly Sagiv

The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes, but is not limited to, the classical separable problems (also known as “gen/kill” or “bit-vector” problems) such as reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
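
A minimal sketch of the underlying reduction, using a simplified encoding of my own: nodes of an "exploded" graph pair a program point with a dataflow fact (with a distinguished fact "0" meaning "reachable along every path"), and the facts that hold at a point are those whose nodes are reachable from the entry node. The paper's algorithm adds the summary-edge machinery needed to keep interprocedural paths context-sensitive; the points, facts, and edges below are hypothetical.

```python
from collections import deque

def reachable_facts(exploded_edges, entry):
    """exploded_edges maps (point, fact) to an iterable of successor (point, fact) pairs."""
    seen = {(entry, "0")}
    queue = deque(seen)
    while queue:
        node = queue.popleft()
        for succ in exploded_edges.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    # Collect, for each program point, the facts that hold there.
    result = {}
    for point, fact in seen:
        if fact != "0":
            result.setdefault(point, set()).add(fact)
    return result

# Hypothetical "possibly-uninitialized variables" instance, showing only the
# fact for a variable z that is never assigned.
edges = {
    ("entry", "0"): [("p1", "0"), ("p1", "z")],
    ("p1", "0"): [("p2", "0")],
    ("p1", "z"): [("p2", "z")],
    ("p2", "0"): [("exit", "0")],
    ("p2", "z"): [("exit", "z")],
}
print(reachable_facts(edges, "entry"))   # z is possibly uninitialized at p1, p2, exit
```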


Communications of the ACM | 1981

The Cornell program synthesizer: a syntax-directed programming environment

Tim Teitelbaum; Thomas W. Reps

Programs are not text; they are hierarchical compositions of computational structures and should be edited, executed, and debugged in an environment that consistently acknowledges and reinforces this viewpoint. The Cornell Program Synthesizer demands a structural perspective at all stages of program development. Its separate features are unified by a common foundation: a grammar for the programming language. Its full-screen derivation-tree editor and syntax-directed diagnostic interpreter combine to make the Synthesizer a powerful and responsive interactive programming tool.


ACM Transactions on Programming Languages and Systems | 1989

Integrating noninterfering versions of programs

Susan Horwitz; Jan F. Prins; Thomas W. Reps

The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time-consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated. This paper concerns the design of a semantics-based tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs A, B, and Base, where A and B are two variants of Base. Whenever the changes made to Base to create A and B do not “interfere” (in a sense defined in the paper), the algorithm produces a program M that integrates A and B. The algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of Base, rather than differences in the text, are significant and must be preserved in M. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with Base. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers. The algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables. The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
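
The following schematic sketch shows the shape of such an integration procedure, under the simplifying assumption that each program is represented only as a dependence map from statement ids to the statements they depend on; the paper works over full dependence-graph representations and also reconstitutes program text from the merged result. The statement vocabulary and the exact interference test here are illustrative.

```python
def backward_slice(deps, v):
    """Backward reachability in a dependence map deps[s] = statements that s depends on."""
    out, work = set(), [v]
    while work:
        s = work.pop()
        if s not in out:
            out.add(s)
            work.extend(deps.get(s, ()))
    return frozenset(out)

def integrate(base, a, b):
    """base, a, b: dependence maps over a shared vocabulary of statement ids."""
    vertices = set(base) | set(a) | set(b)
    # Affected points: vertices whose slice in a variant differs from Base.
    aff_a = {v for v in a if backward_slice(a, v) != backward_slice(base, v)}
    aff_b = {v for v in b if backward_slice(b, v) != backward_slice(base, v)}
    preserved = vertices - aff_a - aff_b

    merged = {}
    def add_slices(prog, vs):
        for v in vs:
            for s in backward_slice(prog, v):
                merged.setdefault(s, set()).update(prog.get(s, ()))
    add_slices(a, aff_a)
    add_slices(b, aff_b)
    add_slices(base, preserved)

    # Interference check: the merged graph must reproduce the slices that
    # justified including each vertex; otherwise report interference.
    ok = (all(backward_slice(merged, v) == backward_slice(a, v) for v in aff_a)
          and all(backward_slice(merged, v) == backward_slice(b, v) for v in aff_b)
          and all(backward_slice(merged, v) == backward_slice(base, v) for v in preserved))
    return merged if ok else None
```

Returning None when the merged result cannot reproduce both variants' changed behavior and Base's unchanged behavior mirrors the paper's notion of interference.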


Software Engineering Symposium on Practical Software Development Environments | 1984

The synthesizer generator

Thomas W. Reps; Tim Teitelbaum

Programs are hierarchical compositions of formulae satisfying structural and extra-structural relationships. A program editor can use knowledge of such relationships to detect and provide immediate feedback about violations of them. The Synthesizer Generator is a tool for creating such editors from language descriptions. An editor designer specifies the desired relationships and the feedback to be given when they are violated, as well as a user interface; from the specification, the Synthesizer Generator creates a full-screen editor for manipulating programs in the language.


Journal of Algorithms | 1996

An Incremental Algorithm for a Generalization of the Shortest-Path Problem

G. Ramalingam; Thomas W. Reps

The grammar problem, a generalization of the single-source shortest-path problem introduced by D. E. Knuth (Inform. Process. Lett. 6(1) (1977), 1–5), is to compute the minimum-cost derivation of a terminal string from each nonterminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsumes the problem of finding optimal hyperpaths in directed hypergraphs (under varying optimization criteria) that has received attention recently. In this paper we present an incremental algorithm for a version of the grammar problem. As a special case of this algorithm we obtain an efficient incremental algorithm for the single-source shortest-path problem with positive edge lengths. The aspect of our work that distinguishes it from other work on the dynamic shortest-path problem is its ability to handle “multiple heterogeneous modifications”: between updates, the input graph is allowed to be restructured by an arbitrary mixture of edge insertions, edge deletions, and edge-length changes.
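
For orientation, here is a sketch of the non-incremental grammar problem that the incremental algorithm builds on: Knuth's Dijkstra-like procedure for monotone, "superior" cost functions (each production's cost is at least as large as each of its arguments). The production encoding is my own illustration, not code from the paper.

```python
import heapq

def knuth_grammar_problem(productions):
    """productions: list of (lhs, rhs_nonterminals, cost_fn), where cost_fn maps
    the minimum costs of the rhs nonterminals to the cost of using this production."""
    best = {}
    nonterminals = {lhs for lhs, _, _ in productions}
    heap = []

    def relax():
        # Enqueue every production whose right-hand side is fully solved.
        for lhs, rhs, cost_fn in productions:
            if lhs not in best and all(x in best for x in rhs):
                heapq.heappush(heap, (cost_fn(*(best[x] for x in rhs)), lhs))

    relax()
    while heap and len(best) < len(nonterminals):
        val, lhs = heapq.heappop(heap)
        if lhs not in best:          # the first extraction fixes the minimum cost
            best[lhs] = val
            relax()
    return best

# Single-source shortest paths as a special case: each edge (u, v) of length w
# becomes a production for v with cost dist(u) + w; the source costs 0.
prods = [
    ("s", (), lambda: 0.0),
    ("a", ("s",), lambda ds: ds + 2.0),
    ("b", ("s",), lambda ds: ds + 5.0),
    ("b", ("a",), lambda da: da + 1.0),
]
print(knuth_grammar_problem(prods))   # {'s': 0.0, 'a': 2.0, 'b': 3.0}
```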


Compiler Construction | 2004

Analyzing Memory Accesses in x86 Executables

Gogul Balakrishnan; Thomas W. Reps

This paper concerns static-analysis algorithms for analyzing x86 executables. The aim of the work is to recover intermediate representations that are similar to those that can be created for a program written in a high-level language. Our goal is to perform this task for programs such as plugins, mobile code, worms, and virus-infected code. For such programs, symbol-table and debugging information is either entirely absent, or cannot be relied upon if present; hence, the technique described in the paper makes no use of symbol-table/debugging information. Instead, an analysis is carried out to recover information about the contents of memory locations and how they are manipulated by the executable.
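
A toy illustration of the kind of information recovered, using an abstract domain simplified for this sketch: a "value set" maps each memory region (the globals, or a procedure's activation record) to an interval of possible offsets. The paper's analysis uses a richer strided-interval domain and defines an abstract transformer for each x86 instruction; the register names and offsets below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int
    def add(self, c):                 # abstract effect of adding a constant
        return Interval(self.lo + c, self.hi + c)
    def join(self, other):            # least upper bound of two intervals
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def join_value_sets(vs1, vs2):
    """Pointwise join of two region -> Interval maps (used at control-flow merges)."""
    out = dict(vs1)
    for region, itv in vs2.items():
        out[region] = out[region].join(itv) if region in out else itv
    return out

# Hypothetical: esp may point to offsets -12..-4 of main's activation record;
# "lea eax, [esp+8]" then yields offsets shifted by 8 in the same region.
esp = {"AR_main": Interval(-12, -4)}
eax = {region: itv.add(8) for region, itv in esp.items()}
print(eax)                            # {'AR_main': Interval(lo=-4, hi=4)}
print(join_value_sets(esp, eax))      # {'AR_main': Interval(lo=-12, hi=4)}
```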


Programming Language Design and Implementation | 1989

Dependence analysis for pointer variables

Susan Horwitz; Phil Pfeiffer; Thomas W. Reps

Our concern is how to determine data dependencies between program constructs in programming languages with pointer variables. We are particularly interested in computing data dependencies for languages that manipulate heap-allocated storage, such as Lisp and Pascal. We have defined a family of algorithms that compute safe approximations to the flow, output, and anti-dependencies of a program written in such a language. Our algorithms account for destructive updates to fields of a structure and thus are not limited to the cases where all structures are trees or acyclic graphs; they are applicable to programs that build cyclic structures. Our technique extends an analysis method described by Jones and Muchnick that determines an approximation to the actual layouts of memory that can arise at each program point during execution. We extend the domain used in their abstract interpretation so that the (abstract) memory locations are labeled by the program points that set their contents. Data dependencies are then determined from these memory layouts according to the component labels found along the access paths that must be traversed during execution to evaluate the program's statements and predicates. For structured programming constructs, the technique can be extended to distinguish between loop-carried and loop-independent dependencies, as well as to determine lower bounds on minimum distances for loop-carried dependencies.
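
A small sketch of the labeling idea: each abstract location records the program points that may have last written it, and a statement that reads a location becomes flow-dependent on those points. The abstract-location naming scheme and the strong-versus-weak-update rule here are simplifications of mine, not the paper's abstract interpretation.

```python
from collections import defaultdict

def flow_dependences(statements):
    """statements: ordered (point, writes, reads) triples, where writes and reads
    are sets of abstract locations such as "x" or "x.cdr"."""
    last_writers = defaultdict(set)   # abstract location -> points that may have set it
    deps = defaultdict(set)           # point -> points it is flow-dependent on
    for point, writes, reads in statements:
        for loc in reads:
            deps[point] |= last_writers[loc]
        for loc in writes:
            if loc.isidentifier():                  # a named variable: strong update
                last_writers[loc] = {point}
            else:                                   # a heap field path: weak update
                last_writers[loc] = last_writers[loc] | {point}
    return dict(deps)

# Hypothetical heap-manipulating fragment:
prog = [
    ("p1", {"x"}, set()),             # x = cons(...)
    ("p2", {"x.cdr"}, {"x"}),         # x.cdr = x   (builds a cyclic structure)
    ("p3", set(), {"x", "x.cdr"}),    # a use of x and x.cdr
]
print(flow_dependences(prog))         # p2 depends on p1; p3 depends on p1 and p2
```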


ACM Transactions on Programming Languages and Systems | 1983

Incremental Context-Dependent Analysis for Language-Based Editors

Thomas W. Reps; Tim Teitelbaum; Alan J. Demers

Knowledge of a programming language's grammar allows language-based editors to enforce syntactic correctness at all times during development by restricting editing operations to legitimate modifications of the program's context-free derivation tree; however, not all language constraints can be enforced in this way because not all features can be described by the context-free formalism. Attribute grammars permit context-dependent language features to be expressed in a modular, declarative fashion and thus are a good basis for specifying language-based editors. Such editors represent programs as attributed trees, which are modified by operations such as subtree pruning and grafting. Incremental analysis is performed by updating attribute values after every modification. This paper discusses how updating can be carried out and presents several algorithms for the task, including one that is asymptotically optimal in time.
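
A minimal change-propagation sketch of the incremental-updating idea (a simplification; the paper's algorithms, including the asymptotically optimal one, schedule re-evaluation using the attribute-dependence structure of the tree). The attribute names, rules, and edit below are hypothetical.

```python
def propagate(values, rules, dependents, invalidated):
    """values: attribute -> current value; rules: attribute -> function(values) -> value;
    dependents: attribute -> attributes that read it; invalidated: attributes touched by the edit."""
    worklist = set(invalidated)
    while worklist:
        attr = worklist.pop()
        new_val = rules[attr](values)
        if new_val != values.get(attr):        # only changed values propagate further
            values[attr] = new_val
            worklist |= set(dependents.get(attr, ()))
    return values

# Hypothetical attribution: a declaration's type flows to its uses; an edit
# changes the declared type of x from "int" to "float".
rules = {
    "x.decl": lambda v: "float",
    "x.use1.type": lambda v: v["x.decl"],
    "x.use2.type": lambda v: v["x.use1.type"],
}
deps = {"x.decl": ["x.use1.type"], "x.use1.type": ["x.use2.type"]}
vals = {"x.decl": "int", "x.use1.type": "int", "x.use2.type": "int"}
print(propagate(vals, rules, deps, {"x.decl"}))   # all three attributes become "float"
```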


International Conference on Software Engineering | 1992

The use of program dependence graphs in software engineering

Susan Horwitz; Thomas W. Reps

This paper describes a language-independent program representation, the program dependence graph, and discusses how program dependence graphs, together with operations such as program slicing, can provide the basis for powerful programming tools that address important software-engineering problems, such as understanding what an existing program does and how it works, understanding the differences between several versions of a program, and creating new programs by combining pieces of old programs. The paper primarily surveys work in this area that has been carried out at the University of Wisconsin during the past five years.

Collaboration


Dive into Thomas W. Reps's collaborations.

Top Co-Authors

Somesh Jha, University of Wisconsin-Madison
Junghee Lim, University of Wisconsin-Madison
Nicholas Kidd, University of Wisconsin-Madison
Alexey Loginov, University of Wisconsin-Madison