Publication


Featured research published by Susan Horwitz.


ACM Transactions on Programming Languages and Systems | 1990

Interprocedural slicing using dependence graphs

Susan Horwitz; Thomas W. Reps; David W. Binkley

The notion of a <italic>program slice</italic>, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point <italic>p</italic> and a variable <italic>x</italic>; the slice consists of all statements of the program that might affect the value of <italic>x</italic> at point <italic>p</italic>. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a <italic>system dependence graph</italic>, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point <italic>p</italic> and an <italic>arbitrary</italic> variable, a slice must be taken with respect to a variable that is <italic>defined</italic> or <italic>used</italic> at <italic>p</italic>.) The chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent <italic>transitive</italic> dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. 
The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.
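The core of any slicer is a reverse-reachability walk over dependence edges. The sketch below shows that idea in its simplest, intraprocedural form; the system dependence graph of the paper additionally distinguishes edge kinds and uses summary edges so the walk respects calling context, which this toy version omits. The graph encoding is invented for illustration.

```python
from collections import deque

def backward_slice(dep_edges, criterion):
    """Slice = all nodes that reach `criterion` via dependence edges.

    dep_edges: dict mapping a node to the set of nodes it depends on
    (data and control dependences merged, for simplicity)."""
    slice_nodes = {criterion}
    work = deque([criterion])
    while work:
        n = work.popleft()
        for m in dep_edges.get(n, ()):
            if m not in slice_nodes:
                slice_nodes.add(m)
                work.append(m)
    return slice_nodes

# Toy program:
#   1: x = 1
#   2: y = 2
#   3: z = x + 1   (depends on statement 1)
#   4: print(z)    (depends on statement 3)
deps = {3: {1}, 4: {3}}
print(sorted(backward_slice(deps, 4)))  # [1, 3, 4]; statement 2 is sliced away
```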


Symposium on Principles of Programming Languages | 1995

Precise interprocedural dataflow analysis via graph reachability

Thomas W. Reps; Susan Horwitz; Mooly Sagiv

The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
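The transformation represents each dataflow function as edges in an "exploded" graph whose nodes pair a program point with a fact (plus a special fact 0 that lets statements generate new facts); the analysis then reduces to reachability. The following is a toy, intraprocedural sketch of that encoding only — the paper's framework additionally matches call and return edges so that only interprocedurally valid paths count. The program points, facts, and edge set here are invented for illustration.

```python
from collections import deque

def solve_reachability(edges, entry):
    """edges: dict (point, fact) -> set of successor (point, fact)."""
    reached = {entry}
    work = deque([entry])
    while work:
        node = work.popleft()
        for succ in edges.get(node, ()):
            if succ not in reached:
                reached.add(succ)
                work.append(succ)
    return reached

# Possibly-uninitialized variables for:   p1: x = 5;   p2: y = x
# The special fact "0" generates facts; a killed fact simply has
# no outgoing edge across the statement that kills it.
edges = {
    ("entry", "0"): {("p1", "0"), ("p1", "x"), ("p1", "y")},
    # p1: x = 5  -- kills x (no edge out of ("p1","x")), preserves y
    ("p1", "0"): {("p2", "0")},
    ("p1", "y"): {("p2", "y")},
    # p2: y = x  -- y is possibly uninit afterwards only if x was
    ("p2", "0"): {("exit", "0")},
    ("p2", "x"): {("exit", "y")},
}
facts_at_exit = {f for (p, f) in solve_reachability(edges, ("entry", "0"))
                 if p == "exit" and f != "0"}
print(sorted(facts_at_exit))  # [] -- nothing is possibly uninitialized at exit
```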


ACM Transactions on Programming Languages and Systems | 1989

Integrating noninterfering versions of programs

Susan Horwitz; Jan F. Prins; Thomas W. Reps

The need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time-consuming task to integrate programs by hand. To date, the only available tools for assisting with program integration are variants of <italic>text-based</italic> differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated. This paper concerns the design of a <italic>semantics-based</italic> tool for automatically integrating program versions. The main contribution of the paper is an algorithm that takes as input three programs <italic>A</italic>, <italic>B</italic>, and <italic>Base</italic>, where <italic>A</italic> and <italic>B</italic> are two variants of <italic>Base</italic>. Whenever the changes made to <italic>Base</italic> to create <italic>A</italic> and <italic>B</italic> do not “interfere” (in a sense defined in the paper), the algorithm produces a program <italic>M</italic> that integrates <italic>A</italic> and <italic>B</italic>. The algorithm is predicated on the assumption that differences in the <italic>behavior</italic> of the variant programs from that of <italic>Base</italic>, rather than differences in the <italic>text</italic>, are significant and must be preserved in <italic>M</italic>. Although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with <italic>Base</italic>. To determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the <italic>dependence graphs</italic> that have been used previously in vectorizing and parallelizing compilers. 
The algorithm also makes use of the notion of a <italic>program slice</italic> to find just those statements of a program that determine the values of potentially affected variables. The program-integration problem has not been formalized previously. It should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops.
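The essence of the approach is to treat a program point as "changed" in a variant when its backward slice differs from the corresponding slice in Base, then merge the changed computations of both variants with the computations all three programs preserve. The sketch below caricatures that idea over precomputed slices supplied as plain data; the real algorithm merges dependence graphs and reconstitutes a program, and its interference test compares the affected slices themselves, which is only crudely approximated here by intersecting the changed point sets. All inputs are invented toy data.

```python
def integrate(slices, base, a, b):
    """slices: dict program-name -> {point: frozenset of slice points}.
    Returns (merged point set, interference flag) -- a toy stand-in for
    the paper's dependence-graph-based integration algorithm."""
    changed_a = {p for p in slices[a] if slices[a][p] != slices[base].get(p)}
    changed_b = {p for p in slices[b] if slices[b][p] != slices[base].get(p)}
    preserved = {p for p in slices[base]
                 if slices[a].get(p) == slices[base][p] == slices[b].get(p)}
    # Crude interference check: both variants changed the same point.
    interferes = bool(changed_a & changed_b)
    return changed_a | changed_b | preserved, interferes

slices = {
    "Base": {1: frozenset({1}), 2: frozenset({2})},
    "A":    {1: frozenset({1}), 2: frozenset({1, 2})},              # A changed point 2
    "B":    {1: frozenset({1}), 2: frozenset({2}), 3: frozenset({3})},  # B added point 3
}
merged, interferes = integrate(slices, "Base", "A", "B")
print(sorted(merged), interferes)  # [1, 2, 3] False -- the changes merge cleanly
```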


Static Analysis Symposium | 2001

Using Slicing to Identify Duplication in Source Code

Raghavan Komondoor; Susan Horwitz

Programs often have a lot of duplicated code, which makes both understanding and maintenance more difficult. This problem can be alleviated by detecting duplicated code, extracting it into a separate new procedure, and replacing all the clones (the instances of the duplicated code) by calls to the new procedure. This paper describes the design and initial implementation of a tool that finds clones and displays them to the programmer. The novel aspect of our approach is the use of program dependence graphs (PDGs) and program slicing to find isomorphic PDG subgraphs that represent clones. The key benefits of this approach are that our tool can find non-contiguous clones (clones whose components do not occur as contiguous text in the program), clones in which matching statements have been reordered, and clones that are intertwined with each other. Furthermore, the clones that are found are likely to be meaningful computations, and thus good candidates for extraction.
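The "isomorphic PDG subgraphs" are found by seeding on pairs of matching nodes and growing both slices in lock step, extending a pair only when the dependence predecessors on each side match. The sketch below shows that growth step over a toy graph where "matching" is reduced to equal statement kinds; the actual tool matches on syntactic structure and handles the many-predecessor case far more carefully. The node numbering and `kind` labels are invented.

```python
def grow_clone(deps, kind, a, b):
    """Grow a clone pair from seed nodes a, b by lock-step backward
    slicing: add a predecessor pair whenever the statement kinds match
    (a toy stand-in for PDG-subgraph isomorphism checking)."""
    pair = {(a, b)}
    work = [(a, b)]
    while work:
        x, y = work.pop()
        for px in deps.get(x, ()):
            for py in deps.get(y, ()):
                if kind[px] == kind[py] and (px, py) not in pair:
                    pair.add((px, py))
                    work.append((px, py))
    return pair

# Two structurally identical fragments:
#   1: x = 0    2: x = x + 1    3: use(x)
#  11: y = 0   12: y = y + 1   13: use(y)
deps = {2: {1}, 3: {2}, 12: {11}, 13: {12}}
kind = {1: "init", 2: "inc", 3: "use", 11: "init", 12: "inc", 13: "use"}
print(sorted(grow_clone(deps, kind, 3, 13)))  # [(1, 11), (2, 12), (3, 13)]
```

Because the growth follows dependence edges rather than adjacent text, the matched statements need not be contiguous or in the same order in the source, which is exactly what lets the tool find non-contiguous and reordered clones.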


Programming Language Design and Implementation | 1989

Dependence analysis for pointer variables

Susan Horwitz; Phil Pfeiffer; Thomas W. Reps

Our concern is how to determine data dependencies between program constructs in programming languages with pointer variables. We are particularly interested in computing data dependencies for languages that manipulate heap-allocated storage, such as Lisp and Pascal. We have defined a family of algorithms that compute safe approximations to the flow, output, and anti-dependencies of a program written in such a language. Our algorithms account for destructive updates to fields of a structure and thus are not limited to the cases where all structures are trees or acyclic graphs; they are applicable to programs that build cyclic structures. Our technique extends an analysis method described by Jones and Muchnick that determines an approximation to the actual layouts of memory that can arise at each program point during execution. We extend the domain used in their abstract interpretation so that the (abstract) memory locations are labeled by the program points that set their contents. Data dependencies are then determined from these memory layouts according to the component labels found along the access paths that must be traversed during execution to evaluate the program's statements and predicates. For structured programming constructs, the technique can be extended to distinguish between loop-carried and loop-independent dependencies, as well as to determine lower bounds on minimum distances for loop-carried dependencies.
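The labeling idea — locations record the program point that last wrote them, and dependences are read off those labels at each use — can be shown in a drastically simplified, heap-free form. The sketch below tracks only named locations with no aliasing or abstract memory layouts, which is precisely the hard part the paper addresses; it illustrates the label bookkeeping and nothing more. The statement encoding is invented.

```python
def run(stmts):
    """Tiny labeled-store interpreter.  Each statement is
    (point, target, sources); the store maps a location name to the
    point that last defined it, and flow dependences are collected as
    (defining point, using point) pairs read off those labels."""
    store, flow_deps = {}, set()
    for point, target, sources in stmts:
        for loc in sources:
            if loc in store:
                flow_deps.add((store[loc], point))
        store[target] = point
    return flow_deps

# p1: x = new;  p2: x.f = x;  p3: y = x.f
# ("x" and "x.f" are treated as independent named locations here --
# the paper's abstract memory layouts are what make this sound in
# the presence of aliasing and cyclic structures)
prog = [(1, "x", []), (2, "x.f", ["x"]), (3, "y", ["x.f", "x"])]
print(sorted(run(prog)))  # [(1, 2), (1, 3), (2, 3)]
```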


International Conference on Software Engineering | 1992

The use of program dependence graphs in software engineering

Susan Horwitz; Thomas W. Reps

This paper describes a language-independent program representation, the program dependence graph, and discusses how program dependence graphs, together with operations such as program slicing, can provide the basis for powerful programming tools that address important software-engineering problems, such as understanding what an existing program does and how it works, understanding the differences between several versions of a program, and creating new programs by combining pieces of old programs. The paper primarily surveys work in this area that has been carried out at the University of Wisconsin during the past five years.


Symposium on Principles of Programming Languages | 1993

Incremental program testing using program dependence graphs

Samuel Bates; Susan Horwitz

Program dependence graphs have been proposed for use in optimizing, vectorizing, and parallelizing compilers, and for program integration. This paper proposes their use as the basis for incremental program testing when using test data adequacy criteria. Test data adequacy is commonly used to provide some confidence that a particular test suite does a reasonable job of testing a program. Incremental program testing using test data adequacy criteria addresses the problem of testing a modified program given an adequate test suite for the original program. Ideally, one would like to create an adequate test suite for the modified program that reuses as many files from the old test suite as possible. Furthermore, one would like to know, for every file that is in both the old and the new test suites, whether the program components exercised by that file have been affected by the program modification; if no components have been affected, then it is not necessary to rerun the program using that file. In this paper we define adequacy criteria based on the program dependence graph, and propose techniques based on program slicing to identify components of the modified program that can be tested using files from the old test suite, and components that have been affected by the modification. This information can be used to reduce the time required to create new test files, and to avoid unproductive retesting of unaffected components. Although exact identification of the components listed above is, in general, undecidable, we demonstrate that our techniques provide safe approximations.
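The test-selection side of the technique reduces to a simple rule: a component is affected when its slice changed, and a test file must be rerun only if it exercises some affected component. The sketch below states that rule over toy data, with slices precomputed and given as plain sets; the paper's contribution is computing the affected set safely from the dependence graphs, which is simply assumed here.

```python
def select_tests(exercises, old_slices, new_slices):
    """exercises: test-file name -> set of components it covers.
    A component is affected if its slice changed across the
    modification; rerun only tests touching an affected component
    (a safe approximation, never an exact answer)."""
    affected = {c for c in new_slices
                if new_slices[c] != old_slices.get(c)}
    rerun = {t for t, comps in exercises.items() if comps & affected}
    reusable = set(exercises) - rerun
    return rerun, reusable

exercises = {"t1": {"c1"}, "t2": {"c2", "c3"}}
old = {"c1": frozenset({"c1"}), "c2": frozenset({"c2"}), "c3": frozenset({"c3"})}
new = dict(old, c3=frozenset({"c2", "c3"}))  # the modification affects c3
print(select_tests(exercises, old, new))  # ({'t2'}, {'t1'})
```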


Programming Language Design and Implementation | 1990

Identifying the semantic and textual differences between two versions of a program

Susan Horwitz

Text-based file comparators (<italic>e.g.</italic>, the Unix utility <italic>diff</italic>), are very general tools that can be applied to arbitrary files. However, using such tools to compare <italic>programs</italic> can be unsatisfactory because their <italic>only</italic> notion of change is based on program <italic>text</italic> rather than program <italic>behavior</italic>. This paper describes a technique for comparing two versions of a program, determining which program components represent changes, and classifying each changed component as representing either a <italic>semantic</italic> or a <italic>textual</italic> change.
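One way to read the classification: a component whose behavior approximation (here, its backward slice) changed is a semantic change, while one whose text changed but whose slice did not is merely textual. The sketch below applies that rule to toy data with slices given up front; the paper derives them from a dependence-graph comparison, and the component names and strings here are invented.

```python
def classify(old_text, new_text, old_slice, new_slice):
    """Classify each changed component of the new version as
    'semantic' (its slice changed) or 'textual' (only its text did)."""
    result = {}
    for c in new_text:
        if new_slice.get(c) != old_slice.get(c):
            result[c] = "semantic"
        elif new_text[c] != old_text.get(c):
            result[c] = "textual"
    return result

old_text = {"s1": "x = a + b", "s2": "y = 0"}
new_text = {"s1": "x = b + a", "s2": "y = x"}  # s1 reordered, s2 rewritten
old_slice = {"s1": frozenset({"s1"}), "s2": frozenset({"s2"})}
new_slice = {"s1": frozenset({"s1"}), "s2": frozenset({"s1", "s2"})}
print(classify(old_text, new_text, old_slice, new_slice))
# {'s1': 'textual', 's2': 'semantic'}
```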


Foundations of Software Engineering | 1994

Speeding up slicing

Thomas W. Reps; Susan Horwitz; Mooly Sagiv; Genevieve Rosay

Program slicing is a fundamental operation for many software engineering tools. Currently, the most efficient algorithm for interprocedural slicing is one that uses a program representation called the system dependence graph. This paper defines a new algorithm for slicing with system dependence graphs that is asymptotically faster than the previous one. A preliminary experimental study indicates that the new algorithm is also significantly faster in practice, providing roughly a 6-fold speedup on examples of 348 to 757 lines.
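SDG-based slicers answer a slice query with a two-pass reachability traversal: pass 1 stays in the caller context (skipping parameter-out edges, with summary edges standing in for called procedures), and pass 2 descends into callees without re-ascending (skipping parameter-in and call edges). The sketch below shows that traversal over a hand-built toy SDG with the summary edges simply given; the paper's speedup concerns how those summary edges are computed in the first place, which is not shown here.

```python
from collections import deque

def sdg_slice(edges, criterion):
    """Two-pass backward slice over a toy SDG.  edges: node -> list of
    (predecessor, kind), kind in {"flow", "summary", "param-in",
    "param-out"}.  Skipping param-out edges in pass 1 and param-in
    edges in pass 2 is what keeps the slice calling-context-sensitive."""
    def reach(starts, skip):
        seen, work = set(starts), deque(starts)
        while work:
            n = work.popleft()
            for pred, kind in edges.get(n, ()):
                if kind not in skip and pred not in seen:
                    seen.add(pred)
                    work.append(pred)
        return seen
    return reach(reach({criterion}, {"param-out"}), {"param-in"})

# main: x = 1; z = 2; y = P(x); use(y)     with  P(f) { return f }
edges = {
    "use_y": [("y", "flow")],
    "y":     [("a_out", "flow")],                            # y = actual-out
    "a_out": [("f_out", "param-out"), ("a_in", "summary")],
    "a_in":  [("x", "flow")],                                # actual-in = x
    "f_in":  [("a_in", "param-in")],
    "res":   [("f_in", "flow")],
    "f_out": [("res", "flow")],
}
result = sdg_slice(edges, "use_y")
print(sorted(result))  # the irrelevant statement z = 2 never appears
```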


Theoretical Computer Science | 1996

Precise interprocedural dataflow analysis with applications to constant propagation

Mooly Sagiv; Thomas W. Reps; Susan Horwitz

This paper concerns interprocedural dataflow-analysis problems in which the dataflow information at a program point is represented by an environment (i.e., a mapping from symbols to values), and the effect of a program operation is represented by a distributive environment transformer. We present an efficient dynamic-programming algorithm that produces precise solutions.
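A distributive environment transformer maps each output symbol to something computable pointwise from the input environment — for copy-constant propagation, a constant or a copy of another symbol's value. The sketch below applies such transformers in sequence over one path; the paper's algorithm instead composes and merges transformers across all interprocedural paths via dynamic programming, which this toy omits. The encoding of transformers is invented.

```python
BOTTOM = "unknown"

def apply_env(transformer, env):
    """Apply one distributive environment transformer.
    transformer: var -> ("const", c) | ("copy", src); variables not
    mentioned pass through unchanged.  env: var -> value."""
    out = dict(env)
    for var, (kind, arg) in transformer.items():
        out[var] = arg if kind == "const" else env.get(arg, BOTTOM)
    return out

# x = 3;  y = x;  z = y   -- constants propagate through the copies
t1 = {"x": ("const", 3)}
t2 = {"y": ("copy", "x")}
t3 = {"z": ("copy", "y")}
env = {}
for t in (t1, t2, t3):
    env = apply_env(t, env)
print(env)  # {'x': 3, 'y': 3, 'z': 3}
```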

Collaboration


Dive into Susan Horwitz's collaboration.

Top Co-Authors

Thomas W. Reps
University of Wisconsin-Madison

Suan Hsi Yong
University of Wisconsin-Madison

Raghavan Komondoor
Indian Institute of Science

David W. Binkley
Loyola University Maryland

Jan F. Prins
University of North Carolina at Chapel Hill

Marc Shapiro
University of Wisconsin-Madison

Min Aung
University of Wisconsin-Madison

Rich Joiner
University of Wisconsin-Madison

Wuu Yang
University of Wisconsin-Madison