Publications


Featured research published by Suzette Person.


International Symposium on Software Testing and Analysis | 2008

Combining unit-level symbolic execution and system-level concrete execution for testing NASA software

Corina S. Păsăreanu; Peter C. Mehlitz; David H. Bushnell; Karen Gundy-Burlet; Michael R. Lowry; Suzette Person; Mark Pape

We describe an approach to testing complex safety-critical software that combines unit-level symbolic execution and system-level concrete execution for generating test cases that satisfy user-specified testing criteria. We have developed Symbolic Java PathFinder, a symbolic execution framework that implements a non-standard bytecode interpreter on top of the Java PathFinder model checking tool. The framework propagates the symbolic information via attributes associated with the program data. Furthermore, we use two techniques that leverage system-level concrete program executions to gather information about a unit's input to improve the precision of the unit-level test case generation. We applied our approach to testing a prototype NASA flight software component. Our analysis helped discover a serious bug that resulted in design changes to the software. Although we give our presentation in the context of a NASA project, we believe that our work is relevant for other critical systems that require thorough testing.
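
The key mechanism, propagating symbolic information as attributes attached to program data, can be pictured with a small hand-rolled sketch. This is not the Symbolic (Java) PathFinder API; the SymInt class and the string-based path condition below are invented purely for illustration of the idea.

```java
// Minimal, hand-rolled illustration of attaching a symbolic "attribute" to a
// value and accumulating a path condition at each branch. This is NOT the
// Symbolic PathFinder API; all names here are invented for exposition only.
import java.util.ArrayList;
import java.util.List;

public class SymbolicSketch {

    // A value carries both a concrete int and a symbolic expression (attribute).
    static final class SymInt {
        final int concrete;     // concrete value observed at system level
        final String symbolic;  // symbolic expression, e.g. "x", "-(x)"
        SymInt(int concrete, String symbolic) {
            this.concrete = concrete;
            this.symbolic = symbolic;
        }
    }

    // Path condition: conjunction of the branch constraints taken so far.
    static final List<String> pathCondition = new ArrayList<>();

    // Unit under test, interpreted "symbolically": each branch records a constraint.
    static SymInt abs(SymInt x) {
        if (x.concrete >= 0) {
            pathCondition.add(x.symbolic + " >= 0");
            return x;
        } else {
            pathCondition.add(x.symbolic + " < 0");
            return new SymInt(-x.concrete, "-(" + x.symbolic + ")");
        }
    }

    public static void main(String[] args) {
        // A system-level concrete execution supplies the input -5; unit-level
        // symbolic execution treats it as the symbol "x".
        SymInt result = abs(new SymInt(-5, "x"));
        System.out.println("result    = " + result.concrete + " / " + result.symbolic);
        System.out.println("path cond = " + String.join(" && ", pathCondition));
        // Negating constraints in the path condition and solving them is what
        // would drive the unit down its other paths to generate new tests.
    }
}
```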


Foundations of Software Engineering | 2008

Differential symbolic execution

Suzette Person; Matthew B. Dwyer; Sebastian G. Elbaum; Corina S. Păsăreanu

Detecting and characterizing the effects of software changes is a fundamental component of software maintenance. Version differencing information can be used to perform version merging, infer change characteristics, produce program documentation, and guide program re-validation. Existing techniques for characterizing code changes, however, are imprecise, leading to unnecessary maintenance efforts. In this paper, we introduce a novel extension and application of symbolic execution techniques that computes a precise behavioral characterization of a program change. This technique, which we call differential symbolic execution (DSE), exploits the fact that program versions are largely similar to reduce cost and improve the quality of analysis results. We define the foundational concepts of DSE, describe cost-effective tool support for DSE, and illustrate its potential benefit through an exploratory study that considers version histories of two Java code bases.
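
To make "behavioral characterization of a change" concrete, consider two versions of a small method: inputs on which they agree are unchanged behavior, and inputs on which they disagree characterize the change. DSE derives this partition symbolically from summaries; the sketch below, with an invented pair of versions, only brute-forces a small input range to illustrate what the behavioral delta means.

```java
// Toy illustration of a "behavioral difference" between two program versions.
// DSE characterizes the difference symbolically via summaries; this sketch
// merely brute-forces a small input range to show the idea.
public class BehavioralDiffSketch {

    // Old version: clamps negatives to zero.
    static int v1(int x) {
        return (x < 0) ? 0 : x;
    }

    // New version: the change also clamps values above 100.
    static int v2(int x) {
        if (x < 0) return 0;
        return (x > 100) ? 100 : x;
    }

    public static void main(String[] args) {
        // Inputs on which the versions agree are "unchanged behavior";
        // inputs on which they disagree characterize the change (here: x > 100).
        for (int x = -5; x <= 110; x += 5) {
            if (v1(x) != v2(x)) {
                System.out.println("differs at x=" + x + ": v1=" + v1(x) + ", v2=" + v2(x));
            }
        }
    }
}
```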


Programming Language Design and Implementation | 2011

Directed incremental symbolic execution

Suzette Person; Guowei Yang; Neha Rungta; Sarfraz Khurshid

The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.
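
Very roughly, a DiSE-style analysis (a) derives an impact set of program locations from the static diff and (b) uses it to focus symbolic execution on paths that can reach those locations. The sketch below, over an invented toy path representation, shows only the filtering idea; the actual technique prunes during exploration rather than filtering completed paths afterwards.

```java
// Rough skeleton of the DiSE idea over a toy program representation:
// (1) an impact set of locations derived from a static diff,
// (2) symbolic paths, of which only those reaching an impacted location
//     contribute path conditions affected by the change.
// All types, locations, and path conditions here are invented for illustration.
import java.util.List;
import java.util.Set;

public class DiseSketch {

    // A toy "path": the branch locations it visits, plus its path condition.
    record Path(List<Integer> visitedLocations, String pathCondition) {}

    public static void main(String[] args) {
        // Step 1: impact set from a static diff plus slicing (invented data).
        Set<Integer> impacted = Set.of(42, 57);

        // Step 2: candidate paths produced by symbolic exploration.
        List<Path> allPaths = List.of(
            new Path(List.of(10, 20, 42), "x > 0 && y < 5"),
            new Path(List.of(10, 30),     "x <= 0"),
            new Path(List.of(10, 20, 57), "x > 0 && y >= 5"));

        // Step 3: keep only path conditions affected by the change.
        for (Path p : allPaths) {
            boolean touchesChange =
                p.visitedLocations().stream().anyMatch(impacted::contains);
            if (touchesChange) {
                System.out.println("impacted path condition: " + p.pathCondition());
            }
        }
    }
}
```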


International Conference on Software Engineering | 2007

Parallel Randomized State-Space Search

Matthew B. Dwyer; Sebastian G. Elbaum; Suzette Person; Rahul Purandare

Model checkers search the space of possible program behaviors to detect errors and to demonstrate their absence. Despite major advances in reduction and optimization techniques, state-space search can still become cost-prohibitive as program size and complexity increase. In this paper, we present a technique for dramatically improving the cost-effectiveness of state-space search techniques for error detection using parallelism. Our approach can be composed with all of the reduction and optimization techniques we are aware of to amplify their benefits. It was developed based on insights gained from performing a large empirical study of the cost-effectiveness of randomization techniques in state-space analysis. We explain those insights and our technique, and then show through a focused empirical study that our technique speeds up analysis by factors ranging from 2 to over 1000 as compared to traditional modes of state-space search, and does so with relatively small numbers of parallel processors.
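
The core of the approach is easy to sketch: run the same randomized search under several different seeds in parallel and stop as soon as any worker reports an error. The stand-in "search" below is a toy random walk, not JPF or any real model checker.

```java
// Minimal sketch of parallel randomized search: N workers run the same
// randomized search with different seeds; the first to find an error wins.
// The search body below is a toy stand-in, not an actual model checker.
import java.util.Random;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelRandomSearchSketch {

    // Toy "search": randomly walk a state space and report whether an
    // error state (here, state 123456) is hit within a step budget.
    static boolean randomizedSearch(long seed) {
        Random rnd = new Random(seed);
        for (int step = 0; step < 1_000_000; step++) {
            int state = rnd.nextInt(1_000_000);  // pretend successor choice
            if (state == 123_456) return true;   // pretend error state
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        CompletionService<Boolean> cs = new ExecutorCompletionService<>(pool);

        // Same search, different seeds.
        for (int i = 0; i < workers; i++) {
            final long seed = i;
            cs.submit(() -> randomizedSearch(seed));
        }

        // Take results as they finish; stop everything on the first error found.
        for (int i = 0; i < workers; i++) {
            if (cs.take().get()) {
                System.out.println("error found; cancelling remaining workers");
                break;
            }
        }
        pool.shutdownNow();
    }
}
```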


Foundations of Software Engineering | 2006

Controlling factors in evaluating path-sensitive error detection techniques

Matthew B. Dwyer; Suzette Person; Sebastian G. Elbaum

Recent advances in static program analysis have made it possible to detect errors in applications that have been thoroughly tested and are in widespread use. The ability to find errors that have eluded traditional validation methods is due to the development and combination of sophisticated algorithmic techniques that are embedded in the implementations of analysis tools. Evaluating new analysis techniques is typically performed by running an analysis tool on a collection of subject programs, perhaps enabling and disabling a given technique in different runs. While seemingly sensible, this approach runs the risk of attributing improvements in the cost-effectiveness of the analysis to the technique under consideration, when those improvements may actually be due to details of analysis tool implementations that are uncontrolled during evaluation. In this paper, we focus on the specific class of path-sensitive error detection techniques and identify several factors that can significantly influence the cost of analysis. We show, through careful empirical studies, that the influence of these factors is sufficiently large that, if left uncontrolled, they may lead researchers to improperly attribute improvements in analysis cost and effectiveness. We make several recommendations as to how the influence of these factors can be mitigated when evaluating techniques.


International Conference on Software Maintenance | 2012

A change impact analysis to characterize evolving program behaviors

Neha Rungta; Suzette Person; Joshua Branchaud

Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case-study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression testing related tasks. We also discuss how DiSE and iDiSE can be configured for debugging: finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.
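
One ingredient of going interprocedural is propagating impact along the call graph so that callers of changed methods are also considered. The sketch below shows only that propagation step over an invented call graph; iDiSE itself additionally combines static and dynamic calling-context information and drives symbolic execution with the result.

```java
// Rough sketch of interprocedural impact propagation: starting from changed
// methods, follow call-graph edges to estimate which methods may exhibit
// impacted behavior. The call graph and method names are invented.
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class InterproceduralImpactSketch {

    public static void main(String[] args) {
        // Toy call graph: caller -> callees.
        Map<String, List<String>> calls = Map.of(
            "main",     List.of("parse", "process"),
            "process",  List.of("validate", "store"),
            "parse",    List.of("validate"),
            "validate", List.of(),
            "store",    List.of());

        Set<String> changed = Set.of("validate");

        // Propagate impact to transitive callers of changed methods.
        Set<String> impacted = new HashSet<>(changed);
        boolean grew = true;
        while (grew) {
            grew = false;
            for (Map.Entry<String, List<String>> e : calls.entrySet()) {
                if (!impacted.contains(e.getKey())
                        && e.getValue().stream().anyMatch(impacted::contains)) {
                    impacted.add(e.getKey());
                    grew = true;
                }
            }
        }
        System.out.println("methods with potentially impacted behavior: " + impacted);
    }
}
```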


International Symposium on Software Testing and Analysis | 2014

Feedback-driven dynamic invariant discovery

Lingming Zhang; Guowei Yang; Neha Rungta; Suzette Person; Sarfraz Khurshid

Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique that leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fix-point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher quality invariants compared to the initial set of candidate invariants in a small number of iterations.
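
The feedback loop itself is simple to outline. In the skeleton below, InvariantDetector and SymbolicExecutor are stand-in interfaces invented for the sketch (they are not Daikon's or any symbolic executor's real API); the loop alternates test generation and invariant inference until neither the test set nor the candidate invariants change.

```java
// Skeleton of an iDiscovery-style feedback loop. The invariant detector and
// the symbolic executor are represented by stand-in interfaces; no real tool
// APIs are used here.
import java.util.HashSet;
import java.util.Set;

public class InvariantFeedbackLoopSketch {

    interface InvariantDetector {       // stand-in for Daikon
        Set<String> infer(Set<String> tests);
    }
    interface SymbolicExecutor {        // stand-in for a symbolic-execution engine
        Set<String> generateTests(Set<String> assertedInvariants);
    }

    static Set<String> discover(InvariantDetector detector, SymbolicExecutor symExe,
                                Set<String> initialTests) {
        Set<String> tests = new HashSet<>(initialTests);
        Set<String> invariants = detector.infer(tests);
        while (true) {
            // Turn candidate invariants into assertions and symbolically execute
            // the instrumented program to obtain new tests.
            Set<String> newTests = symExe.generateTests(invariants);
            if (!tests.addAll(newTests)) break;       // fix-point: no new tests
            Set<String> refined = detector.infer(tests);
            if (refined.equals(invariants)) break;    // fix-point: invariants stable
            invariants = refined;
        }
        return invariants;
    }

    public static void main(String[] args) {
        // Trivial stand-ins, just to show the control flow terminating.
        InvariantDetector detector = tests -> Set.of("x >= 0");
        SymbolicExecutor symExe = invs -> Set.of("test1");
        System.out.println(discover(detector, symExe, new HashSet<>(Set.of("test0"))));
    }
}
```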


International Workshop on Model Checking Software | 2013

Regression Verification Using Impact Summaries

John D. Backes; Suzette Person; Neha Rungta; Oksana Tkachuk

Regression verification techniques are used to prove equivalence of closely related program versions. Existing regression verification techniques leverage the similarities between program versions to help improve analysis scalability by using abstraction and decomposition techniques. These techniques are sound but not complete. In this work, we propose an alternative technique to improve scalability of regression verification that leverages change impact information to partition program execution behaviors. Program behaviors in each version are partitioned into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Our approach uses a combination of static analysis and symbolic execution to generate summaries of program behaviors impacted by the differences. We show in this work that checking equivalence of behaviors in two program versions reduces to checking equivalence of just the impacted behaviors. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution; furthermore, our approach can be used with existing approaches to better leverage the similarities between program versions and improve analysis scalability. We evaluate our technique on a set of sequential C artifacts and present preliminary results.
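
The reduction at the heart of the approach can be pictured with behavior summaries tagged as impacted or unimpacted: only the impacted ones need an equivalence check. In the sketch below the Behavior record, the example summaries, and the textual comparison are all invented; a real checker would discharge the impacted pairs with a constraint solver.

```java
// Sketch of the reduction: to check equivalence of two versions, compare only
// the behaviors in the change impact; behaviors outside it are shared by
// construction. Data and the textual comparison are invented for illustration.
import java.util.List;

public class ImpactSummarySketch {

    record Behavior(String pathCondition, String effect, boolean impacted) {}

    static boolean equivalent(List<Behavior> oldV, List<Behavior> newV) {
        // Unimpacted behaviors are identical between versions and need no check.
        List<Behavior> oldImpacted = oldV.stream().filter(Behavior::impacted).toList();
        List<Behavior> newImpacted = newV.stream().filter(Behavior::impacted).toList();
        // A real checker would ask a solver whether each impacted summary pair
        // agrees; here we just compare the summaries textually.
        return oldImpacted.equals(newImpacted);
    }

    public static void main(String[] args) {
        List<Behavior> oldV = List.of(
            new Behavior("x <= 0", "return 0", false),
            new Behavior("x > 0",  "return x", true));
        List<Behavior> newV = List.of(
            new Behavior("x <= 0", "return 0", false),
            new Behavior("x > 0",  "return x + 1", true));  // changed effect
        System.out.println("versions equivalent? " + equivalent(oldV, newV));
    }
}
```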


International Conference on Software Engineering | 2014

Property differencing for incremental checking

Guowei Yang; Sarfraz Khurshid; Suzette Person; Neha Rungta

This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.
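
The property-differencing step itself amounts to asking which properties are genuinely new or changed in the new version; properties already established for unchanged behaviors need not be re-checked. A toy set-difference sketch with invented property strings:

```java
// Toy illustration of property differencing: given old and new property sets,
// only the properties added (or changed) in the new version need re-checking
// against previously verified behaviors. The property strings are invented.
import java.util.HashSet;
import java.util.Set;

public class PropertyDiffSketch {
    public static void main(String[] args) {
        Set<String> oldProps = Set.of("ret >= 0", "buf != null");
        Set<String> newProps = Set.of("ret >= 0", "buf != null", "len <= buf.length");

        Set<String> toCheck = new HashSet<>(newProps);
        toCheck.removeAll(oldProps);  // properties that are genuinely new

        System.out.println("properties requiring (re)checking: " + toCheck);
    }
}
```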


Automated Software Engineering | 2013

Detecting and characterizing semantic inconsistencies in ported code

Baishakhi Ray; Miryung Kim; Suzette Person; Neha Rungta

Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
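
One of the reported error categories, inconsistent identifier renaming, is easy to picture: if the ported patch maps the same reference identifier to different target identifiers in different adapted statements, the port deserves a second look. The check below is a toy illustration with invented data, not the SPA analysis.

```java
// Toy check for one porting-error category: inconsistent identifier renaming.
// If the ported patch maps the same source identifier to different target
// identifiers in different statements, flag it. Illustration only, not SPA.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RenamingConsistencySketch {
    public static void main(String[] args) {
        // Pairs of (identifier in reference patch, identifier in ported patch),
        // one pair per adapted statement. Invented example data.
        String[][] renamings = {
            {"dev", "device"},
            {"dev", "device"},
            {"dev", "dev"},       // inconsistent: the rename was missed here
            {"buf", "buffer"}
        };

        Map<String, Set<String>> observed = new HashMap<>();
        for (String[] pair : renamings) {
            observed.computeIfAbsent(pair[0], k -> new HashSet<>()).add(pair[1]);
        }
        observed.forEach((src, targets) -> {
            if (targets.size() > 1) {
                System.out.println("inconsistent renaming of '" + src + "': " + targets);
            }
        });
    }
}
```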

Collaboration


Dive into Suzette Person's collaborations.

Top Co-Authors

Matthew B. Dwyer, University of Nebraska–Lincoln
Leen Kiat Soh, University of Nebraska–Lincoln
Sebastian G. Elbaum, University of Nebraska–Lincoln
Ashok Samal, University of Nebraska–Lincoln
Guowei Yang, Texas State University
Gwen Nugent, University of Nebraska–Lincoln
Jeff Lang, University of Nebraska–Lincoln
Sarfraz Khurshid, University of Texas at Austin