Publications


Featured research published by Corina S. Pasareanu.


International Conference on Software Engineering | 2000

Bandera: extracting finite-state models from Java source code

James C. Corbett; Matthew B. Dwyer; John Hatcliff; Shawn Laubach; Corina S. Pasareanu; Robby; Hongjun Zheng

Despite emerging tool support for assertion-checking and testing of object-oriented programs, providing convincing evidence of program correctness remains a difficult challenge. This is especially true for multi-threaded programs. Techniques for reasoning about finite-state systems have developed rapidly over the past decade and have the potential to form the basis of powerful software validation technologies. We have developed the Bandera toolset to harness existing model checking tools and apply them to reason about correctness requirements of Java programs. Bandera provides tool support for defining and managing collections of requirements for a program, for extracting compact finite-state models of the program to enable tractable analysis, and for displaying analysis results to the user through a debugger-like interface. This paper describes and illustrates the use of Bandera's source-level user interface for model checking Java programs.

Finite-state verification techniques, such as model checking, have shown promise as a cost-effective means for finding defects in hardware designs. To date, the application of these techniques to software has been hindered by several obstacles. Chief among these is the problem of constructing a finite-state model that approximates the executable behavior of the software system of interest. Current best practice involves hand-construction of models, which is expensive (prohibitive for all but the smallest systems), prone to errors (which can result in misleading verification results), and difficult to optimize (which is necessary to combat the exponential complexity of verification algorithms). In this paper, we describe an integrated collection of program analysis and transformation components, called Bandera, that enables the automatic extraction of safe, compact finite-state models from program source code. Bandera takes as input Java source code and generates a program model in the input language of one of several existing verification tools; Bandera also maps verifier outputs back to the original source code. We discuss the major components of Bandera and give an overview of how it can be used to model check correctness properties of Java programs.
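To make the setting concrete, here is a minimal, hypothetical example (not taken from the paper) of the kind of multi-threaded Java program a Bandera-style tool targets: after extracting a finite-state model from the source, a model checker would check the assertion over every thread interleaving, where testing can only sample a few schedules.

    // Hypothetical example: a small concurrent Java program whose
    // assertion a Bandera-style tool could verify over all interleavings.
    public class BoundedBuffer {
        private final int[] buf = new int[2];
        private int head = 0, tail = 0, count = 0;

        public synchronized void put(int x) throws InterruptedException {
            while (count == buf.length) wait();      // block while full
            buf[tail] = x;
            tail = (tail + 1) % buf.length;
            count++;
            notifyAll();
        }

        public synchronized int take() throws InterruptedException {
            while (count == 0) wait();               // block while empty
            int x = buf[head];
            head = (head + 1) % buf.length;
            count--;
            notifyAll();
            return x;
        }

        public static void main(String[] args) throws InterruptedException {
            BoundedBuffer b = new BoundedBuffer();
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 4; i++) b.put(i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            for (int i = 0; i < 4; i++) {
                // FIFO delivery: the correctness requirement a model
                // checker would verify for every schedule.
                int v = b.take();
                assert v == i : "FIFO order violated";
            }
            producer.join();
        }
    }

Run with java -ea to enable the assertion; a single run exercises one schedule, whereas model checking the extracted model covers them all.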


Automated Software Engineering | 2002

Assumption generation for software component verification

Dimitra Giannakopoulou; Corina S. Pasareanu; Howard Barringer

Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. The typical approach to verifying properties of software components is to check them for all possible environments. In reality, however, a component is only required to satisfy properties in specific environments. Unless these environments are formally characterized and used during verification (the assume-guarantee paradigm), the results returned by verification can be overly pessimistic. This work defines a framework that brings a new dimension to model checking of software components. When checking a component against a property, our model checking algorithms return one of three results: the component satisfies the property in any environment; the component violates the property in any environment; or our algorithms generate an assumption that characterizes exactly those environments in which the component satisfies its required property. Our approach has been implemented in the LTSA tool and has been applied to the analysis of a NASA application.
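The generated assumption plugs into the standard assume-guarantee proof rule that this line of work builds on (a textbook formulation, not a quote from the paper): if component M satisfies property P under assumption A, and environment E satisfies A, then their composition satisfies P.

    % Assume-guarantee proof rule (standard formulation).
    % <A> M <P> reads: whenever M runs in an environment obeying A, P holds.
    \[
    \frac{\langle A \rangle\, M\, \langle P \rangle \qquad
          \langle \mathit{true} \rangle\, E\, \langle A \rangle}
         {\langle \mathit{true} \rangle\, M \parallel E\, \langle P \rangle}
    \]

The paper's contribution is generating the weakest such A automatically, so the rule applies to exactly those environments in which the component can be used safely.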


Automated Software Engineering | 2003

Automated environment generation for software model checking

Oksana Tkachuk; Matthew B. Dwyer; Corina S. Pasareanu

A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open, since their behavior depends on patterns of invocation of system components and on values defined outside the system but referenced within it. Whether reasoning about the behavior of whole programs or of program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments for Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera environment generator (BEG), which we describe along with our experience using BEG to reason about properties of several nontrivial concurrent Java programs.
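As a rough illustration (the component API and all names here are invented), a generated environment is essentially a nondeterministic driver over the component's public interface. Under a model checker each choice point becomes a branch to explore exhaustively; the Random below merely stands in for that nondeterminism.

    import java.util.Random;

    // Hypothetical sketch of a generated environment: a driver that
    // closes an open component by invoking its public API
    // nondeterministically.
    public class EnvironmentDriver {
        interface Component {      // invented API, for illustration only
            void open();
            void write(int data);
            void close();
        }

        static void drive(Component c, int steps, Random choose) {
            for (int i = 0; i < steps; i++) {
                switch (choose.nextInt(3)) { // model checker: explore all 3
                    case 0 -> c.open();
                    case 1 -> c.write(choose.nextInt(100)); // arbitrary data
                    case 2 -> c.close();
                }
            }
        }

        public static void main(String[] args) {
            Component logging = new Component() {
                public void open()          { System.out.println("open"); }
                public void write(int data) { System.out.println("write " + data); }
                public void close()         { System.out.println("close"); }
            };
            drive(logging, 5, new Random(42)); // one sampled environment behavior
        }
    }

BEG's refinement, per the abstract, is to constrain such a universal driver with formally specified assumptions and with abstractions of actual environment code.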


International Workshop on Model Checking Software | 1999

Assume-Guarantee Model Checking of Software: A Comparative Case Study

Corina S. Pasareanu; Matthew B. Dwyer; Michael Huth

A variety of assume-guarantee model checking approaches have been proposed in the literature. In this paper, we describe several possible implementations of those approaches for checking properties of software components (units) using the SPIN and SMV model checkers. Model checking software units requires, in general, the definition of an environment that establishes the run-time context in which the unit executes. We describe how implementations of such environments can be synthesized from specifications of assumed environment behavior written in LTL. These environments can then be used to check properties, written in LTL or ACTL, that the software unit must guarantee. We report on several experiments that provide evidence about the relative performance of the different assume-guarantee approaches.
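As a concrete (invented) instance of this setup, an LTL assumption A about the environment and a guarantee P for the unit might be:

    % Illustrative assume-guarantee pair in LTL (invented, not from the study).
    % A: the environment eventually acknowledges every request.
    % P: the unit eventually completes every request.
    \[
    A = \Box(\mathit{req} \rightarrow \Diamond \mathit{ack})
    \qquad
    P = \Box(\mathit{req} \rightarrow \Diamond \mathit{done})
    \]

An environment process synthesized from A closes the unit, and the closed system is then checked against P.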


Theoretical Computer Science | 2005

Combining test case generation and runtime verification

Cyrille Artho; Howard Barringer; Allen Goldberg; Klaus Havelund; Sarfraz Khurshid; Michael R. Lowry; Corina S. Pasareanu; Grigore Rosu; Koushik Sen; Willem Visser; Richard Washington

Software testing is typically an ad hoc process in which human testers manually write test inputs and descriptions of expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports results on a framework to further automate this process. The framework combines automated test case generation, based on systematically exploring the program's input domain, with runtime verification, where execution traces are monitored and verified against properties expressed in temporal logic. Capabilities also exist for analyzing traces for concurrency errors, such as deadlocks and data races. The input domain of the program is explored using a model checker extended with symbolic execution. Properties are formulated in an expressive temporal logic. We advocate a methodology that automatically generates properties specific to each input, rather than formulating properties uniformly true for all inputs. The paper describes an application of the technology to a NASA rover controller.
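A toy illustration of the "properties specific to each input" idea (the program under test and its property are invented here): the generator enumerates a bounded input domain, and for each input an oracle derived from that input is checked at runtime.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch: bounded-exhaustive input generation paired
    // with a per-input runtime check, echoing the paper's combination
    // of systematic input exploration and trace verification.
    public class GenerateAndMonitor {
        // Unit under test (invented): index of key in a, or -1.
        static int find(int[] a, int key) {
            for (int i = 0; i < a.length; i++) if (a[i] == key) return i;
            return -1;
        }

        // Property generated from the input itself: if key occurs in a,
        // the result must point at an occurrence; otherwise it is -1.
        static void check(int[] a, int key) {
            int r = find(a, key);
            boolean present = false;
            for (int x : a) present |= (x == key);
            if (present) assert r >= 0 && a[r] == key : "missed occurrence";
            else assert r == -1 : "phantom occurrence";
        }

        // Enumerate all arrays up to length 2 over {0,1}: a tiny
        // stand-in for symbolic exploration of the input domain.
        public static void main(String[] args) {
            List<int[]> inputs = new ArrayList<>();
            inputs.add(new int[] {});
            for (int x = 0; x <= 1; x++) {
                inputs.add(new int[] { x });
                for (int y = 0; y <= 1; y++) inputs.add(new int[] { x, y });
            }
            for (int[] a : inputs)
                for (int key = 0; key <= 1; key++) check(a, key);
            System.out.println("all generated inputs passed");
        }
    }

Run with java -ea; in the paper's framework, symbolic execution replaces the brute-force enumeration and temporal-logic monitors replace the inline assertions.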


Formal Methods | 2004

Experimental Evaluation of Verification and Validation Tools on Martian Rover Software

Guillaume Brat; Doron Drusinsky; Dimitra Giannakopoulou; Allen Goldberg; Klaus Havelund; Michael R. Lowry; Corina S. Pasareanu; Arnaud Venet; Willem Visser; Richard Washington

We report on a study to determine the maturity of different verification and validation (V&V) technologies applied to a representative example of NASA flight software. The study consisted of a controlled experiment in which three technologies (static analysis, runtime analysis and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover controller. What makes this study unique is that it is, to the best of our knowledge, the first controlled experiment to compare formal-methods-based tools to testing on a realistic industrial-size example, with an emphasis on collecting as much data as possible on the performance of the tools and the participants. The paper includes a description of the Rover code that was analyzed and the tools used, as well as a detailed description of the experimental setup and the results. Due to the complexity of setting up the experiment, our results cannot be generalized, but we believe they can still serve as a valuable point of reference for future studies of this kind. The study confirmed our belief that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore, the results of the experiment inspired a novel framework for testing the next generation of the Rover.


International Conference on Software Engineering | 2013

Reliability analysis in Symbolic PathFinder

Antonio Filieri; Corina S. Pasareanu; Willem Visser

Software reliability analysis tackles the problem of predicting the failure probability of software. Most current approaches base reliability analysis on architectural abstractions that are useful at early stages of design but not directly applicable to source code. In this paper we propose a general methodology that exploits symbolic execution of source code to extract failure and success paths, which are then used for probabilistic reliability assessment against relevant usage scenarios. Under the assumption of finite and countable input domains, we provide an efficient implementation based on Symbolic PathFinder that supports the analysis of sequential and parallel programs, even with structured data types, at the desired level of confidence. The tool has been validated on NASA prototypes and other test cases, showing promising applicability.
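Concretely, the core computation can be sketched as counting inputs over path conditions: with a finite input domain D and, as a simplifying assumption here, a uniform usage profile, the probability of success is the fraction of inputs that drive execution down a success path.

    % Reliability from symbolic execution (uniform usage profile assumed).
    % PC_i is the path condition of the i-th path ending in success;
    % #(PC_i) counts the inputs in D satisfying it.
    \[
    \mathit{Rel} \;=\; \Pr[\text{success}]
      \;=\; \sum_{i \,\in\, \text{success paths}} \frac{\#(PC_i)}{|D|}
    \]

Because symbolic execution partitions D into disjoint path conditions, the failure probability is obtained the same way from the failure paths, and the two sum to at most 1.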


International Conference on Software Engineering | 2007

Formal Software Analysis: Emerging Trends in Software Model Checking

Matthew B. Dwyer; John Hatcliff; Robby; Corina S. Pasareanu; Willem Visser

The study of methodologies and techniques to produce correct software has been active for four decades. During this period, researchers have developed and investigated a wide variety of approaches, but techniques based on mathematical modeling of program behavior have been a particular focus, since they offer the promise of both finding errors and assuring important program properties. The past fifteen years have seen a marked and accelerating shift towards algorithmic formal reasoning about program behavior; we refer to such techniques as formal software analyses. In this paper, we define formal software analyses as having several important properties that distinguish them from other forms of software analysis. We describe three foundational formal software analyses, but focus on the adaptation of model checking to reason about software. We review emerging trends in software model checking and identify future directions that promise to significantly improve its cost-effectiveness.


Foundations of Software Engineering | 1998

Filter-based model checking of partial systems

Matthew B. Dwyer; Corina S. Pasareanu

Recent years have seen dramatic growth in the application of model checking techniques to the validation and verification of correctness properties of hardware and, more recently, software systems. Most of this work has been aimed at reasoning about properties of complete systems. This paper describes an automatable approach for building finite-state models of partially defined software systems that are amenable to model checking with existing tools. It enables existing model checkers to be applied to system components while taking into account assumptions about the behavior of the environment in which the components will execute. We illustrate the approach by validating and verifying properties of a reusable parameterized programming framework.
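As a sketch of the idea (the interface and the assumption are invented for illustration), an environment assumption can be rendered as a "filter" composed with the partial system's model, pruning call sequences the environment is assumed never to produce:

    // Hypothetical filter encoding the assumption "the environment never
    // calls write() before open()". Composed with the component's model,
    // it restricts exploration to environment-consistent behaviors.
    public class OpenBeforeWriteFilter {
        private boolean opened = false;

        // Advance the filter on an observed action; returns false when
        // the action is excluded by the assumption (and would be pruned).
        public boolean step(String action) {
            switch (action) {
                case "open":  opened = true;  return true;
                case "close": opened = false; return true;
                case "write": return opened;
                default:      return true;
            }
        }

        public static void main(String[] args) {
            OpenBeforeWriteFilter f = new OpenBeforeWriteFilter();
            System.out.println(f.step("write")); // false: pruned
            System.out.println(f.step("open"));  // true
            System.out.println(f.step("write")); // true: now allowed
        }
    }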


Formal Methods | 2005

Verifying Time Partitioning in the DEOS Scheduling Kernel

John Penix; Willem Visser; Seungjoon Park; Corina S. Pasareanu; Eric Engstrom; Aaron Larson; Nicholas Weininger

This paper describes an experiment to use the Spin model checking system to support automated verification of time partitioning in the Honeywell DEOS real-time scheduling kernel. The goal of the experiment was to investigate whether model checking with minimal abstraction could be used to find a subtle implementation error that was originally discovered and fixed during the standard formal review process. The experiment involved translating a core slice of the DEOS scheduling kernel from C++ into Promela, constructing an abstract “test-driver” environment, and carefully introducing several abstractions into the system to support verification. Attempted verification of several properties related to time partitioning led to the rediscovery of the known error in the implementation. The case study indicated several limitations in existing tools for model checking of software. The most difficult task in the original DEOS experiment was constructing an adequate environment to close the system for verification; the fidelity of this environment was of crucial importance for achieving meaningful results during model checking. In this paper, we describe the initial environment modeling effort and a follow-on experiment with semi-automated environment generation methods. Program abstraction techniques were also critical for enabling verification of DEOS. We describe an implementation scheme for predicate abstraction, an approach based on abstract interpretation, which was developed to support DEOS verification.
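To illustrate the predicate abstraction scheme mentioned above (a generic textbook-style example, not DEOS code): a concrete integer state variable is replaced by the truth value of a predicate over it, and transitions whose effect on the predicate is not determined by the predicate alone are over-approximated nondeterministically.

    import java.util.Random;

    // Generic predicate abstraction sketch: track only the predicate
    // p == (budget > 0) instead of the concrete integer budget.
    public class PredicateAbstractionSketch {
        // Concrete transition: consume a tick, replenishing at zero.
        static int concreteStep(int budget) {
            return budget > 0 ? budget - 1 : 10;
        }

        // Abstract transition over p alone. From p == true the concrete
        // step yields budget - 1, whose sign p does not determine, so
        // the abstraction returns a nondeterministic result.
        static boolean abstractStep(boolean p, Random nondet) {
            return p ? nondet.nextBoolean()   // budget - 1 may or may not be > 0
                     : true;                  // replenished budget is > 0
        }

        public static void main(String[] args) {
            Random nondet = new Random(1);
            boolean p = true;                 // abstract initial state
            for (int i = 0; i < 5; i++) {
                p = abstractStep(p, nondet);
                System.out.println("p = " + p); // 2 abstract states vs. 11 concrete
            }
        }
    }

A model checker enumerates both outcomes of each nondeterministic choice, so the abstract system's reachable states soundly over-approximate the concrete ones.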

Collaboration


An overview of Corina S. Pasareanu's collaborations.

Top Co-Authors

Matthew B. Dwyer
University of Nebraska–Lincoln

Sarfraz Khurshid
University of Texas at Austin

Quoc-Sang Phan
Queen Mary University of London