Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oksana Tkachuk is active.

Publication


Featured research published by Oksana Tkachuk.


Automated Software Engineering | 2003

Automated environment generation for software model checking

Oksana Tkachuk; Matthew B. Dwyer; Corina S. Pasareanu

A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open, since their behavior depends on patterns of invocation of system components and on values defined outside the system but referenced within it. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments for Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG), which we describe along with our experience using BEG to reason about properties of several nontrivial concurrent Java programs.
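
The environment such a generator produces can be pictured as a driver that exercises the unit's public interface under the stated assumptions. The sketch below is only an illustration of that idea, not output of BEG: the Account class and the use of JPF's Verify API for nondeterministic choice are assumptions made for this example.

```java
import gov.nasa.jpf.vm.Verify;

// Hypothetical unit under analysis (not from the paper).
class Account {
    private int balance;
    public synchronized void deposit(int n)  { balance += n; }
    public synchronized void withdraw(int n) { balance -= n; }
}

// Illustrative environment driver of the kind an environment generator
// might emit: a small number of environment threads invoke the unit's
// public methods in nondeterministically chosen orders, so the model
// checker explores every interleaving the environment assumption allows.
public class AccountEnv {
    public static void main(String[] args) {
        final Account acct = new Account();
        for (int t = 0; t < 2; t++) {                 // two environment threads
            new Thread(() -> {
                // Verify.getInt(lo, hi) makes the model checker branch
                // over every value in [lo, hi].
                switch (Verify.getInt(0, 1)) {
                    case 0: acct.deposit(1);  break;
                    case 1: acct.withdraw(1); break;
                }
            }).start();
        }
    }
}
```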


Foundations of Software Engineering | 2003

Adapting side effects analysis for modular program model checking

Oksana Tkachuk; Matthew B. Dwyer

There is a widely held belief that whole-program analysis is intractable for large, complex software systems, and there can be little doubt that this is true for program analyses based on model checking. Model checking selected program components that comprise a cohesive unit, however, can be an effective way of uncovering subtle coding errors, especially for components of multi-threaded programs. In this setting, one of the chief problems is how to safely approximate the behavior of the rest of the application as it relates to the unit being analyzed. Non-unit application components are collectively referred to as the environment. In this paper, we describe how points-to and side-effects analyses can be adapted to support generation of summaries of environment behavior that can be reified into Java code using special modeling primitives. The resulting abstract models of the environment can be combined with the code of the unit and then model checked against unit properties. We present our analysis framework, illustrate its flexibility in generating several types of models, and present experience that provides evidence of the scalability of the approach.
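
To make "reifying a side-effects summary into Java code" concrete, here is a hand-written sketch under assumed names (SharedState and runEnvironmentStep are illustrative, and JPF's Verify API stands in for the paper's modeling primitives): the summary records only that the environment may write one field, so the stub performs that write nondeterministically and nothing else.

```java
import gov.nasa.jpf.vm.Verify;

// Field of the unit that the side-effects analysis reports the
// environment may write (names are illustrative, not from the paper).
class SharedState {
    static boolean shutdownRequested;
}

// Stub standing in for the rest of the application: the summary only
// says "may write SharedState.shutdownRequested", so the reified model
// writes a nondeterministic value (or skips the write) and abstracts
// away everything else the environment does.
class EnvironmentStub {
    static void runEnvironmentStep() {
        if (Verify.getBoolean()) {
            SharedState.shutdownRequested = Verify.getBoolean();
        }
    }
}
```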


Automated Software Engineering | 2004

Analyzing interaction orderings with model checking

Matthew B. Dwyer; Robby; Oksana Tkachuk; Willem Visser

Human-computer interaction (HCI) systems control an ongoing interaction between end-users and computer-based systems. For software-intensive systems, a graphical user interface (GUI) is often employed for enhanced usability. Traditional approaches to validating the GUI aspects of HCI systems involve prototyping and live-subject testing. These approaches are limited in their ability to cover the set of possible human-computer interactions that a system may allow, since patterns of interaction may be long running and have large numbers of alternatives. In this paper, we propose a static analysis that is capable of reasoning about user-interaction properties of the GUI portions of HCI applications written in Java using modern GUI frameworks such as Swing. Our approach consists of partitioning an HCI application into three parts: the Swing library, the GUI implementation (i.e., code that interacts directly with Swing), and the underlying application. We develop models of each of these parts that preserve behavior relevant to interaction ordering. We describe how these models are generated and how we have customized a model checking framework to efficiently analyze their combination.
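
The "GUI implementation" layer in this partitioning is ordinary listener code whose updates to widget state determine which interaction orderings are possible. The fragment below is a made-up example (ConnectPanel and its buttons are not from the paper) of the kind of code whose interaction-ordering behavior such an analysis would preserve.

```java
import javax.swing.JButton;
import java.awt.event.ActionEvent;

// Illustrative GUI-implementation fragment: "Send" is only enabled once
// "Connect" has been clicked, so an interaction-ordering property such as
// "transmit() is never reached before a Connect click" depends entirely
// on how the listeners update widget state.
class ConnectPanel {
    final JButton connect = new JButton("Connect");
    final JButton send    = new JButton("Send");

    ConnectPanel() {
        send.setEnabled(false);
        connect.addActionListener((ActionEvent e) -> send.setEnabled(true));
        send.addActionListener((ActionEvent e) -> transmit());
    }

    void transmit() { /* underlying application logic */ }
}
```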


ACM SIGSOFT Software Engineering Notes | 2012

Symbolic quantitative information flow

Quoc-Sang Phan; Pasquale Malacaria; Oksana Tkachuk; Corina S. Păsăreanu

Quantitative Information Flow (QIF) is a powerful approach to quantifying leaks of confidential information in a software system. Here we present a novel method that precisely quantifies information leaks. In order to mitigate the state-space explosion problem, we propose a symbolic representation of data and a general SMT-based framework to systematically explore the state space. Symbolic execution fits well with our framework, so we implement a method of QIF analysis employing symbolic execution. We develop our method as a prototype tool that can perform QIF analysis for a software system developed in Java. The tool is built on top of Java PathFinder, an open source model checking platform, and is the first tool in the field to support information-theoretic QIF analysis.
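
For deterministic programs, the information-theoretic leakage such an analysis reports is bounded by log2 of the number of feasible distinct outputs. The toy program below (PinCheck is invented for illustration, not taken from the paper) shows the idea: symbolic execution over the secret finds two reachable outputs, so the leak is log2(2) = 1 bit.

```java
// Toy leaky program used only to illustrate the QIF measure.
class PinCheck {
    // In a symbolic-execution-based QIF analysis the secret would be
    // marked symbolic; counting the distinct feasible return values
    // (here: true, false) gives the channel capacity of the program.
    static boolean check(int secretPin, int guess) {
        return secretPin == guess;   // the observable output
    }

    public static void main(String[] args) {
        // One concrete run; the analysis itself would explore all paths.
        System.out.println(check(1234, 1111));   // leaks 1 bit about the PIN
    }
}
```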


International Workshop on Model Checking Software | 2013

Regression Verification Using Impact Summaries

John D. Backes; Suzette Person; Neha Rungta; Oksana Tkachuk

Regression verification techniques are used to prove equivalence of closely related program versions. Existing regression verification techniques leverage the similarities between program versions to help improve analysis scalability by using abstraction and decomposition techniques. These techniques are sound but not complete. In this work, we propose an alternative technique to improve scalability of regression verification that leverages change impact information to partition program execution behaviors. Program behaviors in each version are partitioned into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Our approach uses a combination of static analysis and symbolic execution to generate summaries of program behaviors impacted by the differences. We show in this work that checking equivalence of behaviors in two program versions reduces to checking equivalence of just the impacted behaviors. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution; furthermore, our approach can be used with existing approaches to better leverage the similarities between program versions and improve analysis scalability. We evaluate our technique on a set of sequential C artifacts and present preliminary results.
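
The partition into impacted and unimpacted behaviors is easiest to see on a tiny example. The two versions below are invented for illustration (the paper's artifacts are C programs): only the member-discount branch changed, so an equivalence check needs to compare only those impacted behaviors.

```java
// Hypothetical program versions: only the discount amount changed.
class V1 {
    static int price(int qty, boolean member) {
        int p = qty * 10;
        if (member) p -= 5;          // changed in V2
        return p;
    }
}

class V2 {
    static int price(int qty, boolean member) {
        int p = qty * 10;
        if (member) p -= 7;          // the only difference
        return p;
    }
}

// Behaviors with member == false are unimpacted and provably identical,
// so regression verification only has to examine the impacted partition,
// e.g. by checking whether V1.price(q, true) == V2.price(q, true) for a
// symbolic q up to the depth bound (here the check fails, exposing the
// behavioral difference between the versions).
```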


Automated Software Engineering | 2011

JPF-AWT: Model checking GUI applications

Peter C. Mehlitz; Oksana Tkachuk; Mateusz Ujma

Verification of Graphical User Interface (GUI) applications presents many challenges. GUI applications are open systems that are driven by user events. Verification of such applications by means of model checking therefore requires a user model in order to close the state space.
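
JPF-AWT closes the state space by driving the application from a user event script; the sketch below expresses the same idea by hand with JPF's Verify API (the UserModel class and its buttons are assumptions for the example, not the tool's actual script format).

```java
import gov.nasa.jpf.vm.Verify;
import javax.swing.JButton;

// Hand-coded illustration of a user model: a bounded session in which
// each step clicks one of the available buttons, chosen
// nondeterministically, so the model checker explores every ordering
// of user events up to the bound.
class UserModel {
    static void drive(JButton ok, JButton cancel) {
        for (int step = 0; step < 3; step++) {   // bounded interaction
            switch (Verify.getInt(0, 1)) {
                case 0: ok.doClick();     break;
                case 1: cancel.doClick(); break;
            }
        }
    }
}
```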


ACM SIGSOFT Software Engineering Notes | 2015

Generation of Library Models for Verification of Android Applications

Heila van der Merwe; Oksana Tkachuk; Brink van der Merwe; Willem Visser

Android applications are difficult to verify and test, since they have many external dependencies. To overcome this problem, environment generation can be used to create a model of the environment that simulates the behavior of these external dependencies. Creating this environment model manually is a tedious process, and although there are many techniques available to generate models, the key lies in identifying how these techniques can be applied to a specific domain. In this paper, we discuss two static analysis tools, OCSEGen [3] and Modgen [1], and how they can be applied to the Android domain to generate models for specific parts of the environment.


State of the Art in Java Program Analysis | 2013

OCSEGen: open components and systems environment generator

Oksana Tkachuk

To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss possible future extensions.


ACM SIGSOFT Software Engineering Notes | 2014

Automated generation of model classes for Java PathFinder

Matteo Ceccarello; Oksana Tkachuk

Model checkers like Java PathFinder (JPF) often have to combat the state-space explosion problem. One solution adopted to tackle this problem is to abstract away parts of the system, e.g., to model complex library classes at a higher level of abstraction. The model classes have the same interface as the actual library classes but exhibit reduced behaviour and state. Writing such model classes is both error-prone and time-consuming. In this paper we propose a tool that can automatically derive a model class from the original class. To achieve this goal, the tool uses different algorithms, including slicing and value generation, each yielding a model class with different behaviour and state.
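
A model class in this sense keeps the library class's interface while replacing its state and behavior with a much smaller abstraction. The sketch below is hand-written (ConnectionModel and its methods are invented for illustration, not output of the proposed tool) and uses JPF's Verify API to stand in for behavior that slicing or value generation would abstract away.

```java
import gov.nasa.jpf.vm.Verify;

// Illustrative model class: same interface as a hypothetical library
// Connection class, but the network interaction is reduced to a single
// nondeterministic boolean, shrinking the state JPF must explore.
public class ConnectionModel {
    private boolean open;

    public void connect(String url) {   // the url no longer matters
        open = Verify.getBoolean();     // connecting may succeed or fail
    }

    public boolean isOpen() { return open; }

    public void close() { open = false; }
}
```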


NASA Formal Methods Symposium | 2015

Are We There Yet? Determining the Adequacy of Formalized Requirements and Test Suites

Anitha Murugesan; Michael W. Whalen; Neha Rungta; Oksana Tkachuk; Suzette Person; Mats Per Erik Heimdahl; Dongjiang You

Structural coverage metrics have traditionally categorized code as either covered or uncovered. Recent work presents a stronger notion of coverage, checked coverage, which counts only statements whose execution contributes to an outcome checked by an oracle. While this notion of coverage addresses the adequacy of the oracle, for Model-Based Development of safety critical systems, it is still not enough; we are also interested in how much of the oracle is covered, and whether the values of program variables are masked when the oracle is evaluated. Such information can help system engineers identify missing requirements as well as missing test cases. In this work, we combine results from checked coverage with results from requirements coverage to help provide insight to engineers as to whether the requirements or the test suite need to be improved. We implement a dynamic backward slicing technique and evaluate it on several systems developed in Simulink. The results of our preliminary study show that even for systems with comprehensive test suites and good sets of requirements, our approach can identify cases where more tests or more requirements are needed to improve coverage numbers.
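
The distinction between executed and checked coverage is easy to miss, so a small example may help; it is written in Java purely for illustration (the study's systems are Simulink models, and Thermostat is an invented name). A statement can be exercised by a test yet contribute nothing to any value the oracle inspects, and it is exactly those statements that checked coverage refuses to count.

```java
// Invented example contrasting executed coverage with checked coverage.
class Thermostat {
    int setpoint = 20;
    boolean heaterOn;

    void update(int temp) {
        heaterOn = temp < setpoint;                      // executed and checked below
        setpoint = Math.max(5, Math.min(30, setpoint));  // executed, never checked
    }
}

class ThermostatTest {
    public static void main(String[] args) {
        Thermostat t = new Thermostat();
        t.update(15);
        // The oracle looks only at heaterOn, so a backward slice from this
        // check never reaches the setpoint assignment: that statement is
        // covered in the traditional sense but not by checked coverage.
        assert t.heaterOn : "heater should be on below the setpoint";
    }
}
```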

Collaboration


Dive into Oksana Tkachuk's collaboration.

Top Co-Authors

Matthew B. Dwyer

University of Nebraska–Lincoln

Heila Botha

Stellenbosch University
