Publications

Featured research published by Orna Raz.


International Conference on Software Testing, Verification and Validation | 2011

Test Coverage of Data-Centric Dynamic Compositions in Service-Based Systems

Waldemar Hummer; Orna Raz; Onn Shehory; Philipp Leitner; Schahram Dustdar

This paper addresses the problem of integration testing of data-centric dynamic compositions in service-based systems. These compositions define abstract services, which are replaced by invocations to concrete candidate services at runtime. Testing all possible runtime instances of a composition is often infeasible. We regard data dependencies between services as potential points of failure, and introduce the k-node data flow test coverage metric. Limiting the level of desired coverage helps to significantly reduce the search space of service combinations. We formulate the problem of generating a minimum set of test cases as a combinatorial optimization problem. Based on this formalization, we present a mapping of the problem to the data model of FoCuS, a coverage analysis tool developed at IBM. FoCuS can efficiently compute near-optimal solutions, which we then use to automatically generate and execute test instances of the composition. We evaluate our prototype implementation using an illustrative scenario to show the end-to-end practicability of the approach.
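
The idea behind the k-node metric (here for k = 2) and the greedy reduction can be sketched as follows. The service names, candidate sets, and the greedy set-cover loop are illustrative stand-ins; the paper maps the problem to the FoCuS tool rather than solving it this way:

```python
from itertools import product

# Hypothetical composition: abstract services A, B, C with concrete
# candidate services, and data dependencies A->B and B->C.
candidates = {
    "A": ["a1", "a2"],
    "B": ["b1", "b2", "b3"],
    "C": ["c1", "c2"],
}
dependencies = [("A", "B"), ("B", "C")]

def covered_pairs(composition):
    """The k=2 coverage goals hit by one concrete composition."""
    return {(s, t, composition[s], composition[t]) for s, t in dependencies}

# All k=2 goals: every candidate pair along each data-dependent edge.
goals = {
    (s, t, cs, ct)
    for s, t in dependencies
    for cs in candidates[s]
    for ct in candidates[t]
}

# Greedy set cover: repeatedly pick the composition covering the most
# still-uncovered goals (a stand-in for FoCuS's near-optimal solver).
services = sorted(candidates)
all_compositions = [
    dict(zip(services, combo))
    for combo in product(*(candidates[s] for s in services))
]
suite, uncovered = [], set(goals)
while uncovered:
    best = max(all_compositions, key=lambda c: len(covered_pairs(c) & uncovered))
    suite.append(best)
    uncovered -= covered_pairs(best)
```

In this toy instance there are 12 possible concrete compositions, but 6 suffice to cover every candidate pair along both data-flow edges, illustrating how limiting k shrinks the test space.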


Software Testing, Verification & Reliability | 2013

Testing of data-centric and event-based dynamic service compositions

Waldemar Hummer; Orna Raz; Onn Shehory; Philipp Leitner; Schahram Dustdar

This paper addresses integration testing of data-centric and event-based dynamic service compositions. The compositions under test define abstract services that are replaced by concrete candidate services at runtime. Testing all possible instantiations of a composition leads to combinatorial explosion and is often infeasible. We consider data dependencies between services as potential points of failure and introduce the k-node data flow test coverage metric, which helps to significantly reduce the number of test combinations. We formulate a combinatorial optimization problem for generating minimal sets of test cases. On the basis of this formalization, we present a mapping to the model of FoCuS, a coverage analysis tool. FoCuS efficiently computes near-optimal solutions, which are used to automatically generate test instances. The proposed approach is applicable to various composition paradigms. We illustrate the end-to-end practicability based on an integrated scenario, which uses two diverse composition techniques: on the one hand, the Web Services Business Process Execution Language and on the other hand, WS-Aggregation, a platform for event-based service composition.


International Symposium on Software Testing and Analysis | 2009

Advanced code coverage analysis using substring holes

Yoram Adler; Eitan Farchi; Moshe Klausner; Dan Pelleg; Orna Raz; Moran Shochat; Shmuel Ur; Aviad Zlotnick

Code coverage is a common aid in the testing process. It is generally used for marking the source code segments that were executed and, more importantly, those that were not executed. Many code coverage tools exist, supporting a variety of languages and operating systems. Unfortunately, these tools provide little or no assistance when code coverage data is voluminous. Such quantities are typical of system tests and even of earlier testing phases. Drill-down capabilities that look at different granularities of the data, starting with directories and going through files to functions and lines of source code, are insufficient. Such capabilities make the assumption that the coverage issues themselves follow the code hierarchy. We argue that this is not the case for much of the uncovered code. Two notable examples are error handling code and platform-specific constructs. Both tend to be spread throughout the source in many files, even though the related coverage, or lack thereof, is highly correlated. To make the task more manageable, and therefore more likely to be performed by users, we developed a hole analysis algorithm and tool that is based on common substrings in the names of functions. We tested its effectiveness using two large IBM software systems. In both of them, we asked domain experts to judge the results of several hole-ranking heuristics. They found that 57% to 87% of the 30 top-ranked holes identified by the effective heuristics are relevant. Moreover, these holes are often unexpected. This is especially impressive because substring hole analysis relies only on the names of functions, whereas domain experts have a broad and deep understanding of the system. We grounded our results in a theoretical framework that states desirable mathematical properties of hole ranking heuristics. The empirical results show that heuristics with these properties tend to perform better, and do so more consistently, than heuristics lacking them.
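
A minimal sketch of the core idea, grouping uncovered functions by shared name substrings and ranking the resulting holes. The function names, thresholds, and ranking key below are invented for illustration and are much simpler than the heuristics evaluated in the paper:

```python
# Hypothetical coverage data: function name -> was it executed?
coverage = {
    "disk_error_retry": False,
    "net_error_retry": False,
    "fs_error_log": False,
    "win32_open_file": False,
    "win32_close_file": False,
    "parse_config": True,
    "read_block": True,
}

def substring_holes(coverage, min_len=5, min_size=2):
    """Group uncovered functions by shared name substrings; rank holes."""
    uncovered = [f for f, hit in coverage.items() if not hit]
    covered_names = [f for f, hit in coverage.items() if hit]
    holes = {}
    for f in uncovered:
        for i in range(len(f)):
            for j in range(i + min_len, len(f) + 1):
                holes.setdefault(f[i:j], set()).add(f)
    # Keep substrings shared by several uncovered functions and by no
    # covered function; rank by hole size, then by substring length.
    return sorted(
        (
            (sub, funcs)
            for sub, funcs in holes.items()
            if len(funcs) >= min_size
            and not any(sub in c for c in covered_names)
        ),
        key=lambda kv: (-len(kv[1]), -len(kv[0]), kv[0]),
    )
```

On this toy input the top-ranked hole is the substring `_error_`, grouping three uncovered error-handling functions scattered across different name prefixes, exactly the kind of cross-hierarchy hole drill-down views miss.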


International Conference on Software Engineering | 2009

Automated substring hole analysis

Yoram Adler; Eitan Farchi; Moshe Klausner; Dan Pelleg; Orna Raz; Moran Shochat; Shmuel Ur; Aviad Zlotnick

Code coverage is a common measure for quantitatively assessing the quality of software testing. Code coverage indicates the fraction of code that is actually executed by tests in a test suite. While code coverage has been around since the 1960s, there has been little work on how to effectively analyze code coverage data measured in system tests. Raw data of this magnitude, containing millions of data records, is often impossible for a human user to comprehend and analyze. Even drill-down capabilities that enable looking at different granularities, starting with directories and going through files to lines of source code, are not enough. Substring hole analysis is a novel method for viewing the coverage of huge data sets. We have implemented a tool that enables automatic substring hole analysis. We used this tool to analyze coverage data of several large and complex IBM software systems. The tool identified coverage holes that suggested interesting untested scenarios.


ICSE Workshop on Wikis for Software Engineering | 2009

An effective method for keeping design artifacts up-to-date

Yochai Ben-Chaim; Eitan Farchi; Orna Raz

A major problem in the software development process is that design documents are rarely kept up-to-date with the implementation, and thus become irrelevant for extracting test plans or reviews. Furthermore, design documents tend to become very long and often impossible to review and comprehend.


Foundations of Software Engineering | 2016

Cluster-based test suite functional analysis

Marcel Zalmanovici; Orna Raz; Rachel Tzoref-Brill

A common industrial challenge is that of analyzing large legacy free text test suites in order to comprehend their functional content. The analysis results are used for different purposes, such as dividing the test suite into disjoint functional parts for automation and management purposes, identifying redundant test cases, and extracting models for combinatorial test generation while reusing the legacy test suite. Currently the analysis is performed manually, which hinders the ability to analyze many such large test suites due to time and resource constraints. We report on our practical experience in automated analysis of real-world free text test suites from six different industrial companies. Our novel, cluster-based approach provides significant time savings for the analysis of the test suites, varying from a reduction of 35% to 97% compared to the human time required, thus enabling functional analysis in many cases where manual analysis is infeasible in practice.
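
The flavor of the approach can be sketched with a toy clustering of free-text test cases. The test texts, the Jaccard word-overlap similarity, and the single-pass greedy clustering below are illustrative stand-ins for whatever text-similarity machinery the actual tool uses:

```python
# Hypothetical free-text test cases from a legacy suite.
tests = [
    "login with valid password",
    "login with expired password",
    "login with wrong password",
    "transfer funds between accounts",
    "transfer funds with insufficient balance",
    "print monthly account statement",
]

def jaccard(a, b):
    """Word-overlap similarity between two free-text test cases."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def cluster(tests, threshold=0.25):
    """Single-pass greedy clustering: join a test to the first cluster
    whose representative (first member) is similar enough, else start
    a new cluster."""
    clusters = []
    for t in tests:
        for c in clusters:
            if jaccard(t, c[0]) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters
```

On this input the six tests fall into three functional groups (login, funds transfer, reporting), which is the kind of disjoint functional partition the paper uses for automation and redundancy analysis.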


Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference | 2009

Hardware-less testing for RAS software

Aviad Zlotnick; Orna Raz

Reliability, Availability, and Serviceability (RAS) software deals with hardware-related processes that typically include manual operations such as replacing components. The necessity to perform manual operations inhibits automated tests, reduces the scope of unit testing, and makes it challenging to create a regression test suite for RAS. We define Small Scale Simulation (S3), a novel and cost-effective type of testing harness whose abstraction level lies between full simulation and mock objects. We describe our experience in creating, deploying, using, and maintaining a small scale simulation system for testing the RAS subsystem of an enterprise storage controller. By replacing physical operations with logical commands, this small scale simulation system enables early release of code related to new hardware features, and the creation of an automatic regression test suite.
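
The "physical operation becomes logical command" idea can be sketched as follows. The enclosure model, slot states, and repair routine are hypothetical, far simpler than an enterprise storage controller's RAS subsystem:

```python
class SimulatedEnclosure:
    """Small-scale simulation stand-in: a manual component swap in the
    field becomes a logical state change in the test harness."""

    def __init__(self):
        # Slot number -> component state.
        self.slots = {0: "ok", 1: "failed"}

    def replace_component(self, slot):
        # In the field this is a physical, manual operation; here it is
        # a logical command, so RAS code can be driven automatically.
        self.slots[slot] = "ok"

def ras_repair(hw):
    """RAS logic under test: detect failed components and request
    replacement for each one."""
    repaired = []
    for slot, state in hw.slots.items():
        if state == "failed":
            hw.replace_component(slot)
            repaired.append(slot)
    return repaired
```

Because the RAS logic talks to the enclosure only through logical commands, the same repair routine can run unmodified against real hardware or the simulation, which is what makes an automatic regression suite possible.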


Haifa Verification Conference | 2007

The advantages of post-link code coverage

Orna Raz; Moshe Klausner; Nitzan Peleg; Gadi Haber; Eitan Farchi; Shachar Fienblit; Yakov S. Filiarsky; Shay Gammer; Sergey Novikov

Code coverage is often defined as a measure of the degree to which the source code of a program has been tested [19]. Various metrics for measuring code coverage exist. The vast majority of these metrics require instrumenting the source code to produce coverage data. However, for certain coverage metrics, it is also possible to instrument object code to produce coverage data. Traditionally, such instrumentation has been considered inferior to source-level instrumentation because source code is the focus of code coverage. Our experience shows that object code instrumentation, specifically post-link instrumentation, can be very useful to users. Moreover, it not only alleviates certain side effects of source-level instrumentation, especially those related to compiler optimizations, but also lends itself to performance optimization that enables low-overhead instrumentation. Our experiments show an average of less than 1% overhead for instrumentation at the function level, and an average of 4.1% and 0.4% overhead for SPECint2000 and SPECfp2000, respectively, for instrumentation at the basic block level. This paper demonstrates the advantages of post-link coverage and describes effective methodology and technology for applying it.
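
Function-level coverage instrumentation, the cheapest granularity mentioned above, can be illustrated with a runtime analogue. This is not post-link binary instrumentation (which rewrites object code); it merely shows what a function-level coverage record looks like, using Python's tracing hook, with invented function names:

```python
import sys

executed = set()  # the coverage record: names of functions that ran

def trace_calls(frame, event, arg):
    # Record each function entry; returning None disables line-level
    # tracing, the analogue of cheap function-level instrumentation.
    if event == "call":
        executed.add(frame.f_code.co_name)
    return None

def helper():
    return 42

def unused():
    return 0

sys.settrace(trace_calls)
helper()            # executed -> recorded
sys.settrace(None)  # stop collecting coverage
```

After the run, `executed` contains `helper` but not `unused`, i.e. the uncovered function is exactly what a coverage report would flag.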


International Conference on Software Engineering | 2017

Proactive and pervasive combinatorial testing

Dale E. Blue; Orna Raz; Rachel Tzoref-Brill; Paul Wojciak; Marcel Zalmanovici

Combinatorial testing (CT) is a well-known technique for improving the quality of test plans while reducing testing costs. Traditionally, CT is used by testers at the testing phase to design a test plan based on a manual definition of the test space. In this work, we extend the traditional use of CT to other parts of the development life cycle. We use CT at the early design phase to improve design quality. We also use CT after test cases have been created and executed, in order to find gaps between design and test. For the latter use case we deploy a novel technique for a semi-automated definition of the test space, which significantly reduces the effort associated with manual test space definition. We report on our practical experience in applying CT for these use cases to three large and heavily deployed industrial products. We demonstrate the value gained from extending the use of CT by (1) discovering latent design flaws with high potential impact, and (2) correlating CT-uncovered gaps between design and test with field-reported problems.
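
The core CT mechanism, covering every pair of parameter values with far fewer tests than the full cross product, can be sketched as below. The parameter names and values are invented, and the greedy loop is a simple stand-in for an industrial CT generator:

```python
from itertools import combinations, product

# Hypothetical test space: parameters and their values.
parameters = {
    "os": ["linux", "zos", "aix"],
    "db": ["db2", "oracle"],
    "mode": ["batch", "online"],
}

def pairwise_plan(parameters):
    """Greedily build a plan covering every pair of parameter values."""
    names = sorted(parameters)
    goals = {
        ((p, v), (q, w))
        for p, q in combinations(names, 2)
        for v in parameters[p]
        for w in parameters[q]
    }
    all_rows = [
        dict(zip(names, vals))
        for vals in product(*(parameters[n] for n in names))
    ]
    plan, uncovered = [], set(goals)
    while uncovered:
        # Pick the row covering the most still-uncovered value pairs.
        best = max(
            all_rows,
            key=lambda r: sum(
                ((p, r[p]), (q, r[q])) in uncovered
                for p, q in combinations(names, 2)
            ),
        )
        plan.append(best)
        uncovered -= {
            ((p, best[p]), (q, best[q])) for p, q in combinations(names, 2)
        }
    return plan
```

Here the full cross product has 12 tests, while 6 suffice for pairwise coverage; on realistic test spaces with dozens of parameters, the reduction is far more dramatic.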


Haifa Verification Conference | 2012

Reducing costs while increasing quality

Orna Raz

Non-mission-critical software systems face conflicting requirements. On the one hand, these systems are becoming more and more complex, and their quality is of paramount importance. On the other hand, to maintain competitiveness, there is constant pressure to reduce the cost of developing such systems. In this talk, I will raise some of the research questions stemming from these conflicting requirements. I will also present promising approaches, explored at IBM Research, for reducing costs while increasing quality.
