Yvan Labiche
Carleton University
Publication
Featured research published by Yvan Labiche.
International Conference on Software Engineering | 2005
James H. Andrews; Lionel C. Briand; Yvan Labiche
The empirical assessment of test techniques plays an important role in software testing research. One common practice is to seed faults, either manually or by using mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. This paper investigates this important question based on a number of programs with comprehensive pools of test cases and known faults. It is concluded that, based on the data available thus far, the use of mutation operators yields trustworthy results (generated mutants are similar to real faults). Mutants, however, appear to be different from hand-seeded faults, which seem to be harder to detect than real faults.
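To make the idea of a mutation operator concrete, here is a minimal sketch (our illustration, not code or data from the paper) of a single relational-operator-replacement mutant and of what it means for a test suite to detect, or "kill", it:

# Illustrative sketch, not the paper's tooling: one mutant produced by a
# relational operator replacement, and a check of whether a test suite
# detects ("kills") it.

def is_positive(x):          # original program under test
    return x > 0

def is_positive_mutant(x):   # seeded fault: '>' mutated to '>='
    return x >= 0

def kills(test_inputs):
    # A suite kills the mutant if some input makes the two versions disagree.
    return any(is_positive(x) != is_positive_mutant(x) for x in test_inputs)

print(kills([1, 5, -3]))     # False: this suite misses the boundary case
print(kills([1, 0, -3]))     # True: x == 0 exposes the seeded fault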
IEEE Transactions on Software Engineering | 2006
James H. Andrews; Lionel C. Briand; Yvan Labiche; Akbar Siami Namin
The empirical assessment of test techniques plays an important role in software testing research. One common practice is to seed faults in subject software, either manually or by using a program that generates all possible mutants based on a set of mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults, thus facilitating the statistical analysis of the fault detection effectiveness of test suites; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. Focusing on four common control and data flow criteria (block, decision, C-use, and P-use), this paper investigates this important issue based on a middle-size industrial program with a comprehensive pool of test cases and known faults. Based on the data available thus far, the results are very consistent across the investigated criteria: they show that the use of mutation operators yields trustworthy results, and that generated mutants can be used to predict the detection effectiveness of real faults. Applying such a mutation analysis, we then investigate the relative cost and effectiveness of the above-mentioned criteria by revisiting fundamental questions regarding the relationships between fault detection, test suite size, and control/data flow coverage. Although such questions have been partially investigated in previous studies, we can use a large number of mutants, which helps decrease the impact of random variation in our analysis and allows us to use a different analysis approach. Our results are then compared with published studies; plausible reasons for the differences are provided, and the research leads us to suggest a way to tune the mutation analysis process to possible differences in fault detection probabilities in a specific environment.
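The effectiveness measure at the center of such a study is the mutation score of a test suite, which the work relates to suite size and coverage. A hedged sketch of that computation (the mutants and test inputs below are invented for illustration, not the study's subjects):

# Toy mutation-score computation (invented example, not the study's data).
def mutation_score(suite, mutants, original):
    # Fraction of mutants killed by at least one test input in the suite.
    killed = {i for i, m in enumerate(mutants)
              if any(m(x) != original(x) for x in suite)}
    return len(killed) / len(mutants)

original = lambda x: x > 0
mutants = [
    lambda x: x >= 0,   # relational operator replacement
    lambda x: x < 0,    # relational operator replacement
    lambda x: x > 1,    # constant replacement
]

for suite in ([2], [2, 0], [2, 0, 1]):   # growing test suites
    print(suite, mutation_score(suite, mutants, original))
# Larger suites kill more mutants here: 1/3, then 2/3, then 3/3.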
International Conference on Software Maintenance | 2003
Lionel C. Briand; Yvan Labiche; L. O'Sullivan
The use of Unified Modeling Language (UML) analysis/design models on large projects leads to a large number of interdependent UML diagrams. As software systems evolve, those diagrams undergo changes to, for instance, correct errors or address changes in the requirements. Those changes can in turn lead to subsequent changes to other elements in the UML diagrams. Impact analysis is then defined as the process of identifying the potential consequences (side effects) of a change, and estimating what needs to be modified to accomplish that change. In this article, we propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. We first verify that the UML diagrams are consistent (consistency check). Then changes between two different versions of a UML model are identified according to a change taxonomy, and model elements that are directly or indirectly impacted by those changes (i.e., may undergo changes) are determined using formally defined impact analysis rules (written in the Object Constraint Language). A measure of distance between a changed element and potentially impacted elements is also proposed to prioritize the results of impact analysis according to their likelihood of occurrence. We also present a prototype tool that provides automated support for our impact analysis strategy, which we then apply to a case study to validate both the implementation and the methodology.
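The flavor of the impact closure can be sketched as follows (the paper writes its rules in OCL over the UML metamodel; this toy version uses a plain dependency graph with made-up element names):

# Toy impact closure (hypothetical names; the paper's rules are OCL-based).
from collections import deque

# model element -> elements that depend on it (e.g., a class and its clients)
dependents = {
    "Account":   ["Teller", "Statement"],
    "Teller":    ["BankUI"],
    "Statement": ["BankUI"],
    "BankUI":    [],
}

def impacted(changed_element):
    # Collect elements directly or indirectly impacted by the change.
    seen, queue = set(), deque([changed_element])
    while queue:
        for dep in dependents.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(impacted("Account"))   # {'Teller', 'Statement', 'BankUI'}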
IEEE Transactions on Software Engineering | 2006
Lionel C. Briand; Yvan Labiche; Johanne Leduc
This paper proposes a methodology and instrumentation infrastructure toward the reverse engineering of UML (Unified Modeling Language) sequence diagrams from dynamic analysis. One motivation is, of course, to help people understand the behavior of systems with no (complete) documentation. However, such reverse-engineered dynamic models can also be used for quality assurance purposes. They can, for example, be compared with design sequence diagrams so that the conformance of the implementation to the design can be verified. Furthermore, discrepancies can also suggest failures in meeting the specifications. Due to size constraints, this paper focuses on the distribution aspects of the methodology we propose. We formally define our approach using metamodels and consistency rules. The instrumentation is based on aspect-oriented programming in order to alleviate the effort overhead usually associated with source code instrumentation. A case study is discussed to demonstrate the applicability of the approach on a concrete example.
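A rough Python analogue of the instrumentation step (the paper instruments Java code with AspectJ; this decorator-based tracer is our simplification) records the caller/callee events from which a sequence diagram can be reconstructed:

# Simplified tracer (our analogue of the paper's AspectJ instrumentation).
import functools

trace = []   # (class, method) events in call order

def traced(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        trace.append((type(self).__name__, method.__name__))
        return method(self, *args, **kwargs)
    return wrapper

class Cart:
    @traced
    def checkout(self):
        return Payment().charge()

class Payment:
    @traced
    def charge(self):
        return "ok"

Cart().checkout()
print(trace)   # [('Cart', 'checkout'), ('Payment', 'charge')] -> lifelines and messages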
IEEE Transactions on Software Engineering | 2006
Erik Arisholm; Lionel C. Briand; Siw Elisabeth Hove; Yvan Labiche
The Unified Modeling Language (UML) is becoming the de facto standard for software analysis and design modeling. However, there is still significant resistance to model-driven development in many software organizations because it is perceived to be expensive and not necessarily cost-effective. Hence, it is important to investigate the benefits obtained from modeling. As a first step in this direction, this paper reports on controlled experiments, spanning two locations, that investigate the impact of UML documentation on software maintenance. Results show that, for complex tasks and past a certain learning curve, the availability of UML documentation may result in significant improvements in the functional correctness of changes as well as the quality of their design. However, there does not seem to be any saving of time. For simpler tasks, the time needed to update the UML documentation may be substantial compared with the potential benefits, thus motivating the need for UML tools with better support for software maintenance.
IEEE Transactions on Software Engineering | 2004
Lionel C. Briand; M. Di Penta; Yvan Labiche
This work describes an empirical investigation of the cost effectiveness of well-known state-based testing techniques for classes or clusters of classes that exhibit a state-dependent behavior. This is practically relevant as many object-oriented methodologies recommend modeling such components with statecharts, which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases. One is a very precise oracle that checks the concrete state of objects, whereas the other is based on the notion of state invariants (abstract states). Results show that there is a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice to make that should be driven by the characteristics of the component to be tested, such as its criticality, complexity, and test budget.
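The contrast between the two oracle strategies can be sketched as follows (our toy example; the component and its abstract states are invented):

# Toy contrast of the two oracle strategies (invented example).
class BoundedStack:
    def __init__(self, capacity=2):
        self.items, self.capacity = [], capacity
    def push(self, x):
        if len(self.items) < self.capacity:
            self.items.append(x)

def abstract_state(s):
    # A state-invariant oracle reasons only about abstract states.
    if not s.items:
        return "Empty"
    return "Full" if len(s.items) == s.capacity else "Partial"

s = BoundedStack()
s.push(10)
assert s.items == [10]                  # precise oracle: exact concrete state
assert abstract_state(s) == "Partial"   # cheaper oracle: abstract state only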
International Conference on Software Maintenance | 2002
Lionel C. Briand; Yvan Labiche; G. Soccar
We present a methodology and a tool to support test selection from regression test suites based on change analysis in object-oriented designs. We assume that designs are represented using the Unified Modeling Language (UML) and we propose a formal mapping between design changes and a classification of regression test cases, i.e., three categories: reusable, retestable, and obsolete. We provide evidence of the feasibility of the methodology and its usefulness by using our prototype tool on an industrial case study.
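Under simplifying assumptions (the element names and rules below are stand-ins for the paper's UML-based change analysis), the three-way classification amounts to:

# Sketch of the three-way classification (simplified stand-in rules).
changed = {"Order.cancel"}         # design elements modified in the new version
deleted = {"Order.legacyExport"}   # design elements removed

tests = {
    "t1": {"Order.place", "Order.pay"},      # elements each test exercises
    "t2": {"Order.place", "Order.cancel"},
    "t3": {"Order.legacyExport"},
}

def classify(covered):
    if covered & deleted:
        return "obsolete"    # exercises removed behavior: no longer valid
    if covered & changed:
        return "retestable"  # exercises modified behavior: must be rerun
    return "reusable"        # unaffected: need not be rerun

for name, covered in tests.items():
    print(name, classify(covered))   # t1 reusable, t2 retestable, t3 obsolete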
Lecture Notes in Computer Science | 2001
Lionel C. Briand; Yvan Labiche
System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly the use of the Object Constraint Language across all these artifacts. Our goal is to support the derivation of test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. Another important issue we address is that of testability. Testability requirements (or rules) need to be imposed on UML artifacts so as to be able to support system testing efficiently. Those testability requirements result from a trade-off between analysis and design overhead and improved testability. The potential for automation is also an overriding concern throughout our work, as the ultimate goal is to fully support testing activities with high-capability tools.
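One very small illustration of the derivation step (purely hypothetical; the paper works from full UML artifacts, while this sketch reduces each use-case scenario to a named triggering condition):

# Hypothetical sketch: use-case scenarios as system test requirements.
use_case = {
    "name": "Withdraw cash",
    "scenarios": {
        "main flow":   "PIN ok and balance sufficient",
        "bad PIN":     "PIN rejected three times",
        "low balance": "balance below requested amount",
    },
}

# Each scenario/condition pair becomes one system test requirement,
# to be refined into test cases, oracles, and drivers later.
for scenario, condition in use_case["scenarios"].items():
    print(f"Test requirement - {use_case['name']} / {scenario}: cover '{condition}'")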
Journal of Systems and Software | 2006
Lionel C. Briand; Yvan Labiche; L. O'Sullivan; Michal M. Sówka
The use of Unified Modeling Language (UML) analysis/design models on large projects leads to a large number of interdependent UML diagrams. As software systems evolve, UML diagrams undergo changes that address error corrections and changed requirements. Those changes can in turn lead to subsequent changes to other elements in the UML diagrams. Impact analysis is defined as the process of identifying the potential consequences (side-effects) of a change, and estimating what needs to be modified to accomplish that change. In this article, we propose a UML model-based approach to impact analysis that can be applied before implementation of changes, thus allowing early decision-making and change planning. We first verify that the UML diagrams in a design model are consistent. Then the changes between two different versions of UML models are automatically identified according to a change taxonomy. Next, model elements which are directly or indirectly impacted by the changes (i.e., may undergo changes) are determined using formally defined impact analysis rules (defined with the Object Constraint Language). A measure of distance between a changed element and potentially impacted elements is also proposed to prioritize the results of impact analysis according to their likelihood of occurrence. We also present a prototype tool that provides automated support for our impact analysis strategy, and two case studies that validate both the methodology and the tool. Empirical results confirm that distance helps determine the likelihood of change in the code.
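The added distance measure can be sketched by extending the impact-closure idea shown earlier: rank the potentially impacted elements by their distance in dependency hops from the changed element (again with invented element names):

# Distance-based prioritization of impact-analysis results (toy graph).
from collections import deque

dependents = {
    "Account":   ["Teller", "Statement"],
    "Teller":    ["BankUI"],
    "Statement": ["BankUI"],
    "BankUI":    [],
}

def impact_distances(changed_element):
    # Breadth-first distance; a smaller distance suggests a higher
    # likelihood that the element really needs to change.
    dist, queue = {changed_element: 0}, deque([changed_element])
    while queue:
        elem = queue.popleft()
        for dep in dependents.get(elem, []):
            if dep not in dist:
                dist[dep] = dist[elem] + 1
                queue.append(dep)
    del dist[changed_element]
    return sorted(dist.items(), key=lambda kv: kv[1])

print(impact_distances("Account"))
# [('Teller', 1), ('Statement', 1), ('BankUI', 2)]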