Pascale Thévenod-Fosse
Centre national de la recherche scientifique
Publications
Featured research published by Pascale Thévenod-Fosse.
Digest of Papers, Fault-Tolerant Computing: The Twenty-First International Symposium | 1991
Pascale Thévenod-Fosse; Hélène Waeselynck; Yves Crouzet
The fault revealing power of different test patterns derived from ten structural test criteria currently referred to in unit testing is investigated. Experiments performed on four programs that are pieces of a real-life software system from the nuclear field are reported. Three test input generation techniques are studied: (1) deterministic choice, (2) random selection based on an input probability distribution determined according to the adopted structural test criterion, and (3) random selection from a uniform distribution on the input domain. Mutation analysis is used to assess the test set efficiency with respect to error detection. The experimental results involve a total of 2914 mutants. They show that structural statistical testing is the most efficient: it exhibits the highest mutation scores, leaving only six of the 2816 nonequivalent mutants alive within short testing times. As regards unit testing of programs whose structure remains tractable, the experiments show the adequacy of a fault removal strategy combining statistical and deterministic test patterns.
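The mutation-score measure used in these experiments can be sketched in a few lines. The Python illustration below uses a hypothetical toy program and hand-written mutants (not the nuclear-field software of the paper) to show how a mutation score is computed and how randomly selected test inputs kill mutants:

```python
import random

def mutation_score(test_set, mutants, original):
    """Fraction of nonequivalent mutants killed by the test set.
    A mutant is 'killed' when some test input makes its output
    differ from the original program's output."""
    killed = sum(
        1 for m in mutants
        if any(m(x) != original(x) for x in test_set)
    )
    return killed / len(mutants)

# Toy program under test and two hand-written mutants
# (hypothetical illustration only).
def prog(x):        return x * 2 if x > 0 else 0
def mutant_lt(x):   return x * 2 if x < 0 else 0   # '>' replaced by '<'
def mutant_add(x):  return x + 2 if x > 0 else 0   # '*' replaced by '+'

# Statistical testing: random selection of inputs from a chosen
# distribution (uniform over [-5, 5] here, purely for illustration).
rng = random.Random(0)
tests = [rng.randint(-5, 5) for _ in range(20)]
score = mutation_score(tests, [mutant_lt, mutant_add], prog)
```

Note that `mutant_add` is only killed by a positive input other than 2 (since `x * 2 == x + 2` at `x == 2`), which is why a test set drawn from a distribution covering the whole input domain matters.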
computer software and applications conference | 2001
Philippe Chevalley; Pascale Thévenod-Fosse
The adoption of the object-oriented (OO) technology for the development of critical software raises important testing issues. This paper addresses one of these issues: how to create effective tests from OO specification documents? More precisely, the paper describes a technique that adapts a probabilistic method, called statistical functional testing, to the generation of test cases from UML state diagrams, using transition coverage as the testing criterion. Emphasis is put on defining an automatic way to produce both the input values and the expected outputs. The technique is automated with the aid of the Rational Software Corporation's Rose RealTime tool. An industrial case study from the avionics domain, formally specified and implemented in Java, is used to illustrate the feasibility of the technique at the subsystem level. Results of initial test experiments are presented to exemplify the fault revealing power of the created statistical test cases.
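The core idea of generating probabilistic test cases over a state model, with transition coverage as the criterion, can be sketched as follows. The state machine (a toy door controller), event names, and parameter values are illustrative assumptions, not the paper's avionics model:

```python
import random

# Hypothetical state machine: (state, event) -> next state.
TRANSITIONS = {
    ("closed",  "open_cmd"):  "opening",
    ("opening", "done"):      "open",
    ("open",    "close_cmd"): "closing",
    ("closing", "done"):      "closed",
    ("opening", "obstacle"):  "closing",
}

def events_from(state):
    """Events enabled in a given state."""
    return [e for (s, e) in TRANSITIONS if s == state]

def random_walk(start, length, rng):
    """One statistical test case: a random event sequence drawn
    uniformly over the events enabled in the current state."""
    state, trace = start, []
    for _ in range(length):
        e = rng.choice(events_from(state))
        trace.append(e)
        state = TRANSITIONS[(state, e)]
    return trace

def transition_coverage(traces, start):
    """Fraction of the model's transitions exercised by the test suite."""
    covered = set()
    for trace in traces:
        state = start
        for e in trace:
            covered.add((state, e))
            state = TRANSITIONS[(state, e)]
    return len(covered) / len(TRANSITIONS)

rng = random.Random(42)
suite = [random_walk("closed", 8, rng) for _ in range(10)]
cov = transition_coverage(suite, "closed")
```

In the paper's setting the input profile is tuned so that every transition is exercised with sufficient probability, and expected outputs are derived from the model; the uniform choice above is only the simplest instance of that idea.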
International Journal on Software Tools for Technology Transfer | 2003
Philippe Chevalley; Pascale Thévenod-Fosse
Program mutation is a fault-based technique for measuring the effectiveness of test cases that, although powerful, is computationally expensive. The principal expense of mutation is that many faulty versions of the program under test, called mutants, must be created and repeatedly executed. This paper describes a tool, called JavaMut, that implements 26 traditional and object-oriented mutation operators for supporting mutation analysis of Java programs. The current version of the tool is based on syntactic analysis and reflection for implementing mutation operators. JavaMut is interactive; it provides a graphical user interface to make mutation analysis faster and less painful. Thanks to such automated tools, mutation analysis should be achievable at reasonable cost.
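The kind of syntactic mutation operator such a tool implements can be illustrated in miniature. JavaMut itself targets Java; this Python sketch only shows the principle of one traditional operator, relational operator replacement, applied through the syntax tree:

```python
import ast

class RelationalMutator(ast.NodeTransformer):
    """Relational operator replacement: swap '<' with '<=' and '>' with '>='
    (one of the classic traditional mutation operators)."""
    SWAP = {ast.Lt: ast.LtE, ast.LtE: ast.Lt, ast.Gt: ast.GtE, ast.GtE: ast.Gt}

    def visit_Compare(self, node):
        self.generic_visit(node)
        new_ops = [self.SWAP.get(type(op), type(op))() for op in node.ops]
        return ast.Compare(left=node.left, ops=new_ops,
                           comparators=node.comparators)

def mutate(source):
    """Return the source of a mutant with all relational operators swapped.
    A real tool would generate one mutant per operator occurrence."""
    tree = ast.parse(source)
    mutated = ast.fix_missing_locations(RelationalMutator().visit(tree))
    return ast.unparse(mutated)

print(mutate("def is_adult(age):\n    return age > 18"))
```

A full mutation tool also compiles and runs each mutant against the test set; the syntactic step shown here is the cheap part, which is why the repeated executions dominate the cost, as the abstract notes.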
Archive | 2002
Andrea Bondavalli; Pascale Thévenod-Fosse
Standard network QoS analysis usually accounts for the infrastructure performance/availability only, with scarce consideration of the user perspective. With reference to the General Packet Radio Service (GPRS), this paper addresses the problem of how to evaluate the impact of system unavailability periods on QoS measures, explicitly accounting for user characteristics. In fact, the ultimate goal of a service provider is user satisfaction, therefore it is extremely important to introduce the peculiarities of the user population when performing system analysis in such critical system conditions as during outages. The lack of service during outages is aggravated by the collision phenomenon determined by accumulated user requests, which negatively impacts the QoS provided by the system for some time after its restart. Then, depending on the specific behavior exhibited by the variety of users, such QoS degradation due to outages may be perceived differently by different user categories. We follow a compositional modeling approach, based on the GPRS and user models; the focus is on the GPRS random access procedure on one side, and different classes of user behavior on the other side. A quantitative analysis is carried out using simulation, showing the impact of outages on a relevant QoS indicator in relation to the considered user characteristics and network load.
Archive | 1995
Pascale Thévenod-Fosse; Hélène Waeselynck; Yves Crouzet
Statistical testing is based on a probabilistic generation of test data: structural or functional criteria serve as guides for defining an input profile and a test size. The method is intended to compensate for the imperfect connection of criteria with software faults, and should not be confused with random testing, a blind approach that uses a uniform profile over the input domain. First, the motivation and the theoretical foundation of statistical testing are presented. Then the feasibility of designing statistical test patterns is exemplified on a safety-critical component from the nuclear industry, and the fault-revealing power of these patterns is assessed through experiments conducted at two different levels: (i) unit testing of four functions extracted from the industrial component, statistical test data being designed according to classical structural criteria; (ii) testing of the whole component, statistical test data being designed from behaviour models deduced from the component specification. The results show the high fault-revealing power of statistical testing, and its greater efficiency in comparison to deterministic and random testing.
Empirical Software Engineering | 2007
Hélène Waeselynck; Pascale Thévenod-Fosse; Olfa Abdellatif-Kaddour
This paper investigates a measurement approach to support the implementation of Simulated Annealing (SA) applied to test generation. SA, like other metaheuristics, is a generic technique that must be tuned to the testing problem under consideration. Finding an adequate setting of SA parameters, one that will offer good performance for the target problem, is known to be difficult. Our measurement approach is intended to guide the implementation choices to be made. It builds upon advanced research on how to characterize search problems and the dynamics of metaheuristic techniques applied to them. Central to this research is the concept of landscape. Existing measures of landscape have mainly been applied to combinatorial problems considered in complexity theory. We show that some of these measures can be useful for testing problems as well. The diameter and autocorrelation are retained to study the adequacy of alternative settings of SA parameters. A new measure, the Generation Rate of Better Solutions (GRBS), is introduced to monitor convergence of the search process and implement stopping criteria. The measurement approach is applied to various case studies, and allows us to successfully revisit a problem arising from our previous work on testing control systems.
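A minimal sketch of simulated annealing with a GRBS-style stopping rule is given below. The window size, threshold, cooling schedule, and toy cost function are illustrative assumptions, not the paper's settings; only the idea of stopping when better solutions stop being generated comes from the abstract:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95,
                        window=50, min_rate=0.02, rng=None):
    """SA with a stopping rule inspired by the Generation Rate of
    Better Solutions: stop when the fraction of recent moves that
    improved on the best-so-far falls below `min_rate`."""
    rng = rng or random.Random(0)
    x, best, t = x0, x0, t0
    recent = []  # 1 = move improved the best-so-far, 0 = it did not
    while True:
        y = neighbor(x, rng)
        # Metropolis acceptance: always accept improvements, accept
        # degradations with probability exp(-delta / t).
        if cost(y) <= cost(x) or rng.random() < math.exp((cost(x) - cost(y)) / t):
            x = y
        improved = cost(x) < cost(best)
        if improved:
            best = x
        recent.append(1 if improved else 0)
        if len(recent) >= window:
            recent = recent[-window:]
            if sum(recent) / window < min_rate:
                return best  # generation rate of better solutions collapsed
        t *= alpha

# Toy problem: minimise (x - 3)^2 over the integers (illustration only).
best = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    x0=20,
)
```

The appeal of such a rule, as the abstract suggests, is that the stopping criterion is driven by an observable property of the search dynamics rather than by a fixed iteration budget.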
ieee international symposium on fault tolerant computing | 1997
Pascale Thévenod-Fosse; Hélène Waeselynck
Statistical testing is based on a probabilistic generation of test data: structural or functional criteria serve as guides for defining an input profile and a test size. Previous work has confirmed the high fault revealing power of this approach for procedural programs; it is now investigated for object-oriented programs. A method for incremental statistical testing is defined at the cluster level, based on the class inheritance hierarchy. Starting from the root class of the program, descendant class(es) are gradually added and test data are designed for (i) structural testing of newly defined features and, (ii) regression testing of inherited features. The feasibility of the method is exemplified by a small case study (a Travel Agency) implemented in Eiffel.
international symposium on software reliability engineering | 2001
Philippe Chevalley; Pascale Thévenod-Fosse
This paper presents an empirical study of the effectiveness of test cases generated from UML state diagrams using transition coverage as the testing criterion. Test case production is mainly based on an adaptation of a probabilistic method called statistical testing based on testing criteria. This technique was automated with the aid of the Rational Software Corporation's Rose RealTime tool. The test strategy investigated combines statistical test cases with a few deterministic test cases focused on domain boundary values. Its feasibility is exemplified on a research version of an avionics system implemented in Java: the Flight Guidance System case study (14 concurrent state diagrams). Then, the results of an empirical evaluation of the effectiveness of the created test cases are presented. The evaluation was performed using mutation analysis to assess the error detection power of the test cases on more than 1500 faults seeded one by one in the Java source code (115 classes, 6500 LOC). A detailed analysis of the test results allows us to draw initial conclusions on the expected strengths and weaknesses of the proposed test strategy.
annual european computer conference | 1991
Pascale Thévenod-Fosse
Random or statistical testing consists of exercising a system by supplying it with input values that are randomly selected according to a defined probability distribution on the input domain. The specific role one can expect from statistical testing in a software validation process is pointed out. The notion of test quality with respect to the experiment goal allows the testing time to be adjusted to a target test quality. Numerical results illustrate the strengths and limits of statistical testing as a software validation tool, in the present state of the art.
european dependable computing conference | 1999
Hélène Waeselynck; Pascale Thévenod-Fosse
A test strategy is presented that makes use of information obtained from OO analysis and design documents to determine the testing levels (unit, integration) and the associated test objectives. It defines solutions for some of the OO testing issues; here, emphasis is put on applications which consist of concurrent objects linked by client-server relationships. Two major concerns have guided the choice of the proposed techniques: component reusability, and nondeterminism induced by asynchronous communication between objects. The strategy is illustrated on a control program for an existing production cell taken from a metal-processing plant in Karlsruhe. The program was developed using the Fusion method and implemented in Ada 95. We used a probabilistic method for generating test inputs, called statistical testing. Test experiments were conducted from the unit to the system levels, and a few errors were detected.