Alberto Gonzalez-Sanchez
Delft University of Technology
Publications
Featured research published by Alberto Gonzalez-Sanchez.
Automated Software Engineering | 2011
Alberto Gonzalez-Sanchez; Hans-Gerhard Gross; Arjan J. C. van Gemund
In practically all development processes, regression tests are used to detect the presence of faults after a modification. If faults are detected, a fault localization algorithm can be used to reduce the manual inspection cost. However, when test case prioritization is used to enhance the test suite's rate of fault detection (e.g., via statement coverage), the diagnostic information gain per test is not optimal, which results in needless inspection cost during diagnosis. We present RAPTOR, a test prioritization algorithm for fault localization based on reducing the similarity between statement execution patterns as testing progresses. Unlike previous diagnostic prioritization algorithms, RAPTOR does not require false negative information and is much less complex. Experimental results from the Software Infrastructure Repository's benchmarks show that RAPTOR is the best technique under realistic conditions, with average cost reductions of 40% with respect to the next best technique, and with negligible impact on fault detection capability.
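The core idea of RAPTOR, as described above, is to pick the next test so that statements become better distinguishable by their execution patterns. The sketch below illustrates that idea in Python; the greedy scoring function (expected residual ambiguity-group size) and the toy coverage data are simplifications for illustration, not the algorithm's exact formulation.

```python
# Minimal sketch of prioritizing tests by reducing the similarity of
# statement execution patterns (ambiguity groups). The scoring function
# is a simplification for illustration, not RAPTOR's exact formula.
from collections import Counter
from typing import Dict, List, Sequence

def residual_ambiguity(signatures: Sequence[tuple]) -> float:
    """Expected size of the group a statement ends up in (lower is better)."""
    n = len(signatures)
    groups = Counter(signatures)
    return sum((size / n) * size for size in groups.values())

def prioritize(coverage: Dict[str, List[int]], statements: int) -> List[str]:
    """coverage maps a test name to its 0/1 statement-coverage row."""
    remaining = dict(coverage)
    signatures = [()] * statements          # execution pattern seen so far
    order = []
    while remaining:
        # Greedily pick the test whose coverage row best splits the
        # current ambiguity groups.
        def score(test):
            row = remaining[test]
            return residual_ambiguity(
                [sig + (row[s],) for s, sig in enumerate(signatures)])
        best = min(remaining, key=score)
        row = remaining.pop(best)
        signatures = [sig + (row[s],) for s, sig in enumerate(signatures)]
        order.append(best)
    return order

# Example: three tests covering four statements.
cov = {"t1": [1, 1, 0, 0], "t2": [1, 0, 1, 0], "t3": [1, 1, 1, 0]}
print(prioritize(cov, statements=4))
```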
International Conference on Quality Software | 2010
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
Test prioritization techniques select test cases that maximize confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of information that is useful for fault localization. This deteriorates the quality of the diagnosis, making the subsequent debugging phase more expensive and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite. When considering QA cost as the combination of testing cost and debugging cost, on the Siemens set, our test case prioritization approach shows up to a 53% reduction of the overall QA cost compared with the next best technique.
Software: Practice and Experience | 2011
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
During regression testing, test prioritization techniques select test cases that maximize confidence in the correctness of the system when the resources for quality assurance (QA) are limited. In the event of a test failing, the fault at the root of the failure has to be localized, adding an extra debugging cost that has to be taken into account as well. However, test suites that are prioritized for failure detection can reduce the amount of information that is useful for fault localization. This deteriorates the quality of the diagnosis, making the subsequent debugging phase more expensive and defeating the purpose of the test cost minimization. In this paper we introduce a new test case prioritization approach that maximizes the improvement of the diagnostic information per test. Our approach minimizes the loss of diagnostic quality in the prioritized test suite. When considering QA cost as a combination of testing cost and debugging cost, on our benchmark set, our test case prioritization approach shows reductions of up to 60% of the overall combined cost of testing and debugging compared with the next best technique.
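The sketch below illustrates the diagnostic-information idea behind this and the previous entry: pick the next test by its expected information gain on a single-fault Bayesian diagnosis. It assumes, for simplicity, a uniform prior and no false negatives (a test fails exactly when it executes the faulty statement); it is an illustration of the concept, not the paper's exact estimator.

```python
# Minimal sketch of choosing the next test by expected information gain on a
# single-fault Bayesian diagnosis, assuming (for simplicity) no false negatives:
# a test fails exactly when it executes the faulty statement.
import math
from typing import Dict, List

def entropy(p: List[float]) -> float:
    return -sum(x * math.log2(x) for x in p if x > 0)

def info_gain(prior: List[float], row: List[int]) -> float:
    p_fail = sum(p for p, c in zip(prior, row) if c)
    p_pass = 1.0 - p_fail
    def posterior(outcome_fail: bool) -> List[float]:
        mass = p_fail if outcome_fail else p_pass
        if mass == 0:
            return prior
        return [p / mass if bool(c) == outcome_fail else 0.0
                for p, c in zip(prior, row)]
    return (entropy(prior)
            - p_fail * entropy(posterior(True))
            - p_pass * entropy(posterior(False)))

def next_test(prior: List[float], coverage: Dict[str, List[int]]) -> str:
    return max(coverage, key=lambda t: info_gain(prior, coverage[t]))

# Example: uniform prior over four statements, three candidate tests.
prior = [0.25] * 4
cov = {"t1": [1, 1, 0, 0], "t2": [1, 0, 1, 1], "t3": [1, 1, 1, 1]}
print(next_test(prior, cov))   # picks the test that best halves the candidates
```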
Self-Adaptive and Self-Organizing Systems | 2011
Éric Piel; Alberto Gonzalez-Sanchez; Hans-Gerhard Gross; Arjan J. C. van Gemund
An essential requirement for the operation of self-adaptive systems is information about their internal health state, i.e., the extent to which the constituent software and hardware components are still operating reliably. Accurate health information enables systems to recover automatically from (intermittent) failures in their components through selective restarting or self-reconfiguration. This paper explores and assesses the utility of spectrum-based fault localisation (SFL) combined with automatic health monitoring for self-adaptive systems. The applicability of this combination is evaluated through simulation of online diagnosis scenarios, and through implementation in an adaptive surveillance system inspired by our industrial partner. The results of the studies performed confirm that the combination of SFL with online monitoring can successfully provide health information and locate problematic components, so that adequate self-* techniques can be deployed.
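Spectrum-based fault localisation itself can be summarised in a few lines: each run contributes a row of component involvement plus a pass/fail flag, and components are ranked by how well their involvement pattern matches the observed failures. The sketch below uses the Ochiai similarity coefficient, a common SFL choice; the spectrum data is illustrative.

```python
# Minimal sketch of spectrum-based fault localisation (SFL): rank components
# by the Ochiai similarity between their involvement pattern and the error
# vector. Component and test data here are illustrative.
import math
from typing import List

def ochiai(spectrum: List[List[int]], errors: List[int]) -> List[float]:
    scores = []
    n_comp = len(spectrum[0])
    for c in range(n_comp):
        n11 = sum(1 for row, e in zip(spectrum, errors) if row[c] and e)      # involved, failed
        n10 = sum(1 for row, e in zip(spectrum, errors) if row[c] and not e)  # involved, passed
        n01 = sum(1 for row, e in zip(spectrum, errors) if not row[c] and e)  # not involved, failed
        denom = math.sqrt((n11 + n01) * (n11 + n10))
        scores.append(n11 / denom if denom else 0.0)
    return scores

# Rows are observed runs (1 = component involved), errors mark failing runs.
spectrum = [[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]]
errors = [1, 1, 0]
print(ochiai(spectrum, errors))   # component 0 gets the highest suspiciousness
```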
ACM Symposium on Applied Computing | 2011
Alberto Gonzalez-Sanchez; Hans-Gerhard Gross; Arjan J. C. van Gemund
When failures occur during software testing, automated software fault localization helps to diagnose their root causes and identify the defective statements of a program to support debugging. Diagnosis is carried out by selecting test cases in such a way that their pass or fail information will narrow down the set of fault candidates and, eventually, pinpoint the root cause. An essential ingredient of effective and efficient fault localization is knowledge about the false negative rate of tests, which is related to the rate at which defective statements of a program will exhibit failures. In current fault localization processes, false negative rates are either ignored completely or merely estimated a posteriori as part of the diagnosis. In this paper, we study the reduction in diagnosis effort when false negative rates are known a priori. We deduce this information from testability, following the propagation-infection-execution (PIE) approach. Experiments with real programs suggest significant improvement in the diagnosis process, in both the single-fault and multiple-fault cases. Compared to the next-best technique, PIE-based false negative rate information yields a fault localization effort reduction of up to 80% for systems with a single fault, and up to 60% for systems with multiple faults.
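The sketch below illustrates how an a priori false negative rate per statement can be folded into a single-fault Bayesian diagnosis. The per-statement rates and the simple likelihood model are illustrative assumptions, not the PIE-based estimates or the exact model used in the paper.

```python
# Minimal sketch of a single-fault Bayesian update that uses an a priori
# false negative rate h[j] per statement (the probability that executing the
# faulty statement does NOT cause a failure), e.g. estimated from testability.
# Values and names are illustrative.
from typing import List

def update(prior: List[float], h: List[float],
           row: List[int], failed: bool) -> List[float]:
    post = []
    for p, hj, cov in zip(prior, h, row):
        if cov:
            like = (1.0 - hj) if failed else hj
        else:
            like = 0.0 if failed else 1.0   # a failure implicates only executed code
        post.append(p * like)
    total = sum(post)
    return [x / total for x in post] if total else prior

prior = [0.25] * 4
h = [0.1, 0.5, 0.9, 0.3]                  # per-statement false negative rates
posterior = update(prior, h, row=[1, 1, 0, 0], failed=True)
print(posterior)                          # statement 0 becomes the prime suspect
```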
Predictive Models in Software Engineering | 2010
Alberto Gonzalez-Sanchez; Arjan J. C. van Gemund
Background: Automated diagnosis of software defects can drastically increase debugging efficiency, improving reliability and time-to-market. Current low-cost automatic fault diagnosis techniques, such as spectrum-based fault localization (SFL), merely use information on whether a component is involved in a passed/failed run or not. However, these approaches ignore information on component execution frequency, which can improve the accuracy of the diagnostic process. Aim: In this paper, we study the impact of exploiting component execution frequency on diagnostic quality. Method: We present a reasoning-based SFL approach, dubbed Zoltar-C, that exploits not only component involvement but also execution frequency, using an approximate Bayesian approach to compute the probabilities of the diagnostic candidates. Zoltar-C is evaluated and compared to other well-known low-cost techniques (such as Tarantula) using a set of programs available from the Software Infrastructure Repository. Results: Results show that, although Zoltar-C can theoretically be of added value, exploiting component frequency does not improve diagnostic accuracy on average. Conclusions: The major reason for this unexpected result is the highly biased sample of passing and failing tests provided with the programs under analysis. In particular, the ratio between passing and failing runs, which has a major impact on the probability computations, does not correspond to the false negative (failure) rates associated with the actually injected faults.
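The sketch below illustrates the general idea of letting execution frequency influence the diagnosis: the more often a run executes a candidate, the less plausible that candidate is if the run passed, and the more plausible if the run failed. The h**f likelihood used here is a simplification for illustration, not Zoltar-C's exact approximation.

```python
# Minimal sketch of exploiting execution frequency in a Bayesian diagnosis:
# if a run executes a candidate faulty component f times, the probability
# that none of those executions triggered a failure is modelled as h**f
# instead of h. Illustration of the frequency idea only.
from typing import List

def update_with_frequency(prior: List[float], h: List[float],
                          counts: List[int], failed: bool) -> List[float]:
    post = []
    for p, hj, f in zip(prior, h, counts):
        p_no_failure = hj ** f          # f independent chances to fail
        like = (1.0 - p_no_failure) if failed else p_no_failure
        post.append(p * like)
    total = sum(post)
    return [x / total for x in post] if total else prior

prior = [1 / 3] * 3
h = [0.8, 0.8, 0.8]
# The same involvement pattern, but with very different execution counts.
print(update_with_frequency(prior, h, counts=[1, 10, 0], failed=True))
```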
International Conference on Testing Software and Systems | 2010
Éric Piel; Alberto Gonzalez-Sanchez; Hans-Gerhard Gross
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition provided by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data flows than through other integration testing approaches.
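A virtual component, as described above, groups several data-flow components into a single unit with its own built-in test cases. The sketch below shows the general shape of such a wrapper; the stage functions and the AIS-style example data are hypothetical.

```python
# Minimal sketch of the virtual-component idea: a wrapper that treats a chain
# of data-flow components as a single testable unit with its own built-in
# test cases. Component and test names are hypothetical.
from typing import Callable, List, Tuple

class VirtualComponent:
    def __init__(self, stages: List[Callable]):
        self.stages = stages                       # the grouped data-flow stages

    def process(self, sample):
        for stage in self.stages:                  # push one sample through the flow
            sample = stage(sample)
        return sample

    def run_built_in_tests(self, cases: List[Tuple[object, object]]) -> bool:
        # Each built-in test case is an (input sample, expected output) pair.
        return all(self.process(inp) == expected for inp, expected in cases)

def parse(msg):
    # Hypothetical first stage: turn a raw message into a track record.
    return {"id": msg[0], "speed": float(msg[1])}

def flag_speeding(track):
    # Hypothetical second stage: annotate tracks exceeding a speed threshold.
    return {**track, "alert": track["speed"] > 30.0}

vc = VirtualComponent([parse, flag_speeding])
cases = [(("A1", "35.0"), {"id": "A1", "speed": 35.0, "alert": True})]
print(vc.run_built_in_tests(cases))                # True if the flow behaves as specified
```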
Proceedings of the 2009 ESEC/FSE Workshop on Software Integration and Evolution @ Runtime | 2009
Éric Piel; Alberto Gonzalez-Sanchez
Systems of Systems are large-scale, information-centric, component-based systems. Because they can be more easily expressed as an information flow, they are built following the data-flow paradigm. These systems have high availability requirements that make their runtime evolution necessary. This means that integration and system testing have to be performed at runtime as well. Existing techniques for runtime integration and testing are usually focused on component-based systems that follow the client-server paradigm, and are not well suited for data-flow systems. In this paper we present virtual components, a way of defining units of data-flow behaviour that greatly simplifies the definition and maintenance of integration tests when the system evolves at runtime. We present and discuss an example of how to use virtual components for this purpose.
Situation Awareness with Systems of Systems | 2013
Alberto Gonzalez-Sanchez; Éric Piel; Hans-Gerhard Gross; Arjan J. C. van Gemund
Maritime Safety and Security Systems of Systems (MSS SoS) evolve dynamically during operation, i.e., at runtime. After each runtime evolution, the quality of the integrated system of systems has to be verified again. It is therefore necessary to devise an appropriate verification strategy that not only achieves this goal, but also minimizes the cost (e.g., time, resources, disruption) of checking after each modification. During testing, test prioritization techniques heuristically select test cases to minimize the time to detect the presence of a fault. However, this overlooks the fact that once a fault has been detected, it must still be localized and isolated or repaired. Test suites prioritized for fault detection can reduce the amount of information that is useful for fault localization, increasing the cost of fault localization, e.g., with respect to randomly chosen tests. In this chapter we introduce fault localization prioritization and two new test case prioritization heuristics that greatly reduce the cost of fault localization (by up to 80%) with almost no increase in the fault detection cost.
International Conference on Software Testing, Verification and Validation Workshops | 2011
Alberto Gonzalez-Sanchez; Hans-Gerhard Gross; Arjan J. C. van Gemund
Diagnostic performance, measured in terms of the manual effort developers have to spend after faults are detected, is not the only important quality of a diagnosis. Efficiency, i.e., the number of tests required and the rate of convergence to the final diagnosis, is a very important quality of a diagnosis as well. In this paper we present an analytical model and a simulation model to predict the diagnostic efficiency of test suites when prioritized with the information gain (IG) algorithm. We show that, besides the size of the system itself, an optimal coverage density and a uniform coverage distribution are needed to achieve an efficient diagnosis. Our models allow us to decide whether using IG with our current test suite will provide good diagnostic efficiency, and enable us to define criteria for the generation or improvement of test suites.
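Two of the test-suite properties the models relate to diagnostic efficiency, coverage density and the uniformity of the coverage distribution, can be inspected with simple descriptive measures such as the ones sketched below; these are illustrative measures, not the analytical model from the paper.

```python
# Minimal sketch of two descriptive test-suite properties: coverage density
# (average fraction of components each test executes) and how uniformly the
# coverage is spread over components. For inspection only, not the paper's
# analytical model.
from statistics import pstdev
from typing import List

def coverage_density(matrix: List[List[int]]) -> float:
    return sum(sum(row) for row in matrix) / (len(matrix) * len(matrix[0]))

def coverage_uniformity(matrix: List[List[int]]) -> float:
    per_component = [sum(row[c] for row in matrix) for c in range(len(matrix[0]))]
    return pstdev(per_component)          # 0.0 means perfectly uniform coverage

# Rows are tests, columns are components (1 = covered).
matrix = [[1, 0, 1, 0],
          [0, 1, 1, 0],
          [1, 1, 0, 1]]
print(coverage_density(matrix))           # about 0.58 for this suite
print(coverage_uniformity(matrix))        # spread of per-component coverage counts
```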