Stefano Di Alesio
Simula Research Laboratory
Publications
Featured research published by Stefano Di Alesio.
high assurance systems engineering | 2011
Mehrdad Sabetzadeh; Davide Falessi; Lionel C. Briand; Stefano Di Alesio; Dag McGeorge; Vidar Ahjem; Jonas Borg
New technologies typically involve innovative aspects that are not addressed by the existing normative standards and hence are not assessable through common certification procedures. To ensure that new technologies can be implemented in a safe and reliable manner, a specific kind of assessment is performed, which in many industries, e.g., the energy sector, is known as Technology Qualification (TQ). TQ aims at demonstrating with an acceptable level of confidence that a new technology will function within specified limits. Expert opinion plays an important role in TQ, both to identify the safety and reliability evidence that needs to be developed, and to interpret the evidence provided. Hence, it is crucial to apply a systematic process for eliciting expert opinions, and to use the opinions for measuring the satisfaction of a technology's safety and reliability objectives. In this paper, drawing on the concept of assurance cases, we propose a goal-based approach for TQ. The approach, which is supported by a software tool, enables analysts to quantitatively reason about the satisfaction of a technology's overall goals and further to identify the aspects that must be improved to increase goal satisfaction. The three main components enabling quantitative assessment are goal models, expert elicitation, and probabilistic simulation. We report on an industrial pilot study where we apply our approach for assessing a new offshore technology.
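The combination of goal models, expert elicitation, and probabilistic simulation can be illustrated with a minimal sketch. The goal names, the triangular distributions, and the conservative min() aggregation rule below are assumptions for illustration, not the paper's actual model or tool.

```python
# Minimal sketch (not the paper's tool): Monte Carlo aggregation of
# expert-elicited satisfaction estimates over a small AND-decomposed goal tree.
import random

# Expert opinion per leaf goal, elicited as (low, mode, high) triangular bounds
# on the probability that the goal is satisfied (values invented for illustration).
leaf_goals = {
    "material_strength_verified":  (0.70, 0.85, 0.95),
    "fatigue_life_demonstrated":   (0.60, 0.80, 0.90),
    "installation_procedure_safe": (0.75, 0.90, 0.98),
}

def simulate_top_goal(n_samples=100_000):
    """Estimate satisfaction of the top-level goal by sampling each leaf
    and combining samples with a conservative min() rule (AND-decomposition)."""
    satisfied = 0.0
    for _ in range(n_samples):
        samples = [random.triangular(lo, hi, mode)
                   for (lo, mode, hi) in leaf_goals.values()]
        satisfied += min(samples)      # the weakest leaf bounds the parent
    return satisfied / n_samples

print(f"Estimated top-goal satisfaction: {simulate_top_goal():.3f}")
```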
model driven engineering languages and systems | 2012
Shiva Nejati; Stefano Di Alesio; Mehrdad Sabetzadeh; Lionel C. Briand
Software safety certification needs to address non-functional constraints with safety implications, e.g., deadlines, throughput, and CPU and memory usage. In this paper, we focus on CPU usage constraints and provide a framework to support the derivation of test cases that maximize the chances of violating CPU usage requirements. We develop a conceptual model specifying the generic abstractions required for analyzing CPU usage and provide a mapping between these abstractions and UML/MARTE. Using this model, we formulate CPU usage analysis as a constraint optimization problem and provide an implementation of our approach in a state-of-the-art optimization tool. We report an application of our approach to a case study from the maritime and energy domain. Through this case study, we argue that our approach (1) can be applied with a practically reasonable overhead in an industrial setting, and (2) is effective for identifying test cases that maximize CPU usage.
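The core idea of casting CPU usage analysis as an optimization over task arrival times can be sketched as follows. The task parameters, the demand metric, and the brute-force enumeration (where the paper uses a constraint optimization solver) are all assumptions for illustration.

```python
from itertools import product

# (earliest arrival, latest arrival, execution time) per task, in ticks (assumed)
TASKS = [(0, 4, 3), (0, 4, 2), (0, 4, 2)]
WINDOW = 6  # length of the observation window in ticks

def demand(arrivals):
    """CPU demand falling inside the window, assuming each task can run to
    completion once released; demand >= WINDOW means the CPU is saturated."""
    return sum(min(exec_t, WINDOW - arr)
               for arr, (_, _, exec_t) in zip(arrivals, TASKS))

# Enumerate all arrival combinations and keep the usage-maximizing one; the
# paper formulates this search as a constraint optimization problem instead.
best = max(product(*[range(lo, hi + 1) for lo, hi, _ in TASKS]), key=demand)
print(f"worst-case arrivals: {best}, demand: {demand(best)}/{WINDOW} ticks")
```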
international conference on software testing verification and validation | 2012
Stefano Di Alesio; Arnaud Gotlieb; Shiva Nejati; Lionel C. Briand
Safety-critical real-time applications are typically subject to stringent timing constraints which are dictated by the surrounding physical environments. Specifically, tasks in these applications need to finish their execution before given deadlines, otherwise the system is deemed unsafe. It is therefore important to test real-time systems for deadline misses. In this paper, we present a strategy for testing real-time applications that aims at finding test scenarios in which deadline misses become more likely. We identify such test scenarios by searching the possible ways that a set of real-time tasks can be executed according to the scheduling policy of the operating system on which they are running. We formulate this search problem using a constraint optimization model that includes (1) a set of constraints capturing how a given set of tasks with real-time constraints are executed according to a particular scheduling policy, and (2) a cost function that estimates how likely the given tasks are to miss their deadlines. We implement our constraint optimization model in ILOG SOLVER, apply our model to several examples, and report on the performance results.
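The two ingredients, a scheduling model and a deadline-miss cost function, can be sketched in a few lines. This is an illustrative stand-in, not the ILOG SOLVER model; the task set and the single-CPU fixed-priority preemptive policy are assumptions.

```python
# (priority: lower value = higher priority, execution time, relative deadline)
TASKS = [(0, 3, 5), (1, 2, 6), (2, 4, 10)]

def deadline_slack(arrivals):
    """Return per-task slack (deadline - finish time); a negative value is a
    deadline miss. Single CPU, preemptive, fixed-priority scheduling."""
    remaining = [exec_t for _, exec_t, _ in TASKS]
    finish = [None] * len(TASKS)
    t = 0
    while any(r > 0 for r in remaining):
        # pick the highest-priority released task with work left
        ready = [i for i, r in enumerate(remaining)
                 if r > 0 and arrivals[i] <= t]
        if not ready:  # idle until the next arrival of an unfinished task
            t = min(a for i, a in enumerate(arrivals) if remaining[i] > 0)
            continue
        i = min(ready, key=lambda i: TASKS[i][0])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    return [arrivals[i] + TASKS[i][2] - finish[i] for i in range(len(TASKS))]

# A search over arrival times would minimize the smallest slack, driving the
# scenario toward deadline misses.
print(deadline_slack([0, 0, 0]))  # -> [2, 1, 1]: no miss for these arrivals
```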
ACM Transactions on Software Engineering and Methodology | 2015
Stefano Di Alesio; Lionel C. Briand; Shiva Nejati; Arnaud Gotlieb
Tasks in real-time embedded systems (RTES) are often subject to hard deadlines that constrain how quickly the system must react to external inputs. These inputs and their timing vary in a large domain depending on the environment state and can never be fully predicted prior to system execution. Therefore, approaches for stress testing must be developed to uncover possible deadline misses of tasks for different input arrival times. In this article, we describe stress-test case generation as a search problem over the space of task arrival times. Specifically, we search for worst-case scenarios maximizing deadline misses, where each scenario characterizes a test case. In order to scale our search to large industrial-size problems, we combine two state-of-the-art search strategies, namely, genetic algorithms (GA) and constraint programming (CP). Our experimental results show that, in comparison with GA and CP in isolation, GA+CP achieves nearly the same effectiveness as CP and the same efficiency and solution diversity as GA, thus combining the advantages of the two strategies. In light of these results, we conclude that a combined GA+CP approach to stress testing is more likely to scale to large and complex systems.
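A minimal sketch of the GA+CP combination, under simplifying assumptions: a genetic algorithm explores arrival-time vectors globally, while an exhaustive one-gene-at-a-time re-optimization stands in for the constraint programming step applied to the best individual. The bounds and the placeholder fitness function are invented; in the paper the fitness would be a deadline-miss score.

```python
import random

BOUNDS = [(0, 20)] * 5   # arrival-time window per task (assumed)

def fitness(arrivals):   # placeholder: higher = closer to a deadline miss
    return -sum(abs(a - 13) for a in arrivals)

def cp_refine(ind):
    """Exhaustively re-optimize one gene at a time: a crude stand-in for
    CP's complete search on a restricted neighbourhood."""
    for i, (lo, hi) in enumerate(BOUNDS):
        ind[i] = max(range(lo, hi + 1),
                     key=lambda v: fitness(ind[:i] + [v] + ind[i + 1:]))
    return ind

pop = [[random.randint(lo, hi) for lo, hi in BOUNDS] for _ in range(30)]
for _ in range(50):                   # GA loop: selection, crossover, mutation
    pop.sort(key=fitness, reverse=True)
    pop[0] = cp_refine(pop[0])        # CP-style intensification on the elite
    parents = pop[:10]
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = random.sample(parents, 2)
        child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
        i = random.randrange(len(child))                # point mutation
        child[i] = random.randint(*BOUNDS[i])
        children.append(child)
    pop = parents + children
print(max(pop, key=fitness))
```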
mining software repositories | 2016
Thomas Rolfsnes; Leon Moonen; Stefano Di Alesio; Razieh Behjati; Dave W. Binkley
Past research has proposed association rule mining as a means to uncover the evolutionary coupling from a system’s change history. These couplings have various applications, such as improving system decomposition and recommending related changes during development. The strength of the coupling can be characterized using a variety of interestingness measures. Existing recommendation engines typically use only the rule with the highest interestingness value in situations where more than one rule applies. In contrast, we argue that multiple applicable rules indicate increased evidence, and hypothesize that the aggregation of such rules can be exploited to provide more accurate recommendations. To investigate this hypothesis we conduct an empirical study on the change histories of two large industrial systems and four large open source systems. As aggregators we adopt three cumulative gain functions from information retrieval. The experiments evaluate these three aggregators using 39 different rule interestingness measures. The results show that aggregation has a significant impact on most measures' values and, furthermore, leads to a significant improvement in the resulting recommendations.
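The aggregation idea can be sketched on assumed data: when several mined rules recommend the same artifact, their interestingness values are combined with a discounted-cumulative-gain-style function instead of keeping only the strongest rule. The file names and rule values below are invented for illustration.

```python
from collections import defaultdict
from math import log2

# (recommended artifact, interestingness of the rule that recommends it)
applicable_rules = [
    ("src/net/socket.c", 0.90),
    ("src/net/socket.c", 0.70),   # a second, weaker rule for the same file
    ("src/net/socket.c", 0.40),
    ("docs/protocol.md", 0.85),
]

by_artifact = defaultdict(list)
for artifact, value in applicable_rules:
    by_artifact[artifact].append(value)

def dcg(values):
    """Discounted cumulative gain: weaker rules still add evidence,
    but with logarithmically decreasing weight."""
    ranked = sorted(values, reverse=True)
    return sum(v / log2(i + 2) for i, v in enumerate(ranked))

# socket.c now outranks protocol.md thanks to the accumulated evidence
for artifact, values in by_artifact.items():
    print(f"{artifact}: max={max(values):.2f}  dcg={dcg(values):.2f}")
```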
ieee international conference on software analysis evolution and reengineering | 2016
Thomas Rolfsnes; Stefano Di Alesio; Razieh Behjati; Leon Moonen; Dave W. Binkley
Software change impact analysis aims to find artifacts potentially affected by a change. Typical approaches apply language-specific static or dynamic dependence analysis, and are thus restricted to homogeneous systems. This restriction is a major drawback given today's increasingly heterogeneous software. Evolutionary coupling has been proposed as a language-agnostic alternative that mines relations between source-code entities from the system's change history. Unfortunately, existing techniques based on evolutionary coupling fall short. For example, using Singular Value Decomposition (SVD) quickly becomes computationally expensive. An efficient alternative applies targeted association rule mining, but the most widely known approach (ROSE) has restricted applicability: experiments on two large industrial systems, and four large open source systems, show that ROSE can only identify dependencies about 25% of the time. To overcome this limitation, we introduce TARMAQ, a new algorithm for mining evolutionary coupling. Empirically evaluated on the same six systems, TARMAQ performs consistently better than ROSE and SVD, is applicable 100% of the time, and runs orders of magnitude faster than SVD. We conclude that the proposed algorithm is a significant step towards achieving robust change impact analysis for heterogeneous systems.
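A rough sketch of the general flavor of targeted association rule mining on a change history follows; this is a reading of the technique class, not the published TARMAQ algorithm, and the history and query are invented. The key point is that applicability does not require the full query to appear verbatim in past transactions, only an overlap.

```python
from collections import Counter

history = [                      # past commits as sets of changed files
    {"a.c", "a.h", "test_a.py"},
    {"a.c", "b.c"},
    {"a.c", "a.h", "build.xml"},
    {"c.c", "c.h"},
]
query = {"a.c", "a.h"}           # files changed so far; what else is impacted?

# Keep only transactions sharing files with the query, ranked by overlap size.
relevant = [tx for tx in history if tx & query]
scores = Counter()
for tx in relevant:
    overlap = len(tx & query)
    for candidate in tx - query:
        scores[candidate] += overlap   # larger overlaps give stronger evidence

print(scores.most_common())  # [('test_a.py', 2), ('build.xml', 2), ('b.c', 1)]
```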
automated software engineering | 2016
Leon Moonen; Stefano Di Alesio; David W. Binkley; Thomas Rolfsnes
Association rule mining is an unsupervised learning technique that infers relationships among items in a data set. This technique has been successfully used to analyze a system's change history and uncover evolutionary coupling between system artifacts. Evolutionary coupling can, in turn, be used to recommend artifacts that are potentially affected by a given set of changes to the system. In general, the quality of such recommendations is affected by (1) the values selected for various parameters of the mining algorithm, (2) characteristics of the set of changes used to derive a recommendation, and (3) characteristics of the system's change history for which recommendations are generated. In this paper, we empirically investigate the extent to which certain choices for these factors affect change recommendation. Specifically, we conduct a series of systematic experiments on the change histories of two large industrial systems and eight large open source systems, in which we control the size of the change set for which to derive a recommendation, the measure used to assess the strength of the evolutionary coupling, and the maximum size of historical changes taken into account when inferring these couplings. We use the results from our study to derive a number of practical guidelines for applying association rule mining for change recommendation.
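The three experimental knobs can be made concrete in a toy mining pipeline; the parameter names, the data, and the choice of confidence as the interestingness measure are assumptions for illustration.

```python
history = [
    {"a.c", "a.h"},
    {"a.c", "a.h", "test_a.py"},
    {"a.c", "b.c", "c.c", "d.c", "e.c", "f.c"},   # large, noisy commit
]

MAX_TX_SIZE = 5          # knob 3: cap on historical transaction size
query = {"a.c"}          # knob 1: the change set driving the recommendation

filtered = [tx for tx in history if len(tx) <= MAX_TX_SIZE]

def confidence(antecedent, consequent):
    """Knob 2: one possible interestingness measure, P(consequent | antecedent)."""
    with_ante = [tx for tx in filtered if antecedent <= tx]
    return (sum(consequent in tx for tx in with_ante) / len(with_ante)
            if with_ante else 0.0)

print(confidence(query, "a.h"))   # 1.0 once the noisy commit is filtered out
```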
source code analysis and manipulation | 2016
Leon Moonen; Stefano Di Alesio; Thomas Rolfsnes; Dave W. Binkley
The goal of Software Change Impact Analysis is to identify artifacts (typically source-code files) potentially affected by a change. Recently, there has been increased interest in mining software change impact based on evolutionary coupling. A particularly promising approach uses association rule mining to uncover potentially affected artifacts from patterns in the system's change history. Two main considerations when using this approach are the history length, the number of transactions from the change history used to identify the impact of a change, and history age, the number of transactions that have occurred since patterns were last mined from the history. Although history length and age can significantly affect the quality of mining results, few guidelines exist on how to best select appropriate values for these two parameters. In this paper, we empirically investigate the effects of history length and age on the quality of change impact analysis using mined evolutionary couplings. Specifically, we report on a series of systematic experiments involving the change histories of two large industrial systems and 17 large open source systems. In these experiments, we vary the length and age of the history used to mine software change impact, and assess how this affects precision and applicability. Results from the study are used to derive practical guidelines for choosing history length and age when applying association rule mining to conduct software change impact analysis.
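The two parameters are easy to picture as slices of the commit list. A minimal sketch on an assumed, oldest-to-newest ordered history:

```python
def select_history(commits, length, age):
    """Drop the `age` most recent commits (the mined model is that old), then
    keep at most `length` commits counting back from the snapshot point."""
    snapshot = commits[:len(commits) - age] if age else commits
    return snapshot[-length:]

commits = [f"commit_{i}" for i in range(1, 101)]    # toy 100-commit history
print(select_history(commits, length=30, age=10))   # commits 61..90
```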
software engineering and advanced applications | 2015
Sagar Sen; Stefano Di Alesio; Dusica Marijan; Arnab Sarkar
Self-adaptive software adapts its behavior to the operational context via automatic run-time reconfiguration of software components. Particular reconfigurations may negatively affect the system's Quality of Service (QoS), and therefore their impact on system performance needs to be thoroughly evaluated. In this paper, we present an approach, based on Combinatorial Interaction Testing (CIT), that generates a sequence of configurations aimed at evaluating the extent to which reconfigurations affect the system QoS. Specifically, we transform a Classification Tree Model (CTM) of the configuration domain to a Constraint Satisfaction Problem (CSP) in ALLOY, whose solution is a sequence of reconfigurations achieving T-wise coverage between system features, and R-wise coverage between configurations in the sequence. The resolution of the CSP is performed by an incremental growth algorithm that divides the generation of the sequence into sub-problems, and merges the results into a final set of test configurations. Preliminary validation in a self-adaptive vision system shows that our methodology effectively identifies critical reconfigurations exhibiting a high variation in QoS. This result encourages the use of CIT as a strategy to evaluate the performance of self-adaptive systems.
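T-wise coverage itself can be illustrated with a greedy generator for the pairwise (T=2) case; this stands in for the paper's Alloy CSP, and the vision-system features, values, and greedy strategy are all assumptions for illustration.

```python
from itertools import combinations, product

features = {"resolution": ["low", "high"],
            "tracking":   ["on", "off"],
            "encoder":    ["h264", "mjpeg"]}

names = list(features)
uncovered = {((f1, v1), (f2, v2))
             for f1, f2 in combinations(names, 2)
             for v1 in features[f1] for v2 in features[f2]}

configs = []
while uncovered:
    # pick the configuration covering the most still-uncovered value pairs
    best = max(product(*features.values()),
               key=lambda c: sum(p in uncovered
                                 for p in combinations(zip(names, c), 2)))
    cfg = dict(zip(names, best))
    configs.append(cfg)
    uncovered -= {p for p in uncovered if all(cfg[f] == v for f, v in p)}

print(configs)   # a short sequence of configurations achieving 2-wise coverage
```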
Reliability Engineering & System Safety | 2013
Mehrdad Sabetzadeh; Davide Falessi; Lionel C. Briand; Stefano Di Alesio
New technologies typically involve innovative aspects that are not addressed by the existing normative standards and hence are not assessable through common certification procedures. To ensure that new technologies can be implemented in a safe and reliable manner, a specific kind of assessment is performed, which in many industries, e.g., the energy sector, is known as Technology Qualification (TQ). TQ aims at demonstrating with an acceptable level of confidence that a new technology will function within specified limits. Expert opinion plays an important role in TQ, both to identify the safety and reliability evidence that needs to be developed and to interpret the evidence provided. Since there are often multiple experts involved in TQ, it is crucial to apply a structured process for eliciting expert opinions, and to use this information systematically when analyzing the satisfaction of the technology's safety and reliability objectives. In this paper, we present a goal-based approach for TQ. Our approach enables analysts to quantitatively reason about the satisfaction of the technology's overall goals and further to identify the aspects that must be improved to increase goal satisfaction. The approach is founded on three main components: goal models, expert elicitation, and probabilistic simulation. We describe a tool, named Modus, that we have developed in support of our approach. We provide an extensive empirical validation of our approach through two industrial case studies and a survey.
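A follow-on to the earlier goal-model sketch: a crude sensitivity check identifying which leaf goal most raises top-goal satisfaction when improved, mirroring the approach's use for prioritizing further evidence. The goal model, the +0.05 perturbation, and the min() aggregation remain assumptions, not the Modus tool's actual computation.

```python
import random

leaf_goals = {"material_strength_verified":  (0.70, 0.85, 0.95),
              "fatigue_life_demonstrated":   (0.60, 0.80, 0.90),
              "installation_procedure_safe": (0.75, 0.90, 0.98)}

def top_goal(goals, n=50_000):
    """Monte Carlo estimate of top-goal satisfaction (AND-decomposition)."""
    return sum(min(random.triangular(lo, hi, mode)
                   for lo, mode, hi in goals.values())
               for _ in range(n)) / n

baseline = top_goal(leaf_goals)
for name, (lo, mode, hi) in leaf_goals.items():
    # bump one leaf's elicited bounds by 0.05 and measure the effect
    bumped = dict(leaf_goals,
                  **{name: (lo + 0.05, mode + 0.05, min(hi + 0.05, 1.0))})
    print(f"improving {name}: +{top_goal(bumped) - baseline:.3f}")
```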