Marcello Cinque
University of Naples Federico II
Publications
Featured research published by Marcello Cinque.
IEEE Transactions on Computers | 2012
C. Di Martino; Marcello Cinque; Domenico Cotroneo
Wireless Sensor Networks (WSNs) are widely recognized as a promising solution for building next-generation monitoring systems. Their industrial uptake is, however, still compromised by the low level of trust in their performance and dependability. Whereas analytical models represent a valid means to assess nonfunctional properties via simulation, their wide use is still limited by the complexity and dynamics of WSNs, which lead to unaffordable modeling costs. To reduce this gap between research achievements and industrial development, this paper presents a framework for the assessment of WSNs based on the automated generation of analytical models. The framework hides modeling details and allows designers to focus on simulation results to drive their design choices. Models are generated starting from a high-level specification of the system and from a preliminary characterization of its fault-free behavior, using behavioral simulators. The benefits of the framework are shown in the context of two case studies, based on the wireless monitoring of civil structures.
dependable systems and networks | 2007
Marcello Cinque; Domenico Cotroneo; Zbigniew Kalbarczyk; Ravishankar K. Iyer
While the new generation of hand-held devices, e.g., smart phones, supports a rich set of applications, the growing complexity of the hardware and runtime environment makes the devices susceptible to accidental errors and malicious attacks. Despite these concerns, very few studies have looked into the dependability of mobile phones. This paper presents a measurement-based failure characterization of mobile phones. The analysis starts with a high-level failure characterization of mobile phones based on data from publicly available web forums, where users post information on their experiences in using hand-held devices. This initial analysis is then used to guide the development of a failure data logger for collecting failure-related information on SymbianOS-based smart phones. Failure data was collected from 25 phones (in Italy and the USA) over a period of 14 months. Key findings indicate that: (i) the majority of kernel exceptions are due to memory access violation errors (56%) and heap management problems (18%), and (ii) on average users experience a failure (freeze or self-shutdown) every 11 days. While the study provides valuable insight into the failure sensitivity of smart phones, more data and further analysis are needed before generalizing the results.
IEEE Transactions on Software Engineering | 2013
Marcello Cinque; Domenico Cotroneo; Antonio Pecchia
Event logs have been widely used over the last three decades to analyze the failure behavior of a variety of systems. Nevertheless, the implementation of the logging mechanism lacks a systematic approach, and collected logs are often inaccurate at reporting software failures: this is a threat to the validity of log-based failure analysis. This paper analyzes the limitations of current logging mechanisms and proposes a rule-based approach to make logs effective for analyzing software failures. The approach leverages artifacts produced at system design time and puts forth a set of rules to formalize the placement of logging instructions within the source code. The validity of the approach, with respect to traditional logging mechanisms, is shown by means of around 12,500 software fault injection experiments on real-world systems.
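The idea of formalizing where logging instructions go can be sketched as a minimal Python decorator that enforces one fixed rule: every service entry, exit, and raised exception produces a log line, so a failure cannot silently escape the log. The rule names and message format below are illustrative assumptions, not the paper's actual rule set.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("logging-rules")

def logged_service(func):
    """Enforce a fixed logging rule on a service function:
    log entry (SST), normal exit (SEN), and any exception (ERR).
    Tag names are hypothetical, chosen for this sketch."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("SST %s", func.__name__)        # service start
        try:
            result = func(*args, **kwargs)
            log.info("SEN %s", func.__name__)    # service end
            return result
        except Exception:
            log.error("ERR %s", func.__name__)   # service error
            raise
    return wrapper
```

Applying the decorator at design time, rather than leaving log placement to the individual developer, is the kind of systematic placement the approach argues for.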
dependable systems and networks | 2012
Catello Di Martino; Marcello Cinque; Domenico Cotroneo
This paper presents a novel approach to assess time coalescence techniques. These techniques are widely used to reconstruct the failure process of a system and to estimate dependability measures from its event logs. The approach is based on the use of automatically generated logs, accompanied by exact knowledge of the ground truth on the failure process. The assessment is conducted by comparing the presumed failure process, reconstructed via coalescence, with the ground truth. We focus on supercomputer logs, due to the increasing importance of automatic event log analysis for these systems. Experimental results show how the approach makes it possible to compare different time coalescence techniques and to identify their weaknesses with respect to given system settings. In addition, the results revealed an interesting correlation between errors caused by coalescence and errors in the estimation of dependability measures.
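Time coalescence, as the abstract uses the term, groups log events that are close in time into tuples, each presumed to represent a single failure. A minimal sketch of the basic fixed-window variant follows; the 300-second window is an illustrative choice, not a value taken from the paper.

```python
from datetime import datetime, timedelta

def coalesce(events, window_seconds=300):
    """Group time-sorted (timestamp, message) pairs into tuples:
    an event starts a new tuple when it is more than
    `window_seconds` after the previous event."""
    tuples = []
    current = []
    for ts, msg in events:
        if current and (ts - current[-1][0]).total_seconds() > window_seconds:
            tuples.append(current)   # gap exceeded: close the tuple
            current = []
        current.append((ts, msg))
    if current:
        tuples.append(current)
    return tuples
```

The assessment the paper describes would compare the tuples produced by such a procedure against the known ground-truth failure process: a window that is too short splits one failure across tuples, while one that is too long merges independent failures.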
dependable systems and networks | 2010
Marcello Cinque; Domenico Cotroneo; Roberto Natella; Antonio Pecchia
Event logs are the primary source of data to characterize the dependability behavior of a computing system during the operational phase. However, they are inadequate to provide evidence of software faults, which are nowadays among the main causes of system outages. This paper proposes an approach based on software fault injection to assess the effectiveness of logs to keep track of software faults triggered in the field. Injection results are used to provide guidelines to improve the ability of logging mechanisms to report the effects of software faults. The benefits of the approach are shown by means of experimental results on three widely used software systems.
IEEE Transactions on Network and Service Management | 2009
Paolo Bellavista; Marcello Cinque; Domenico Cotroneo; Luca Foschini
Self-adaptive management and quality adaptation of multimedia services are open challenges in the heterogeneous wireless Internet, where different wireless access points potentially enable anywhere-anytime Internet connectivity. One of the most challenging issues is to guarantee streaming continuity with maximum quality, despite possible handoffs at multimedia provisioning time. To enable handoff management to self-adapt to specific application requirements with minimum resource consumption, this paper offers three main contributions. First, it proposes a simple way to specify handoff-related service-level objectives that are focused on quality metrics and tolerable delay. Second, it presents how to automatically derive from these objectives a set of parameters to guide system-level configuration of handoff strategies and dynamic buffer tuning. Third, it describes the design and implementation of a novel handoff management infrastructure for maximizing streaming quality while minimizing resource consumption. Our infrastructure exploits i) experimentally evaluated tuning diagrams for resource management and ii) handoff prediction/awareness. The reported results show the effectiveness of our approach, which achieves the desired quality-delay tradeoff in common Internet deployment environments, even in the presence of vertical handoffs.
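The link between a tolerable-delay objective and dynamic buffer tuning can be illustrated with a back-of-the-envelope sizing rule: buffer enough stream data to keep playback going for the duration of a handoff. The function name, the parameters, and the 20% safety margin are illustrative assumptions, not values from the paper.

```python
def buffer_size_bytes(bitrate_kbps, tolerable_delay_s, safety=1.2):
    """Hypothetical playout-buffer sizing: hold enough data to mask
    a handoff lasting up to `tolerable_delay_s` seconds for a stream
    of `bitrate_kbps` kbit/s, plus a safety margin."""
    return int(bitrate_kbps * 1000 / 8 * tolerable_delay_s * safety)
```

A larger buffer masks longer handoffs but costs memory and startup latency, which is exactly the quality-delay tradeoff the service-level objectives are meant to settle.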
workshop on middleware for pervasive and ad hoc computing | 2005
Paolo Bellavista; Marcello Cinque; Domenico Cotroneo; Luca Foschini
The overwhelming success of mobile devices and wireless communications is stressing the need for the development of mobility-aware services. Device mobility requires services adapting their behavior to sudden context changes and being aware of handoffs, which introduce unpredictable delays and intermittent discontinuities. Heterogeneity of wireless technologies (Wi-Fi, Bluetooth, 3G) complicates the situation, since a different treatment of context-awareness and handoffs is required for each solution. This paper presents a middleware architecture designed to ease mobility-aware service development. The architecture hides technology-specific mechanisms and offers a set of facilities for context awareness and handoff management. The architecture prototype works with Bluetooth and Wi-Fi, which today represent two of the most widespread wireless technologies. In addition, the paper discusses motivations and design details in the challenging context of mobile multimedia streaming applications.
international conference on software engineering | 2015
Antonio Pecchia; Marcello Cinque; Gabriella Carrozza; Domenico Cotroneo
Practitioners widely recognize the importance of event logging for a variety of tasks, such as accounting, system measurement, and troubleshooting. Nevertheless, in spite of the importance of the tasks based on logs collected under real workload conditions, event logging lacks systematic design and implementation practices: the implementation of the logging mechanism relies strongly on human expertise. This paper proposes a measurement study of event logging practices in a critical industrial domain. We assess a software development process at Selex ES, a leading Finmeccanica company in electronic and information solutions for critical systems. Our study combines source code analysis, inspection of around 2.3 million log entries, and direct feedback from the development team to gain process-wide insights into programming practices, logging objectives, and issues impacting log analysis. The findings of our study were extremely valuable for prioritizing event logging reengineering tasks at Selex ES.
dependable systems and networks | 2006
Marcello Cinque; Domenico Cotroneo; Stefano Russo
This work presents a failure data analysis campaign on Bluetooth personal area networks (PANs), conducted on two kinds of heterogeneous testbeds (operating for more than one year). The results characterize the failure distribution and suggest how to improve the dependability of Bluetooth PANs. Specifically, we define the failure model and then identify the most effective recovery actions and masking strategies that can be adopted for each failure. We then integrate the discovered recovery actions and masking strategies into our testbeds, improving availability by 3.64% (up to 36.6%) and reliability by 202% (in terms of mean time to failure).
international symposium on software reliability engineering | 2014
Marcello Cinque; Domenico Cotroneo; Raffaele Della Corte; Antonio Pecchia
The analysis of monitoring data is extremely valuable for critical computer systems. It provides insight into the failure behavior of a given system under real workload conditions, which is crucial to assure service continuity and downtime reduction. This paper proposes an experimental evaluation of different direct monitoring techniques, namely event logs, assertions, and source code instrumentation, that are widely used in the context of critical industrial systems. We inject 12,733 software faults into a real-world air traffic control (ATC) middleware system with the aim of analyzing the ability of these techniques to produce information in case of failure. Experimental results indicate that each technique covers only a limited number of failure manifestations. Moreover, we observe that the quality of the collected data for supporting failure diagnosis tasks varies strongly across the techniques considered in this study.
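Two of the direct monitoring techniques the abstract compares, executable assertions and source code instrumentation, can be contrasted in a few lines. The function and the message formats below are illustrative, not taken from the ATC system under study.

```python
def checked_divide(num, den, log):
    """Divide with two direct monitoring mechanisms attached:
    an executable assertion on the precondition (den != 0) and
    instrumentation log entries recording the outcome."""
    if den == 0:  # executable assertion: detect the error at its source
        log.append("ASSERTION FAILED: zero denominator")
        raise ValueError("zero denominator")
    result = num / den
    log.append(f"divide ok: {num}/{den} = {result}")  # instrumentation
    return result
```

The assertion detects the fault exactly where it is activated, whereas a plain event log only records whatever the failing code happens to report, which is one way the coverage of the techniques can differ under fault injection.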