Salvatore Distefano
Kazan Federal University
Publications
Featured research published by Salvatore Distefano.
Network Computing and Applications | 2009
Vincenzo D. Cunsolo; Salvatore Distefano; Antonio Puliafito; Marco Scarpa
Only commercial Cloud solutions have been implemented so far, offering computing resources and services for rent. Some interesting projects, such as Nimbus, OpenNebula, and Reservoir, work on the Cloud; one of their aims is to provide a Cloud infrastructure able to provide and share resources and services for scientific purposes. The encouraging results of Volunteer computing projects in this context and the flexibility of the Cloud suggested directing our research efforts towards a combined new computing paradigm we named Cloud@Home. On one hand, it can be considered a generalization of the @home philosophy, knocking down the barriers of Volunteer computing and allowing more general services to be shared. On the other hand, Cloud@Home can be considered an enhancement of the grid-utility vision of Cloud computing. In this new paradigm, users’ hosts are no longer passive interfaces to Cloud services; they can interact (for free or for a fee) with other Clouds. In this paper we present the Cloud@Home paradigm, highlighting its contribution to the current state of the art in distributed and Cloud computing. We detail the functional architecture and the core structure implementing such a paradigm, demonstrating how a Cloud@Home infrastructure can actually be built.
Grid Computing | 2013
Antonio Cuomo; Giuseppe Di Modica; Salvatore Distefano; Antonio Puliafito; Massimiliano Rak; Orazio Tomarchio; Salvatore Venticinque; Umberto Villano
The breakthrough of the Cloud comes from its service-oriented perspective, where everything, including the infrastructure, is provided “as a service”. This model is attractive and convenient for both providers and consumers; as a consequence, the Cloud paradigm is quickly growing and spreading widely, also in non-commercial contexts. In such a scenario, we propose to incorporate some elements of volunteer computing into the Cloud paradigm through the Cloud@Home solution, bringing into the mix nodes and devices provided by potentially any owner or administrator, making substantial computational resources available to contributors while maximizing their utilization. This paper presents and discusses the first step towards Cloud@Home: providing quality-of-service and service-level-agreement facilities on top of unreliable, intermittent Cloud providers. Some of the main issues and challenges of Cloud@Home, such as the monitoring, management, and brokering of resources according to service-level requirements, are addressed through the design of a framework core architecture. All the tasks committed to the architecture’s modules and components, as well as the most relevant component interactions, are identified and discussed from both the structural and the behavioural viewpoints. Some encouraging experiments on an early implementation prototype deployed in a real testing environment are also documented in the paper.
IEEE Transactions on Computers | 2013
Dario Bruneo; Salvatore Distefano; Francesco Longo; Antonio Puliafito; Marco Scarpa
Cloud computing is a promising paradigm able to rationalize the use of hardware resources by means of virtualization. Virtualization makes it possible to instantiate one or more virtual machines (VMs) on top of a single physical machine managed by a virtual machine monitor (VMM). Like any other software, a VMM experiences aging and failures. Software rejuvenation is a proactive fault management technique that involves terminating an application, cleaning up its internal state, and restarting it to prevent future failures. In this work, we propose a technique to model and evaluate the VMM aging process and to investigate the optimal rejuvenation policy that maximizes VMM availability under variable workload conditions. Starting from dynamic reliability theory and adopting symbolic algebraic techniques, we investigate and compare existing time-based VMM rejuvenation policies. We also propose a time-based policy that adapts the rejuvenation timer to the VMM workload, improving system availability. The effectiveness of the proposed modeling technique is demonstrated through a numerical example based on a case study taken from the literature.
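To give a rough flavor of the trade-off such a time-based policy navigates, the following sketch computes steady-state availability for a fixed rejuvenation timer via a simple renewal-reward argument. It is an illustration only, not the authors’ symbolic-algebraic model: the Weibull aging-failure distribution and the repair/rejuvenation downtimes are made-up parameters.

```python
import math

def availability(T, shape=2.0, scale=1000.0, d_fail=2.0, d_rej=0.1, n=10000):
    """Steady-state availability with rejuvenation every T hours.

    Aging failures follow a Weibull(shape, scale) law measured from the
    last (re)start; a failure costs d_fail hours of repair, a planned
    rejuvenation costs d_rej hours. All figures are hypothetical.
    """
    S = lambda t: math.exp(-((t / scale) ** shape))  # survival function
    # Expected uptime per cycle: E[min(X, T)] = integral of S over [0, T]
    dt = T / n
    uptime = sum(S((i + 0.5) * dt) for i in range(n)) * dt
    p_fail = 1.0 - S(T)  # probability the cycle ends in a failure
    downtime = p_fail * d_fail + (1.0 - p_fail) * d_rej
    return uptime / (uptime + downtime)

# sweep candidate timers: too-frequent rejuvenation wastes planned downtime,
# too-rare rejuvenation lets aging failures dominate
best_T = max(range(50, 2001, 50), key=availability)
```

A workload-adaptive policy, as proposed in the paper, would effectively re-run this kind of optimization as the aging rate changes with load.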
Network Computing and Applications | 2012
Salvatore Distefano; Giovanni Merlino; Antonio Puliafito
Cloud computing is among the hottest trends in ICT, aiming to provide on-demand computing and storage resources with guarantees on the quality of service. A limit of current Cloud implementations is the absence of mechanisms to effectively manage input from the physical world. Our idea is to move towards a pervasive Cloud, providing facilities and solutions able to interact with the surrounding environment, enabling the development of new, value-added services. In this vision, mobile devices such as PDAs, usually equipped with several sensors and actuators, must also be included in the overall picture. Mobile devices and their owners can decide whether, how, and when to contribute to the Cloud, thus introducing further unknowns. To deal with all such issues, in this paper we propose a solution that paves the way for the Sensing and Actuation as a Service (SAaaS) paradigm, a step towards the creation of a Cloud of sensors and actuators. The paper mainly focuses on the implementation of the underlying infrastructure at the basis of SAaaS. An ad hoc architecture and some preliminary background on this challenging vision are provided and discussed.
IEEE Transactions on Dependable and Secure Computing | 2009
Salvatore Distefano; Antonio Puliafito
Dependability evaluation is an important, often mandatory, step in designing and analyzing (critical) systems. Introducing control and/or computing devices to automate processes increases system complexity, with an impact on overall dependability. This occurs as a consequence of interference, dependencies, and similar effects that cannot be adequately managed through formalisms such as reliability block diagrams (RBDs), fault trees (FTs), and reliability graphs (RGs), since the statistical-independence assumption is not satisfied. In addition, even enhanced notations such as dynamic FTs (DFTs) might not be adequate to represent all the behavioral aspects of dynamic systems. To overcome these problems, we developed a new formalism derived from the RBD: the dynamic RBD (DRBD). The DRBD exploits the concept of dependence as the building block for representing dynamic behaviors, allowing us to compose dependencies and adequately manage the arising conflicts by means of a priority algorithm. In this paper, we explain how the DRBD notation can be used by specifying a practical methodology. Starting from system knowledge, the proposed methodology leads to the overall system reliability evaluation through the full modeling and analysis phases. The technique is applied to an example taken from the literature, consisting of a distributed computing system.
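For context, evaluating a static RBD under the statistical-independence assumption the abstract refers to reduces to simple products over series and parallel blocks. A minimal sketch of that baseline (illustrative only, not the DRBD algorithm; the component reliabilities are made-up numbers):

```python
from functools import reduce

def series(*r):
    # series block: every component must work
    return reduce(lambda a, b: a * b, r, 1.0)

def parallel(*r):
    # parallel block: at least one component must work
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), r, 1.0)

# e.g. a controller in series with a duplicated compute unit
system = series(0.99, parallel(0.9, 0.9))  # = 0.99 * 0.99 = 0.9801
```

Dynamic behaviors such as load sharing or standby redundancy break exactly this factorization, which is what motivates the DRBD extension.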
IEEE Transactions on Software Engineering | 2011
Salvatore Distefano; Marco Scarpa; Antonio Puliafito
In this paper, we present an evaluation methodology to validate the performance of a UML model, representing a software architecture. The proposed approach is based on open and well-known standards: UML for software modeling and the OMG Profile for Schedulability, Performance, and Time Specification for the performance annotations into UML models. Such specifications are collected in an intermediate model, called the Performance Context Model (PCM). The intermediate model is translated into a performance model which is subsequently evaluated. The paper is focused on the mapping from the PCM to the performance domain. More specifically, we adopt Petri nets as the performance domain, specifying a mapping process based on a compositional approach we have entirely implemented in the ArgoPerformance tool. All of the rules to derive a Petri net from a PCM and the performance measures assessable from the former are carefully detailed. To validate the proposed technique, we provide an in-depth analysis of a web application for music streaming.
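To give a flavor of the kind of measure such a derived performance model yields, consider a PCM fragment describing a single server with a bounded request queue: the corresponding stochastic Petri net has K+1 reachable markings and reduces to a birth-death chain (an M/M/1/K queue) that can be solved in closed form. This is an illustrative reduction, not the ArgoPerformance mapping itself; the rates and capacity are hypothetical annotations.

```python
def mm1k_metrics(lam, mu, K):
    """Steady-state metrics of a single-server net with K+1 markings.

    lam: firing rate of the 'arrival' transition, mu: of the 'service'
    transition; marking n = number of tokens (requests) in the queue place.
    """
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]     # unnormalized probabilities
    Z = sum(weights)
    p = [w / Z for w in weights]                   # steady-state distribution
    utilization = 1.0 - p[0]                       # server busy probability
    throughput = mu * utilization                  # = lam * (1 - p[K]) by flow balance
    mean_tokens = sum(n * p[n] for n in range(K + 1))
    return throughput, utilization, mean_tokens
```

With lam=1, mu=2, K=3, throughput comes out below lam because requests arriving at a full queue place are lost, which is precisely the kind of effect the derived Petri net makes measurable.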
Reliability and Maintainability Symposium | 2007
Salvatore Distefano; Antonio Puliafito
Reliability block diagrams (RBDs) and fault trees (FTs) are the most widely used formalisms in system reliability modeling. They implement two different approaches: in a reliability block diagram, the system is represented by components connected according to their function or reliability relationships, while fault trees show which combinations of component failures will result in a system failure. Although RBDs and FTs are commonly used, their modeling capacity is limited when sequential relationships exist among component failures. They do not provide any elements or capabilities to model reliability interactions among components or subsystems, or to represent changes in the system reliability configuration (dynamics), such as load sharing, standby redundancy, interference, dependencies, common-cause failures, and so on. To overcome this lack, Dugan et al. developed the dynamic FT (DFT). DFTs extend static FTs to enable modeling of time-dependent failures by introducing new dynamic gates and elements. Along the same lines, we recently extended the RBD into the dynamic RBD (DRBD) notation. Many similarities link the DFT and DRBD formalisms but, at the same time, one of the aims of the DRBD is to extend the DFT’s dynamic-behavior modeling capabilities. In this paper the comparison between DFT and DRBD is studied in depth, defining a mapping of DFT elements into the DRBD domain and investigating if and when it is possible to invert the translation from DRBD to DFT. These mapping rules are applied to an example drawn from the literature to show their effectiveness.
Reliability Engineering & System Safety | 2009
Salvatore Distefano; Antonio Puliafito
Reliability/availability evaluation is an important, often indispensable, step in designing and analyzing (critical) systems, whose importance is constantly growing. When the complexity of a system is high, dynamic effects can arise or become significant. The system might be affected by dependent, cascading, on-demand and/or common-cause failures, its units could interfere (load sharing, inter/sequence dependency), and so on. It is also of great interest to evaluate redundancy and maintenance policies but, since dynamic behaviors usually do not satisfy the stochastic-independence assumption, notations such as reliability block diagrams (RBDs), fault trees (FTs) or reliability graphs (RGs) become approximate/simplified techniques, unable to capture dynamic, dependent behaviors. To overcome such a problem we developed a new formalism derived from RBDs: the dynamic RBD (DRBD). In this paper we explain how the DRBD notation is able to adequately model, and therefore analyze, dynamic, dependent behaviors and complex systems. Particular emphasis is given to the modeling and analysis phases, from both the theoretical and the practical points of view. Several case studies of dynamic, dependent systems, selected from the literature and related to different application fields, are proposed. In this way we also compare the DRBD approach with other methodologies, demonstrating its effectiveness.
IT Professional | 2012
Salvatore Distefano; Antonio Puliafito
A new Cloud@Home paradigm aims at merging the benefits of cloud computing (service-oriented interfaces, dynamic service provisioning, and guaranteed QoS) with those of volunteer computing (capitalized idle resources and reduced operational costs).
2005 Workshop on Techniques, Methodologies and Tools for Performance Evaluation of Complex Systems (FIRB-PERF'05) | 2005
Salvatore Distefano; Marco Scarpa; Antonio Puliafito
An early integration of performance considerations into the software development process has been recognized in recent years as an effective approach to speeding up the production of high-quality, reliable software. In this paper we propose a possible solution to address software performance engineering that evolves through the following phases: system specification using an augmented UML notation, creation of an intermediate performance context model, and generation of an equivalent stochastic Petri net model whose analytical solution provides the required performance measures. We describe all the steps of the proposed approach and present ArgoPerformance, a tool we have developed that provides a complete graphical environment for software performance evaluation. We present a simple case study and validate our results against those obtained by other authors through simulation.