Andrea Reale
University of Bologna
Publications
Featured research published by Andrea Reale.
IEEE Communications Surveys and Tutorials | 2014
Paolo Bellavista; Antonio Corradi; Andrea Reale
Publish/Subscribe (PUB/SUB) systems have attracted much academic and industrial interest in recent years, with several successful experiences of development and deployment. Notwithstanding this high interest and the relevant research activities accomplished in the field, there are still many open technical challenges calling for additional research efforts. In this paper, we focus on the ability of PUB/SUB infrastructures to offer cost-effective, scalable, and quality-aware data distribution in emerging wide-scale and highly dynamic communication environments, such as those related to the continuous exchange of information between static and mobile nodes in smart-city scenarios. To this purpose, we survey state-of-the-art industrial and academic PUB/SUB solutions, with a strong focus on their support to scalability and quality requirements. We offer a detailed technical analysis of existing mechanisms and techniques for scalable QoS provisioning in PUB/SUB middleware, and we show how different design/implementation details impact the scalability and quality achievable at runtime. At the end of this surveying work, we identify promising guidelines for future research and for PUB/SUB systems extensions to effectively address the technical challenges of scalability and quality.
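The core decoupling property that the survey analyzes can be illustrated with a minimal topic-based sketch (names and structure are invented for illustration, not taken from any surveyed system): publishers and subscribers never reference each other directly, only shared topics.

```python
# Toy in-process topic-based broker; a hypothetical sketch of the
# PUB/SUB decoupling discussed above, not any real middleware's API.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver to every subscriber of the topic; a real middleware
        # would add routing, persistence, and QoS (ordering, reliability,
        # delivery guarantees) at this point.
        for callback in self._subscribers[topic]:
            callback(event)

if __name__ == "__main__":
    broker = Broker()
    received = []
    broker.subscribe("traffic/bologna", received.append)
    broker.publish("traffic/bologna", {"speed_kmh": 42})
    broker.publish("weather/bologna", {"temp_c": 18})  # no subscriber
```

The scalability and quality mechanisms surveyed in the paper all live behind the `publish` call: how events are routed across a distributed broker overlay, and with which delivery guarantees.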
IEEE/ACM International Conference on Utility and Cloud Computing | 2014
Paolo Bellavista; Antonio Corradi; Andrea Reale; Nicola Ticca
Distributed Stream Processing Systems (DSPSs) are attracting increasing industrial and academic interest as flexible tools to implement scalable and cost-effective on-line analytics applications over Big Data streams. Often hosted in private/public cloud deployment environments, DSPSs offer data stream processing services that transparently exploit the distributed computing resources made available to them at runtime. Given the volume of data of interest, possible (hard/soft) real-time processing requirements, and the time-variable characteristics of input data streams, it is very important for DSPSs to use smart and innovative scheduling techniques that allocate computing resources properly and avoid static over-provisioning. In this paper, we originally investigate the suitability of exploiting application-level indications about differentiated priorities of different stream processing tasks to enable application-specific DSPS resource scheduling, e.g., capable of re-shaping processing resources in order to dynamically follow input data peaks of prioritized tasks, with no static over-provisioning. We originally propose a general and simple technique to design and implement priority-based resource scheduling in flow-graph-based DSPSs, by allowing application developers to augment DSPS graphs with priority metadata and by introducing an extensible set of priority schemas to be automatically handled by the extended DSPS. In addition, we show the effectiveness of our approach via its implementation and integration in our Quasit DSPS and through experimental evaluation of this prototype on a real-world stream processing application of Big Data vehicular traffic analysis.
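The idea of augmenting a flow graph with priority metadata can be sketched as follows (all names and the weighting rule are hypothetical, not Quasit's actual API): operators carry an application-level priority, and the scheduler shifts processing slots toward prioritized operators as their input rate peaks.

```python
# Hypothetical sketch of priority-weighted resource scheduling for a
# flow-graph-based DSPS; invented names, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    priority: int       # application-level priority metadata on the graph
    input_rate: float   # observed tuples/s

def allocate_slots(operators, total_slots):
    """Greedy allocation: slots proportional to priority * input_rate.

    A load peak on a prioritized operator pulls resources toward it
    dynamically, instead of relying on static over-provisioning.
    (Rounding may slightly over/under-shoot total_slots; a real
    scheduler would redistribute the remainder.)
    """
    weights = {op.name: max(op.priority, 1) * op.input_rate
               for op in operators}
    total = sum(weights.values()) or 1.0
    return {name: max(1, round(total_slots * w / total))
            for name, w in weights.items()}
```

For example, with two operators at equal input rates but priorities 3 and 1, the high-priority operator receives three times the slots.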
Mobile Wireless Middleware, Operating Systems, and Applications | 2012
Paolo Bellavista; Antonio Corradi; Andrea Reale
Many academic and industrial research activities have recently recognized the relevance of expressive models and effective frameworks for highly scalable data processing, such as MapReduce. This paper presents the novel Quasit programming model and runtime framework for stream processing in datacenters, with its original capabilities of i) allowing developers to choose among a large set of quality policies to associate with their processing tasks in a fine-grained way, and ii) effectively managing processing execution depending on the associated quality indications. The paper describes the Quasit programming model, via the primary design/implementation choices made in the Quasit runtime framework (available for download from the project Web site) to achieve maximum scalability, flexibility, and reusability. The first experiences with our prototype and the reported experimental results show the feasibility of our approach and its good performance in terms of both limited overhead and horizontal scalability.
IEEE International Conference on Green Computing and Communications | 2012
Paolo Bellavista; Antonio Corradi; Andrea Reale
Today's stream processing scenarios are characterized by large volumes of data, e.g., generated by cyber-physical systems in a smart city, on which continuous analysis tasks need to be performed, often with very different optimal trade-offs between achieved QoS and associated resource consumption. Here we present the novel Quasit model and framework offering runtime support to stream processing applications. Differently from existing literature, Quasit originally allows advanced QoS-based configuration, which can be used to finely tune the framework to fit highly different real-world situations. The paper describes the architecture and development of the Quasit prototype by offering interesting insights and lessons learned about the most important design/implementation choices made, such as the actor-based threading model, or the QoS-enabled inter-process communication based on OMG DDS. The reported experimental results, measured over simple real testbeds, show that our Quasit framework implementation can provide a good level of horizontal scalability with limited overhead and good exploitation of dynamically available processing resources.
International Symposium on Computers and Communications | 2014
Roberto Coluccio; Giacomo Ghidini; Andrea Reale; David Levine; Paolo Bellavista; Stephen P. Emmons; Jeffrey O. Smith
In a machine-to-machine (M2M) communications system, the deployed devices relay data from on-board sensors to a back-end application over a wireless network. Since the cellular network provides very good coverage (especially in inhabited areas) and is relatively inexpensive, commercial M2M applications often prefer it to other technologies such as WiFi or satellite links. Unfortunately, having been originally designed with human users in mind, the cellular network provides little support to monitor millions of unattended devices. For this reason, it is extremely important to monitor the underlying signalling traffic to detect misbehaving devices or network problems. In the cellular network used by M2M communications systems, the network elements communicate using the Signalling System #7 (SS7), and a real-life system can generate tens of millions of SS7 messages per hour. This paper reports the results of our practical investigation on the possibility to use distributed stream processing systems (DSPSs) to perform real-time analysis of SS7 traffic in a commercial M2M communications system consisting of hundreds of thousands of devices. Through a thorough experimental evaluation based on the analysis of real-world SS7 traces, we present and compare the implementations of a DSPS-based data analysis application on top of either the well-known Storm DSPS or the Quasit middleware. The results show that, by using DSPS services, we are able to largely meet the real-time processing requirements of our use-case scenario.
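The per-device monitoring task described above boils down to windowed aggregation over a high-rate message stream. The following sketch (invented names; not the paper's actual Storm or Quasit topology) counts signalling messages per device over a sliding window to flag misbehaving devices:

```python
# Hypothetical sketch of the sliding-window device-monitoring idea;
# a real deployment would shard this logic across DSPS workers.
from collections import Counter, deque

class WindowedCounter:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # auto-evicts the oldest
        self.counts = Counter()

    def observe(self, device_id):
        if len(self.window) == self.window.maxlen:
            # The oldest message is about to be evicted by the append.
            self.counts[self.window[0]] -= 1
        self.window.append(device_id)
        self.counts[device_id] += 1

    def misbehaving(self, threshold):
        # Devices whose message count within the window exceeds threshold.
        return {d for d, c in self.counts.items() if c > threshold}
```

At tens of millions of SS7 messages per hour, the point of using a DSPS is precisely to distribute many such counters across workers while keeping end-to-end latency within the real-time requirement.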
International Middleware Conference | 2013
Paolo Bellavista; Antonio Corradi; Spyros Kotoulas; Andrea Reale
A growing number of applications require continuous processing of high-throughput data streams, e.g., financial analysis, network traffic monitoring, or Big Data analytics in smart cities. Stream processing applications typically have explicit quality-of-service requirements; yet, due to the high time-variability of stream characteristics, it is inefficient and sometimes impossible to statically allocate all the resources needed to guarantee application SLAs. In this work, we present DARM, a novel middleware for adaptive replication that trades fault-tolerance for increased capacity during load spikes and provides guaranteed upper-bounds on information loss in case of failures.
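The trade-off that DARM manages can be illustrated with a toy planner (all names and numbers are invented, not DARM's real interface): during a load spike, replica slots are reclaimed as extra processing capacity, but never below the minimum replication that guarantees the agreed upper bound on information loss.

```python
# Hypothetical sketch of trading fault-tolerance for capacity
# under a loss-bound constraint; not DARM's actual algorithm.
def plan_replicas(load, capacity_per_slot, total_slots, min_replicas):
    """Return (processing_slots, replica_slots) for the current load.

    Processing slots are sized to the observed load; the remainder is
    spent on replication, never dropping below the minimum that keeps
    the information-loss bound within the application SLA.
    """
    needed = -(-load // capacity_per_slot)  # ceiling division
    processing = min(needed, total_slots - min_replicas)
    replicas = total_slots - processing
    return processing, replicas
```

Under normal load the spare slots all go to replication; during a spike the planner converges to the configured minimum replication, which is exactly where the guaranteed loss bound binds.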
ICST Transactions on Mobile Communications and Applications | 2013
Paolo Bellavista; Antonio Corradi; Andrea Reale
Crowdsensing is emerging as a powerful paradigm capable of leveraging the collective, though imprecise, monitoring capabilities of common people carrying smartphones or other personal devices, which can effectively become real-time mobile sensors, collecting information about the physical places they live in. This unprecedented amount of information, considered collectively, offers new valuable opportunities to understand more thoroughly the environment in which we live and, more importantly, gives the chance to use this deeper knowledge to act and improve, in a virtuous loop, the environment itself. However, managing this process is a hard technical challenge, spanning several socio-technical issues: here, we focus on the related quality, reliability, and scalability trade-offs by proposing an architecture for crowdsensing platforms that dynamically self-configure and self-adapt depending on application-specific quality requirements. In the context of this general architecture, the paper will specifically focus on the Quasit distributed stream processing middleware, and show how Quasit can be used to process and analyze crowdsensing-generated data flows with differentiated quality requirements in a highly scalable and reliable way.
Handbook on Data Centers | 2015
Paolo Bellavista; Antonio Corradi; Andrea Reale
In this chapter we analyze the state-of-the-art of distributed stream processing systems, with a strong focus on the characteristics that make them more or less suitable to serve the novel processing needs of Smart City scenarios. In particular, we concentrate on the ability to offer differentiated Quality of Service (QoS). A growing number of Smart City applications, in fact, including those in the security, healthcare, or financial areas, require configurable and predictable behavior. For this reason, a key factor for the success of new and original stream processing supports will be their ability to efficiently meet those needs, while still being able to scale to fast growing workloads.
Integration of AI and OR Techniques in Constraint Programming | 2014
Andrea Reale; Paolo Bellavista; Antonio Corradi; Michela Milano
A growing number of applications require continuous processing of high-throughput data streams, e.g., financial analysis, network traffic monitoring, or big data analytics. Performing these analyses by using Distributed Stream Processing Systems (DSPSs) in large clusters is emerging as a promising solution to address the scalability challenges posed by these kinds of scenarios. Yet, the high time-variability of stream characteristics makes it very inefficient to statically allocate the data-center resources needed to guarantee application Service Level Agreements (SLAs) and calls for original, dynamic, and adaptive resource allocation strategies. In this paper we analyze the problem of planning adaptive replication strategies for DSPS applications under the challenging assumption of minimal statistical knowledge of input characteristics. We investigate and evaluate how different CP techniques can be employed, and quantitatively show how different alternatives offer different trade-offs between problem solution time and stream processing runtime cost through experimental results over realistic testbeds.
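The shape of the planning problem can be seen in a toy brute-force version (all names and numbers invented; the paper's CP models would prune this search with constraint propagation instead of enumerating): choose a replication level per operator that maximizes replication, preferring cheaper plans, under a global resource budget.

```python
# Toy exhaustive stand-in for the CP replication-planning models;
# illustrative only, not the paper's formulation.
from itertools import product

def plan(operators, levels, budget):
    """operators: {name: cost_per_replica}; levels: allowed replica counts.

    Returns the feasible assignment maximizing total replication,
    breaking ties toward lower cost, or None if nothing fits the budget.
    """
    names = list(operators)
    best = None
    for assignment in product(levels, repeat=len(names)):
        cost = sum(operators[n] * r for n, r in zip(names, assignment))
        if cost <= budget:
            score = (sum(assignment), -cost)
            if best is None or score > best[0]:
                best = (score, dict(zip(names, assignment)))
    return best[1] if best else None
```

The enumeration grows exponentially with the number of operators, which is exactly why the paper turns to CP techniques and compares their solution-time versus runtime-cost trade-offs.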
International Symposium on Computers and Communications | 2011
Paolo Bellavista; Antonio Corradi; Andrea Reale
There is a clear and widely recognized trend toward a growing and unprecedentedly large amount of user-generated content, which users are willing to share in an easy, cheap, and immediate way. This poses novel hard technical challenges for Peer-to-Peer (P2P) content distribution. We claim that a crucial technical factor for further spreading P2P distribution of multimedia content is the availability of effective solutions to make rich metadata promptly accessible to users. However, state-of-the-art research and industrial practices still only weakly address the problem and, to the best of our knowledge, none of the existing solutions offers an adequate support for metadata distribution in P2P networks. This paper presents the design and implementation of a prototype (called Metis and available for download) for metadata dissemination in P2P overlay networks. Metis proposes several original contributions: it is fully decentralized; it exploits a set of dynamically selectable/configurable epidemic dissemination protocols; it can be easily integrated on top of existing P2P overlays, such as Tribler. The reported experimental results show the feasibility of our approach, which achieves good dissemination coverage and promptness with very limited overhead.
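The epidemic dissemination idea behind Metis can be sketched with a simple push-gossip simulation (invented names and parameters; not any of Metis's actual configurable protocols): each peer that knows a metadata item pushes it to a few random peers per round, so coverage grows without a central coordinator.

```python
# Hypothetical push-gossip round sketch; illustrative only.
import random

def gossip(peers, seed, fanout, rounds, rng=None):
    """Simulate push gossip; returns the set of informed peers."""
    rng = rng or random.Random(7)  # fixed seed for a reproducible demo
    informed = {seed}
    for _ in range(rounds):
        newly = set()
        for _peer in informed:
            # Each informed peer pushes the item to `fanout` random peers
            # (possibly already informed, which is wasted work that real
            # protocols try to limit).
            for target in rng.sample(peers, fanout):
                newly.add(target)
        informed |= newly
    return informed
```

The number of informed peers grows roughly geometrically until saturation, which is why such protocols achieve good coverage and promptness with limited per-peer overhead.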