Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alessandro Margara is active.

Publication


Featured research published by Alessandro Margara.


ACM Computing Surveys | 2012

Processing flows of information: From data stream to complex event processing

Gianpaolo Cugola; Alessandro Margara

A large number of distributed applications require continuous and timely processing of information as it flows from the periphery to the center of the system. Examples include intrusion detection systems, which analyze network traffic in real time to identify possible attacks; environmental monitoring applications, which process raw data coming from sensor networks to identify critical situations; and applications performing online analysis of stock prices to identify trends and forecast future values. Traditional DBMSs, which need to store and index data before processing it, can hardly fulfill the requirements of timeliness coming from such domains. Accordingly, during the last decade, different research communities developed a number of tools, which we collectively call information flow processing (IFP) systems, to support these scenarios. They differ in their system architecture, data model, rule model, and rule language. In this article, we survey these systems to help researchers, who often come from different backgrounds, in understanding how the various approaches they adopt may complement each other. In particular, we propose a general, unifying model to capture the different aspects of an IFP system and use it to provide a complete and precise classification of the systems and mechanisms proposed so far.
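
A minimal sketch of the contrast the survey draws between store-then-query processing in a DBMS and continuous, rule-driven processing of flowing data. The event type, field names, and threshold below are illustrative assumptions, not taken from the paper:

# Each reading is processed as it arrives; nothing is stored or indexed first.
def temperature_readings():
    # Stand-in for data flowing from the periphery of the system (e.g., sensors).
    for value in [21.5, 22.0, 35.2, 23.1, 36.8]:
        yield {"type": "TempReading", "value": value}

def continuous_filter(stream, threshold=30.0):
    # A trivial "rule": emit an alert the moment a reading exceeds the threshold.
    for event in stream:
        if event["type"] == "TempReading" and event["value"] > threshold:
            yield {"type": "HighTemperature", "value": event["value"]}

for alert in continuous_filter(temperature_readings()):
    print(alert)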


Journal of Web Semantics | 2014

Streaming the Web

Alessandro Margara; Jacopo Urbani; Frank van Harmelen; Henri E. Bal

In the last few years a new research area, called stream reasoning, emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work on mainly static data, the Web is, on the other hand, extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at a high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both from the perspective of developing models and theories, and from the perspective of designing and implementing systems and tools. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning.


Journal of Systems and Software | 2012

Complex event processing with T-REX

Gianpaolo Cugola; Alessandro Margara

Several application domains involve detecting complex situations and reacting to them. This calls for a Complex Event Processing (CEP) middleware specifically designed to timely process large amounts of event notifications as they flow from the periphery to the center of the system, in order to identify the composite events relevant for the application. To answer this need we designed T-Rex, a new CEP middleware that combines expressiveness and efficiency. On the one hand, it adopts a language (TESLA) explicitly conceived to easily and naturally describe composite events. On the other hand, it provides an efficient event detection algorithm based on automata to interpret TESLA rules. Our evaluation shows that the T-Rex engine can process a large number of complex rules with reduced overhead, even in the presence of challenging workloads.
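
An illustrative sketch, not T-Rex code, of detecting a composite event with a small automaton, in the spirit of the automata-based interpretation of TESLA rules. The rule encoded here is hypothetical: "report Fire if Smoke is followed by HighTemperature within 5 time units":

class SequenceAutomaton:
    def __init__(self, first, second, window):
        self.first, self.second, self.window = first, second, window
        self.pending = []  # timestamps of partial matches waiting for the second event

    def process(self, event_type, timestamp):
        # Drop partial matches that fell outside the time window.
        self.pending = [t for t in self.pending if timestamp - t <= self.window]
        if event_type == self.first:
            self.pending.append(timestamp)
            return None
        if event_type == self.second and self.pending:
            start = self.pending.pop(0)
            return ("Fire", start, timestamp)  # composite event detected
        return None

automaton = SequenceAutomaton("Smoke", "HighTemperature", window=5)
for ev, ts in [("Smoke", 1), ("HighTemperature", 3), ("Smoke", 10)]:
    match = automaton.process(ev, ts)
    if match:
        print(match)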


International Semantic Web Conference | 2013

DynamiTE: Parallel Materialization of Dynamic RDF Data

Jacopo Urbani; Alessandro Margara; Ceriel J. H. Jacobs; Frank van Harmelen; Henri E. Bal

One of the main advantages of using semantically annotated data is that machines can reason on it, deriving implicit knowledge from explicit information. In this context, materializing every possible implicit derivation from a given input can be computationally expensive, especially when considering large data volumes. Most of the solutions that address this problem rely on the assumption that the information is static, i.e., that it does not change, or changes very infrequently. However, the Web is extremely dynamic: online newspapers, blogs, social networks, etc., are frequently changed so that outdated information is removed and replaced with fresh data. This calls for a materialization that is not only scalable, but also reactive to changes. In this paper, we consider the problem of incremental materialization, that is, how to update the materialized derivations when new data is added or removed. To this end, we consider the ρdf fragment of RDFS [12], and present a parallel system that implements a number of algorithms to quickly recalculate the derivation. In case new data is added, our system uses a parallel version of the well-known semi-naive evaluation of Datalog. In case of removals, we have implemented two algorithms, one based on previous theoretical work, and another one that is more efficient since it does not require a complete scan of the input. We have evaluated the performance using a prototype system called DynamiTE, which organizes the knowledge bases with a number of indices to facilitate the query process and exploits parallelism to improve the performance. The results show that our methods are indeed capable of recalculating the derivation in a short time, opening the door to reasoning on much more dynamic data than is currently possible.
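
A sketch under simplifying assumptions (a single transitivity rule instead of the RDFS rule set, and no parallelism): semi-naive evaluation joins only the newly added facts with the existing materialization, rather than re-deriving everything from scratch:

def seminaive_insert(materialized, new_facts):
    # materialized: set of (x, y) pairs for a transitive relation, already closed.
    # new_facts: freshly added pairs; returns the updated, closed materialization.
    delta = set(new_facts) - materialized
    materialized = materialized | delta
    while delta:
        # Join only the delta with the full relation (the semi-naive trick).
        derived = {(x, z) for (x, y) in delta for (y2, z) in materialized if y == y2}
        derived |= {(x, z) for (x, y) in materialized for (y2, z) in delta if y == y2}
        delta = derived - materialized
        materialized |= delta
    return materialized

closure = seminaive_insert(set(), {("a", "b"), ("b", "c")})
closure = seminaive_insert(closure, {("c", "d")})  # incremental update on new data
print(sorted(closure))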


International Symposium on Computers and Communications | 2009

Context-aware publish-subscribe: Model, implementation, and evaluation

Gianpaolo Cugola; Alessandro Margara; Matteo Migliavacca

Complex communication patterns often need to take into account the situation in which the information to be communicated is produced or consumed. Publish-subscribe, and particularly its content-based incarnation, is often used to convey this information by encoding the “context” of the publisher into the published messages. In this paper we claim that this approach is limiting and inefficient and propose a context-aware publish-subscribe model of communication as a better alternative. We describe a protocol that implements such a model in a distributed publish-subscribe middleware, and analyze how it performs w.r.t. traditional content-based routing.
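
A minimal sketch (not the protocol from the paper) of the idea of keeping context as a first-class filter instead of flattening it into the message payload. The attribute names and the within_km helper are illustrative assumptions:

def within_km(ctx_a, ctx_b, km):
    # Toy distance check on 1-D positions; a real system would use geo-coordinates.
    return abs(ctx_a["position"] - ctx_b["position"]) <= km

def matches(subscription, message, publisher_context, subscriber_context):
    # Content-based part: predicates over the message payload.
    content_ok = all(pred(message) for pred in subscription["content_filters"])
    # Context-aware part: a predicate over publisher and subscriber contexts,
    # evaluated separately from the payload.
    context_ok = subscription["context_filter"](publisher_context, subscriber_context)
    return content_ok and context_ok

sub = {
    "content_filters": [lambda m: m["topic"] == "traffic", lambda m: m["severity"] >= 3],
    "context_filter": lambda pub, me: within_km(pub, me, km=5),
}
msg = {"topic": "traffic", "severity": 4}
print(matches(sub, msg, publisher_context={"position": 12}, subscriber_context={"position": 10}))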


Journal of Parallel and Distributed Computing | 2012

Low latency complex event processing on parallel hardware

Gianpaolo Cugola; Alessandro Margara

Most complex information systems are event driven: each part of the system reacts to the events happening in the other parts, potentially generating new events. Complex event processing (CEP) engines, which are in charge of interpreting, filtering, and combining primitive events to identify higher-level composite events according to a set of rules, are the new breed of message-oriented middleware being proposed today to better support event-driven interactions. A key requirement for CEP engines is low latency processing, even in the presence of complex rules and large numbers of incoming events. In this paper, we investigate how parallel hardware may speed up CEP processing. In particular, we consider the most common operators offered by existing rule languages (i.e., sequences, parameters, and aggregates); we consider different algorithms to process rules built using such operators; and we discuss how they can be implemented on a multi-core CPU and on CUDA, a widespread architecture for general-purpose programming on GPUs. Our analysis shows that the use of GPUs can bring impressive speedups in the presence of complex rules. On the other hand, it shows that multi-core CPUs scale better with the number of rules. Our conclusion is that an advanced CEP engine should leverage a multi-core CPU for processing the simplest rules, using the GPU as a coprocessor devoted to process the most complex ones.
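
An illustrative sketch, not the engine from the paper, of one way to use a multi-core CPU for CEP: independent rules are evaluated in parallel over the same batch of events, which matches the observation that CPUs scale well with the number of rules. The rule definitions and event shape are assumptions made for the example:

from concurrent.futures import ProcessPoolExecutor

EVENTS = [{"type": "temp", "value": v} for v in range(100)]

def rule_high_temp(events):
    # Aggregate operator: count readings above a threshold.
    return sum(1 for e in events if e["value"] > 90)

def rule_average(events):
    # Aggregate operator: average value over the window.
    values = [e["value"] for e in events]
    return sum(values) / len(values)

RULES = {"high_temp_count": rule_high_temp, "average": rule_average}

def apply_rule(name):
    return name, RULES[name](EVENTS)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for name, result in pool.map(apply_rule, RULES):
            print(name, result)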


Distributed Event-Based Systems | 2014

Learning from the past: automated rule generation for complex event processing

Alessandro Margara; Gianpaolo Cugola; Giordano Tamburrelli

Complex Event Processing (CEP) systems aim at processing large flows of events to discover situations of interest. In CEP, the processing takes place according to user-defined rules, which specify the (causal) relations between the observed events and the phenomena to be detected. We claim that the complexity of writing such rules is a limiting factor for the diffusion of CEP. In this paper, we tackle this problem by introducing iCEP, a novel framework that learns, from historical traces, the hidden causality between the received events and the situations to detect, and uses this information to automatically generate CEP rules. The paper introduces three main contributions. It provides a precise definition for the problem of automated CEP rule generation. It discusses a general approach to this research challenge that builds on three fundamental pillars: decomposition into subproblems, modularity of solutions, and ad-hoc learning algorithms. It provides a concrete implementation of this approach, the iCEP framework, and evaluates its precision in a broad range of situations, using both synthetic benchmarks and real traces from a traffic monitoring scenario.
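
A simplified sketch of the general idea behind automated rule generation, not the actual iCEP algorithms: from historical windows labeled as positive (the situation occurred) or negative, infer which event types must all appear for the composite event to be detected. The trace contents are made up for the example:

def learn_required_events(positive_windows, negative_windows):
    # Candidate constraint: event types seen in every positive window.
    required = set.intersection(*[set(w) for w in positive_windows])
    # Keep only types that are absent from at least one negative window,
    # i.e., types whose presence actually helps discriminate.
    return {t for t in required if not all(t in w for w in negative_windows)}

positive = [["Smoke", "HighTemp", "Wind"], ["Smoke", "HighTemp"], ["HighTemp", "Smoke", "Rain"]]
negative = [["Wind", "Rain"], ["Smoke", "Rain"]]

rule = learn_required_events(positive, negative)
print("Fire if all of these occur in the window:", sorted(rule))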


Computing | 2015

Introducing uncertainty in complex event processing: model, implementation, and validation

Gianpaolo Cugola; Alessandro Margara; Matteo Matteucci; Giordano Tamburrelli

Several application domains involve detecting complex situations and reacting to them. This asks for a Complex Event Processing (CEP) engine specifically designed to timely process low level event notifications to identify higher level composite events according to a set of user-defined rules. Several CEP engines and accompanying rule languages have been proposed. Their primary focus on performance often led to an oversimplified modeling of the external world where events happen, which is not suited to satisfy the demands of real-life applications. In particular, they are unable to consider, model, and propagate the uncertainty that exists in most scenarios. Moving from this premise, we present CEP2U (Complex Event Processing under Uncertainty), a novel model for dealing with uncertainty in CEP. We apply CEP2U to an existing CEP language, TESLA, showing how it seamlessly integrates with modern rule languages by supporting all the operators they commonly offer. Moreover, we implement CEP2U on top of the T-Rex CEP engine and perform a detailed study of its performance, measuring a limited overhead that demonstrates its practical applicability. The discussion presented in this paper, together with the experiments we conducted, shows how CEP2U provides a valuable combination of expressiveness, efficiency, and ease of use.
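
A minimal sketch of uncertainty propagation in CEP, an illustration rather than the CEP2U model: each primitive event carries a probability of having actually occurred, and (under an independence assumption) a composite event is assigned the product of those probabilities and reported only above a confidence threshold. Event names and values are assumptions:

def composite_confidence(primitive_events):
    # Combine per-event occurrence probabilities, assuming independence.
    confidence = 1.0
    for event in primitive_events:
        confidence *= event["probability"]
    return confidence

detected = [
    {"type": "Smoke", "probability": 0.9},     # noisy sensor reading
    {"type": "HighTemp", "probability": 0.8},
]

confidence = composite_confidence(detected)
if confidence >= 0.5:
    print("Fire detected with confidence", confidence)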


Adaptive and Reflective Middleware | 2009

RACED: an adaptive middleware for complex event detection

Gianpaolo Cugola; Alessandro Margara

While several event notification systems are built around a publish-subscribe communication infrastructure, the latter only supports detection of simple events. Complex events, which involve several related events, cannot be detected. To overcome this limitation, we designed RACED, an adaptive middleware that extends the content-based publish-subscribe paradigm to provide a complex event detection service for large scale scenarios. In this paper we describe its main aspects: the event definition language; the protocol enabling efficient and distributed detection of complex events through a network of service brokers; and the mechanism that enables RACED to dynamically adapt to network traffic. A preliminary evaluation shows the benefits of RACED w.r.t. more traditional publish-subscribe infrastructures.
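
A rough sketch, not the RACED protocol, of the general idea of detecting a complex event through a network of brokers: each broker filters the primitive events it receives locally and forwards only relevant notifications toward the broker hosting the composite-event subscription, instead of flooding every event everywhere. The broker topology and the composite pattern are hypothetical:

class Broker:
    def __init__(self, name, parent=None, interesting_types=()):
        self.name, self.parent = name, parent
        self.interesting_types = set(interesting_types)
        self.seen = set()

    def publish(self, event_type):
        if event_type not in self.interesting_types:
            return  # filtered locally, never forwarded
        if self.parent is not None:
            self.parent.publish(event_type)  # forward toward the detecting broker
        else:
            self.seen.add(event_type)
            if {"Smoke", "HighTemp"} <= self.seen:  # hypothetical complex event
                print(self.name, "detected complex event: Fire")

root = Broker("root", interesting_types={"Smoke", "HighTemp"})
edge = Broker("edge-1", parent=root, interesting_types={"Smoke", "HighTemp"})
edge.publish("Rain")      # dropped at the edge broker
edge.publish("Smoke")
edge.publish("HighTemp")  # completes the pattern at the root broker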


Distributed Event-Based Systems | 2014

We have a DREAM: distributed reactive programming with consistency guarantees

Alessandro Margara; Guido Salvaneschi

The reactive programming paradigm has been proposed to simplify the development of reactive systems. It relies on programming primitives to express dependencies between data items and on runtime/middleware support for automated propagation of changes. Although this paradigm is receiving increasing attention, defining the precise semantics and the consistency guarantees for reactive programming in distributed environments is an open research problem. This paper addresses this problem by studying the consistency guarantees for the propagation of changes in a distributed reactive system. In particular, it introduces three propagation semantics, namely causal, glitch-free, and atomic, providing different trade-offs between costs and guarantees. Furthermore, it describes how these semantics are concretely implemented in a Distributed REActive Middleware (DREAM), which exploits a distributed event-based dispatching system to propagate changes. We compare the performance of DREAM in a wide range of scenarios. This allows us to study the overhead introduced by the different semantics in terms of network traffic and propagation delay and to assess the efficiency of DREAM in supporting distributed reactive systems.
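
A toy sketch of glitch-free change propagation, one of the three semantics the paper discusses; the dependency graph, formulas, and values are assumptions for the example, not DREAM's API. Recomputing nodes in topological order guarantees that a node is updated only after all of its inputs are up to date, so no observer sees an inconsistent intermediate state:

from graphlib import TopologicalSorter

# Reactive dependency graph: node -> the nodes it depends on.
dependencies = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
formulas = {"b": lambda v: v["a"] + 1, "c": lambda v: v["a"] * 2, "d": lambda v: v["b"] + v["c"]}
values = {"a": 0}

def propagate(new_a):
    values["a"] = new_a
    # Visit dependencies before dependents, so each node reads fresh inputs.
    for node in TopologicalSorter(dependencies).static_order():
        if node in formulas:
            values[node] = formulas[node](values)
    return dict(values)

print(propagate(3))  # every observer of d sees b and c already updated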

Collaboration


Dive into Alessandro Margara's collaborations.

Top Co-Authors

Sebastian Frischbier

Technische Universität Darmstadt

Guido Salvaneschi

Technische Universität Darmstadt

Tobias Freudenreich

Technische Universität Darmstadt

Henri E. Bal

VU University Amsterdam
