Daniele Dell’Aglio
University of Zurich
Publications
Featured research published by Daniele Dell’Aglio.
international semantic web conference | 2016
Andrea Mauri; Jean-Paul Calbimonte; Daniele Dell’Aglio; Marco Balduini; Marco Brambilla; Emanuele Della Valle; Karl Aberer
Processing data streams is increasingly gaining momentum, given the need to process these flows of information in real-time and at Web scale. In this context, RDF Stream Processing (RSP) and Stream Reasoning (SR) have emerged as solutions to combine semantic technologies with stream and event processing techniques. Research in these areas has proposed an ecosystem of solutions to query, reason and perform real-time processing over heterogeneous and distributed data streams on the Web. However, so far one basic building block has been missing: a mechanism to disseminate and exchange RDF streams on the Web. In this work we close this gap, proposing TripleWave, a reusable and generic tool that enables the publication of RDF streams on the Web. The features of TripleWave were selected based on requirements of real use-cases, and support a diverse set of scenarios, independent of any specific RSP implementation. TripleWave can be fed with existing Web streams (e.g. Twitter and Wikipedia streams) or time-annotated RDF datasets (e.g. the Linked Sensor Data dataset). It can be invoked through both pull- and push-based mechanisms, thus enabling RSP engines to automatically register and receive data from TripleWave.
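The core idea of publishing a Web stream as an RDF stream can be illustrated with a small sketch. The mapping below turns a Twitter-like JSON event into time-annotated RDF triples; the vocabulary URIs and the function name are illustrative placeholders, not the mappings or API that TripleWave actually ships with.

```python
import json
from datetime import datetime, timezone

def event_to_rdf(event: dict) -> list:
    """Map a Twitter-like JSON event to (subject, predicate, object) triples.

    Hypothetical mapping for illustration only: the example.org URIs
    stand in for whatever vocabulary a real deployment would use.
    """
    subj = f"http://example.org/tweet/{event['id']}"
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat()
    return [
        (subj, "http://example.org/vocab#author", event["user"]),
        (subj, "http://example.org/vocab#text", event["text"]),
        # Time annotation: each stream item carries its generation time,
        # which is what lets RSP engines apply window operators downstream.
        (subj, "http://purl.org/dc/terms/created", ts),
    ]

triples = event_to_rdf({"id": 42, "user": "alice", "text": "hi", "ts": 0})
```

A pull-based consumer would fetch batches of such triples over HTTP, while a push-based one would receive them as they are produced.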
discovery science | 2017
Daniele Dell’Aglio; Emanuele Della Valle; Frank van Harmelen; Abraham Bernstein; Tobias Kuhn
Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from the Internet of Things to Social Media analytics. This year stream reasoning turns ten, and in this article we analyse its growth. In the first part, we trace the main results obtained so far, by presenting the most prominent studies. We start with an overview of the most relevant studies developed in the context of the Semantic Web, and then we extend the analysis to include contributions from adjacent areas, such as databases and artificial intelligence. Looking at the past is useful to prepare for the future: in the second part, we present a set of open challenges and issues that stream reasoning will face in the near future.
Data Management in Pervasive Systems | 2015
Daniele Dell’Aglio; Marco Balduini; Emanuele Della Valle
More and more applications require real-time fine-grained query answering on massive, heterogeneous, noisy and incomplete data streams. Systems capable of scalable stream processing do exist: specialised data stream management systems (DSMSs) and complex event processing (CEP) engines were largely investigated in the 2000s, and they can provide reactive fine-grained information access even in the presence of noisy data. What they lack is the ability to master heterogeneity and incompleteness. In this chapter, we show how to apply semantic interoperability principles to data streams. In particular, we describe recently developed methods that use extensions of Semantic Web technologies (i.e. RDF, SPARQL and OWL) to continuously answer fine-grained queries on heterogeneous and incomplete data streams in a scalable manner. To make the chapter easier to follow, examples are provided in the context of sensor networks and social media analytics.
extended semantic web conference | 2011
Irene Celino; Daniele Dell’Aglio
Simulation Learning is a common practice for conducting near-real, immersive and engaging training sessions. AI Planning and Scheduling systems are used to automatically create and supervise learning sessions; to this end, they need to manage a large amount of knowledge about the simulated situation, the learning objectives, the participants’ behaviour, etc.
international semantic web conference | 2016
Shen Gao; Daniele Dell’Aglio; Soheila Dehghanzadeh; Abraham Bernstein; Emanuele Della Valle; Alessandra Mileo
Data stream applications are becoming increasingly popular on the web. In these applications, one query pattern is especially prominent: a join between a continuous data stream and some background data (BGD). Oftentimes, the target BGD is large, maintained externally, changing slowly, and costly to query (both in terms of time and money). Hence, practical applications usually maintain a local (cached) view of the relevant BGD. Given that these caches are not updated as often as the original BGD, they should be refreshed under realistic budget constraints (in terms of latency, computation time, and possibly financial cost) to avoid stale data leading to wrong answers. This paper proposes to model the join between streams and the BGD as a bipartite graph. By exploiting the graph structure, we keep the quality of results good enough without refreshing the entire cache for each evaluation. We also introduce two extensions to this method: first, we consider a continuous join between recent portions of a data stream and some BGD to focus on the updates that have the longest effect. Second, we consider the future impact of a query on the BGD, proposing to delay some updates to provide fresher answers in the future. By extending an existing stream processor with the proposed policies, we empirically show that we can improve result freshness by 93% over baseline algorithms such as Random Selection or Least Recently Updated.
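The bipartite-graph intuition can be sketched in a few lines: stream elements on one side, cached BGD entries on the other, with an edge for each observed join. Entries with higher degree affect more results, so refreshing them first buys the most freshness per unit of budget. This is a simplified rendering of the idea; the paper's actual policies also account for staleness and the future impact of updates.

```python
from collections import Counter

def plan_refresh(join_pairs, budget):
    """Pick which cached BGD entries to refresh under a budget.

    `join_pairs` holds (stream_key, bgd_key) pairs observed in the
    current evaluation window; the join is viewed as a bipartite graph
    and an entry's degree is the number of stream elements it joins with.
    Toy sketch only, not the refresh policies from the paper.
    """
    degree = Counter(bgd for _, bgd in join_pairs)  # degree in the bipartite graph
    # Refresh the highest-degree BGD entries first, up to the budget.
    return [bgd for bgd, _ in degree.most_common(budget)]

# b1 joins with two stream elements, b2 with one; with budget 1, refresh b1.
plan = plan_refresh([("s1", "b1"), ("s2", "b1"), ("s3", "b2")], budget=1)
```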
international conference on web engineering | 2016
Shima Zahmatkesh; Emanuele Della Valle; Daniele Dell’Aglio
We are witnessing a growing interest in Web applications that (i) need to continuously combine highly dynamic data streams with background data and (ii) have reactivity as a key performance indicator. The Semantic Web community has shown that RDF Stream Processing (RSP) is an adequate framework to develop this type of application.
international semantic web conference | 2018
Matthias R. Baumgartner; Wen Zhang; Bibek Paudel; Daniele Dell’Aglio; Huajun Chen; Abraham Bernstein
Knowledge Bases (KBs) and textual documents contain rich and complementary information about real-world objects, as well as relations among them. While text documents describe entities in freeform, KBs organize such information in a structured way. This makes the two representation forms hard to compare and integrate, limiting the possibility of using them jointly to improve predictive and analytical tasks. In this article, we study this problem and propose KADE, a solution based on regularized multi-task learning of KB and document embeddings. KADE can potentially incorporate any KB and document embedding learning method. Our experiments on multiple datasets and methods show that KADE effectively aligns document and entity embeddings, while maintaining the characteristics of the embedding models.
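The regularized multi-task idea can be illustrated with a toy coupling term: each embedding model keeps its own objective, and an extra penalty pulls the KB vector and the document vector of the same real-world entity together. The weighting and exact form below are illustrative assumptions, not the precise loss from the KADE paper.

```python
import numpy as np

def alignment_penalty(kb_emb, doc_emb, shared, lam=0.1):
    """Squared-distance coupling between two embedding spaces.

    `kb_emb` and `doc_emb` map entity names to vectors learned by two
    separate models; `shared` lists entities present in both. The term
    is added to the sum of the two models' own losses, so each space
    keeps its structure while shared entities are drawn together.
    """
    return lam * sum(
        float(np.sum((kb_emb[e] - doc_emb[e]) ** 2)) for e in shared
    )

penalty = alignment_penalty(
    {"rome": np.array([1.0, 0.0])},
    {"rome": np.array([0.0, 0.0])},
    shared=["rome"],
)
```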
international conference on web engineering | 2018
Shen Gao; Daniele Dell’Aglio; Jeff Z. Pan; Abraham Bernstein
Dealing with noisy data is one of the big issues in stream processing. While noise has been widely studied in settings where streams have simple schemas, e.g. time series, few solutions focus on streams characterized by complex data structures. This paper studies how to check consistency over large amounts of complex streams. Our proposed methods exploit reasoning to assess whether portions of the streams are compliant with a reference conceptual model. To achieve scalability, our methods run on state-of-the-art distributed stream processing platforms, e.g. Apache Storm or Twitter Heron. Our first method computes the closure of Negative Inclusions (NIs) for DL-Lite ontologies and registers the NIs as queries. The second method compiles the ontology into a processing pipeline to evenly distribute the workload. Experiments compare the two methods and show that the second one improves the throughput by up to 139% with the LUBM ontology and 330% with the NPD ontology.
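A negative inclusion states that two classes are disjoint, so a stream element asserted to belong to both is inconsistent with the conceptual model. The sketch below checks a batch of typed stream elements against a toy set of disjointness constraints; it is a stand-in for the first method, where the closure of DL-Lite NIs is registered as continuous queries, and it ignores roles and distribution entirely.

```python
def violates_nis(type_assertions, nis):
    """Flag stream individuals that violate negative inclusions.

    `type_assertions` maps each individual to the set of classes
    asserted for it in the current stream portion; `nis` is a set of
    frozensets {A, B}, each meaning A and B are disjoint. Simplified
    sketch: real DL-Lite NIs also involve role domains and ranges.
    """
    violations = []
    for individual, classes in type_assertions.items():
        for ni in nis:
            if ni <= classes:  # both disjoint classes asserted for this individual
                violations.append((individual, tuple(sorted(ni))))
    return violations

v = violates_nis(
    {"x1": {"Sensor", "Person"}, "x2": {"Sensor"}},
    {frozenset({"Sensor", "Person"})},
)
```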
european semantic web conference | 2018
Alessandro Margara; Gianpaolo Cugola; Dario Collavini; Daniele Dell’Aglio
Many ICT applications need to make sense of large volumes of streaming data to detect situations of interest and enable timely reactions. Stream Reasoning (SR) aims to combine the performance of stream/event processing and the reasoning expressiveness of knowledge representation systems by adopting Semantic Web standards to encode streaming elements. We argue that the mainstream SR model is not flexible enough to properly express the temporal relations common in many applications. We show that the model can miss relevant information and lead to inconsistent derivations. Moving from these premises, we introduce a novel SR model that provides expressive ontological and temporal reasoning by neatly decoupling their scope to avoid losses and inconsistencies. We implement the model in the DOTR system that defines ontological reasoning using Datalog rules and temporal reasoning using a Complex Event Processing language that builds on metric temporal logic. We demonstrate the expressiveness of our model through examples and benchmarks, and we show that DOTR outperforms state-of-the-art SR tools, processing data with millisecond latency.
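The decoupling described above can be caricatured in two independent steps: an ontological step that enriches each event with derived types, and a temporal step that matches a sequence pattern over the enriched events. The rule format, event shape, and pattern below are illustrative assumptions; DOTR itself uses Datalog for the first step and a metric-temporal-logic-based CEP language for the second.

```python
def enrich(event, rules):
    """Ontological step: close an event's types under subclass rules.

    `rules` is a list of (subclass, superclass) pairs, a toy stand-in
    for Datalog-style ontological reasoning over a single event.
    """
    types = set(event["types"])
    changed = True
    while changed:
        changed = False
        for sub, sup in rules:
            if sub in types and sup not in types:
                types.add(sup)
                changed = True
    return {**event, "types": types}

def followed_by(events, first, then, within):
    """Temporal step: a `first`-typed event precedes a `then`-typed
    event within `within` time units. Toy stand-in for the CEP layer."""
    starts = [e["t"] for e in events if first in e["types"]]
    return any(
        then in e["types"] and any(0 < e["t"] - s <= within for s in starts)
        for e in events
    )

# A Fire event implies an Alarm; an Evacuation follows within 5 time units.
rules = [("Fire", "Alarm")]
evts = [enrich({"t": 0, "types": ["Fire"]}, rules),
        enrich({"t": 3, "types": ["Evacuation"]}, rules)]
hit = followed_by(evts, "Alarm", "Evacuation", within=5)
```

Keeping the two steps separate is what lets each layer use the semantics best suited to it, rather than forcing temporal relations into windowed RDF.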
international semantic web conference | 2017
Matt Dennis; Kees van Deemter; Daniele Dell’Aglio; Jeff Z. Pan
This paper explores whether Authoring Tests derived from Competency Questions accurately represent the expectations of ontology authors. In earlier work we proposed that an ontology authoring interface can be improved by allowing the interface to test whether a given Competency Question (CQ) can be answered by the ontology at a given stage of its construction, an approach known as CQ-driven Ontology Authoring (CQOA). The experiments presented in this paper suggest that CQOA’s understanding of CQs matches users’ understanding quite well, especially for inexperienced ontology authors.