Carlo Allocca
Open University
Publications
Featured research published by Carlo Allocca.
international joint conference on knowledge discovery, knowledge engineering and knowledge management | 2009
Carlo Allocca; Mathieu d’Aquin; Enrico Motta
In the context of Semantic Web search engines, it is becoming crucial to study the relations between ontologies in order to improve the ontology selection task. In this paper, we describe DOOR, the Descriptive Ontology of Ontology Relations, which is used to represent, manipulate, and reason upon relations between ontologies in large ontology repositories. DOOR is a first attempt at describing and formalizing ontology relations. It does not pretend to be a universal standard; rather, it is intended as a flexible, easily modifiable structure for modelling ontology relations in the context of ontology repositories. Here, we provide a detailed description of the methodology used to design the DOOR ontology, as well as an overview of its content. We also describe how DOOR is used in a complete framework (called KANNEL) for detecting and managing semantic relations between ontologies in large ontology repositories. Applied to a large collection of automatically crawled ontologies, DOOR and KANNEL provide a starting point for analyzing the underlying structure of the network of ontologies that is the Semantic Web.
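To make the idea concrete, the sketch below shows how a repository might assert and query relations between ontologies in the spirit of DOOR, using rdflib. The namespace and relation names (door:relatedTo, door:priorVersionOf) are illustrative assumptions, not the actual DOOR vocabulary.

```python
# A minimal sketch of DOOR-style ontology relations; names are hypothetical.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS, OWL

DOOR = Namespace("http://example.org/door#")  # hypothetical namespace

g = Graph()
g.bind("door", DOOR)

# A small, hypothetical hierarchy of ontology relations:
# priorVersionOf is a special case of the more general relatedTo.
g.add((DOOR.relatedTo, RDF.type, OWL.ObjectProperty))
g.add((DOOR.priorVersionOf, RDF.type, OWL.ObjectProperty))
g.add((DOOR.priorVersionOf, RDFS.subPropertyOf, DOOR.relatedTo))

onto_a = URIRef("http://example.org/ontologies/wine/1.0")
onto_b = URIRef("http://example.org/ontologies/wine/2.0")
g.add((onto_a, DOOR.priorVersionOf, onto_b))

# RDFS reasoning over subPropertyOf would let a repository answer the broader
# question "which ontologies are related?" from the specific assertion above.
for s, o in g.subject_objects(DOOR.priorVersionOf):
    print(f"{s} is a prior version of {o}")
```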
extended semantic web conference | 2011
Carlo Allocca
When different versions of an ontology are published online, the links between them are often lost, as the standard mechanisms for exposing these links (such as owl:versionInfo and owl:priorVersion) are rarely used. This creates problems in scenarios where people or applications must make use of large-scale, heterogeneous ontology collections that implicitly contain multiple versions of ontologies. In this paper, we propose a method to automatically detect versioning links between ontologies that are available online through a Semantic Web search engine. Our approach is based on two main steps. The first step selects candidate pairs of ontologies by using versioning information expressed in their identifiers. In the second step, these candidate pairs are characterized through a set of features, including similarity measures, and classified using machine learning techniques to distinguish the pairs that represent versions from those that do not. We discuss the features used, the methodology employed to train the classifiers, and the precision obtained when applying this approach to the collection of ontologies of the Watson Semantic Web search engine.
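A minimal sketch of the two-step pipeline described above follows: step 1 selects candidate pairs whose identifiers differ only in a version token, and step 2 classifies candidates from similarity features. The specific features and the choice of classifier here are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical two-step version-link detection: identifier filter + classifier.
import re
from sklearn.ensemble import RandomForestClassifier

VERSION = re.compile(r"\d+(?:\.\d+)*")

def candidate_pair(uri_a: str, uri_b: str) -> bool:
    """True if the two identifiers are identical up to version tokens."""
    return VERSION.sub("#", uri_a) == VERSION.sub("#", uri_b) and uri_a != uri_b

def features(terms_a: set, terms_b: set) -> list:
    """Toy similarity features over the sets of term names in each ontology."""
    jaccard = len(terms_a & terms_b) / max(len(terms_a | terms_b), 1)
    size_ratio = min(len(terms_a), len(terms_b)) / max(len(terms_a), len(terms_b), 1)
    return [jaccard, size_ratio]

# Training would use labelled pairs (features -> is_version_link);
# the two rows below are placeholders standing in for a real training set.
clf = RandomForestClassifier().fit([[0.9, 0.95], [0.1, 0.5]], [1, 0])

if candidate_pair("http://ex.org/onto/1.0", "http://ex.org/onto/1.1"):
    print(clf.predict([features({"Wine", "Grape"}, {"Wine", "Grape", "Region"})]))
```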
International Journal on Digital Libraries | 2018
Alessandro Adamou; Simon Brown; Helen Barlow; Carlo Allocca; Mathieu d’Aquin
Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures, and the psychological processes activated by the act of listening. Studies in the humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was nevertheless assumed that such evidence existed, albeit in purely textual form, and could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data about the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for source documentation that is already published. Reused data are re-published as open data with enhancements obtained by extending the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and has seamlessly incorporated data reuse since the very early data-entry phases. As the sources of the evidence often contain vague, fragmentary, or uncertain information, facilities were put in place to generate structured data out of such fuzziness. Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
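As a rough illustration of what a machine-readable record of a listening context could look like under Linked Data principles, the sketch below builds one with rdflib. The vocabulary and property names are hypothetical; LED's actual data model and its British Library and MusicBrainz links would differ.

```python
# A hedged sketch of an LED-style listening-experience record as Linked Data.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

LED = Namespace("http://example.org/led#")  # hypothetical vocabulary

g = Graph()
g.bind("led", LED)

exp = URIRef("http://example.org/experience/42")
g.add((exp, RDF.type, LED.ListeningExperience))
g.add((exp, LED.listener, Literal("Jane Doe")))                         # who listened
g.add((exp, LED.source, URIRef("http://example.org/bl/doc/123")))       # diary, letter...
g.add((exp, LED.approximateDate, Literal("1863", datatype=XSD.gYear)))  # fuzzy date
g.add((exp, LED.place, URIRef("http://sws.geonames.org/2643743/")))     # e.g. London

print(g.serialize(format="turtle"))
```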
international conference on semantic systems | 2017
Alessandro Adamou; Mathieu d'Aquin; Carlo Allocca; Enrico Motta
Virtual data integration takes place at query execution time and relies on transforming the original query for many target endpoints, where the data reside. In systems that integrate many data sources, this means maintaining many mappings, queries, and query templates, as well as possibly issuing separate queries for linking entities in the datasets and retrieving their data. We propose a practical approach to keeping such complexity under control, which manipulates the translation from one client query to many target queries. The method performs just-in-time recompilation of the client query into elements that are combined with a query template to form the target queries for multiple sources. It was validated in a setting with a custom star-shaped query language as the client API and SPARQL endpoints as sources. The approach has been shown to reduce both the number of target queries to issue and the number of query templates to maintain, using a number of compiler functions that scales with the complexity of the data source, with an overhead that is negligible where the method is most effective.
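The sketch below conveys the general idea of compiling one client query into per-source SPARQL via a shared template. The star-shaped query representation, the template, and the per-source mappings are illustrative assumptions, not the paper's actual interfaces.

```python
# Hypothetical just-in-time compilation of one client query into many
# target SPARQL queries, one per data source, from a single template.

# A star-shaped client query: one centre entity type plus requested attributes.
client_query = {"type": "Person", "attributes": ["name", "birthDate"]}

# Per-source compiler mappings translate client terms into source vocabularies.
SOURCE_MAPPINGS = {
    "source_a": {"Person": "exa:Person", "name": "exa:hasName", "birthDate": "exa:born"},
    "source_b": {"Person": "exb:Human", "name": "exb:label", "birthDate": "exb:dateOfBirth"},
}

TEMPLATE = "SELECT ?s {vars} WHERE {{ ?s a {cls} . {patterns} }}"

def compile_query(query: dict, source: str) -> str:
    """Fill the shared template with source-specific terms at query time."""
    m = SOURCE_MAPPINGS[source]
    vars_ = " ".join(f"?{a}" for a in query["attributes"])
    patterns = " ".join(f"?s {m[a]} ?{a} ." for a in query["attributes"])
    return TEMPLATE.format(vars=vars_, cls=m[query["type"]], patterns=patterns)

for src in SOURCE_MAPPINGS:
    print(compile_query(client_query, src))
```

One template plus a small mapping per source replaces a hand-written query set per source, which is where the reduction in maintained templates comes from.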
international semantic web conference | 2016
Carlo Allocca; Alessandro Adamou; Mathieu d’Aquin; Enrico Motta
This demo paper presents a SPARQL Query Recommendation Tool (called SQUIRE) based on query reformulation. Through three steps, Generalization, Specialization, and Evaluation, SQUIRE implements the logic of reformulating a SPARQL query that is satisfiable w.r.t. a source RDF dataset into others that are satisfiable w.r.t. a target RDF dataset. In contrast with existing approaches, SQUIRE aims to recommend queries whose reformulations: (i) reflect as closely as possible the intended meaning, structure, type of results, and result size of the original query, and (ii) do not require a mapping between the two datasets. Based on a set of criteria measuring the similarity between the initial query and the recommended ones, SQUIRE demonstrates the feasibility of the underlying query-reformulation process, ranks the recommended queries appropriately, and offers valuable support for query recommendation over an unknown and unmapped target RDF dataset, not only assisting the user in learning the data model and content of an RDF dataset, but also supporting its use without requiring the user to have intrinsic knowledge of the data.
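A hedged sketch of the Generalization and Specialization steps follows: constants in the source query are abstracted into variables, then re-instantiated with candidate terms from the target dataset; each candidate would then go through Evaluation (satisfiability checking and similarity-based ranking). The triple representation and the scope of abstraction (predicates only) are simplifications, not SQUIRE's actual implementation.

```python
# Hypothetical Generalization/Specialization over toy triple patterns.
from itertools import product

def generalize(query_triples):
    """Replace each predicate constant with a fresh variable (Generalization)."""
    generalized = []
    for i, (s, p, o) in enumerate(query_triples):
        generalized.append((s, f"?p{i}", o))  # abstract the predicate term
    return generalized

def specialize(generalized, target_predicates):
    """Re-instantiate abstracted terms with candidates from the target dataset."""
    n = len(generalized)
    for combo in product(target_predicates, repeat=n):
        yield [(s, combo[i], o) for i, (s, _, o) in enumerate(generalized)]

source_query = [("?film", ":directedBy", "?director")]
for candidate in specialize(generalize(source_query), [":director", ":hasDirector"]):
    # Evaluation would test each candidate against the target dataset and
    # rank the satisfiable ones by similarity to the original query.
    print(candidate)
```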
international conference on knowledge engineering and ontology development | 2009
Carlo Allocca; Mathieu d'Aquin; Enrico Motta
Archive | 2009
Carlo Allocca; Mathieu d'Aquin; Enrico Motta
international semantic web conference | 2012
Mathieu d'Aquin; Carlo Allocca; Trevor Collins
Archive | 2010
Mathieu d'Aquin; Carlo Allocca; Enrico Motta
international semantic web conference | 2008
Carlo Allocca; Mathieu d'Aquin; Enrico Motta