Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Sara Tonelli is active.

Publications

Featured research published by Sara Tonelli.


IEEE International Conference on Semantic Computing | 2011

Boosting Collaborative Ontology Building with Key-Concept Extraction

Sara Tonelli; Marco Rospocher; Emanuele Pianta; Luciano Serafini

We present a wiki-based collaborative environment for the semi-automatic incremental building of ontologies. The system relies on an existing platform, which has been extended with a component for terminology extraction from domain-specific textual corpora and with a further step aimed at matching the extracted concepts with pre-existing structured and semi-structured information. The system stands on the shoulders of a well-established user-friendly wiki architecture and it enables knowledge engineers and domain experts to collaborate in the ontology building process. We have performed a task-oriented evaluation of the tool in a real use case for incrementally constructing the missing part of an environmental ontology. The tool effectively supported the users in the task, thus showing its usefulness for knowledge extraction and ontology engineering.
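The abstract gives no implementation details, but the key-concept extraction step can be illustrated with a small sketch: terms that are markedly more frequent in the domain corpus than in a reference corpus are proposed as candidate concepts and then matched against the labels already present in the ontology. The function names and the plain frequency-ratio score below are illustrative assumptions, not the component used in the system.

from collections import Counter
import math
import re

def tokenize(text):
    # Lower-cased single-word tokens; a real terminology extractor would
    # use POS patterns and multi-word noun phrases instead.
    return re.findall(r"[a-z]+", text.lower())

def key_concepts(domain_docs, reference_docs, top_k=20):
    # Rank candidate terms by a simple domain-relevance score: the log ratio
    # of their relative frequency in the domain corpus vs. a reference corpus.
    dom = Counter(t for doc in domain_docs for t in tokenize(doc))
    ref = Counter(t for doc in reference_docs for t in tokenize(doc))
    dom_total, ref_total = sum(dom.values()), sum(ref.values())
    score = {
        term: math.log((count / dom_total) / ((ref.get(term, 0) + 1) / (ref_total + 1)))
        for term, count in dom.items()
    }
    return sorted(score, key=score.get, reverse=True)[:top_k]

def match_against_ontology(candidates, ontology_labels):
    # Split candidates into terms already covered by an ontology label
    # (crude exact match) and terms proposed to the users as new concepts.
    labels = {label.lower() for label in ontology_labels}
    covered = [c for c in candidates if c in labels]
    proposed = [c for c in candidates if c not in labels]
    return covered, proposed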


International Conference on Computational Linguistics | 2013

ERNESTA: A Sentence Simplification Tool for Children's Stories in Italian

Gianni Barlacchi; Sara Tonelli

We present ERNESTA (Enhanced Readability through a Novel Event-based Simplification Tool), the first sentence simplification system for Italian, specifically developed to improve the comprehension of factual events in stories for children with low reading skills. The system performs two basic actions: first, it analyzes a text by resolving anaphora (including null pronouns), so as to make all implicit information explicit; then, it simplifies the story sentence by sentence at the syntactic level, producing simple statements in the present tense about the factual events described in the story. Our simplification strategy is driven by psycholinguistic principles and targets children aged 7-11 with text comprehension difficulties. The evaluation shows that our approach achieves promising results. Furthermore, ERNESTA could be exploited in other tasks, for instance in the generation of educational games and reading comprehension tests.


Meeting of the Association for Computational Linguistics | 2007

Entailment and Anaphora Resolution in RTE3

Rodolfo Delmonte; Antonella Bristot; Marco Aldo Piccolino Boniforti; Sara Tonelli

We present VENSES, a linguistically-based approach to semantic inference built around a neat division of labour between two main components: a grammatically-driven subsystem, responsible for predicate-argument well-formedness, which works on the output of a deep parser producing augmented head-dependency structures; and a second subsystem that fires allowed logical and lexical inferences on the basis of different types of structural transformations intended to produce a semantically valid meaning correspondence. In the current challenge, we produced a new version of the system in which we do away with grammatical relations and only use semantic roles to generate weighted scores. We also added a number of modules to cope with fine-grained inferential triggers that were not present in previous datasets. Different levels of argumenthood have been devised to cope with the semantic uncertainty generated by nearly-inferable Text-Hypothesis pairs whose interpretation requires reasoning. RTE3 has introduced texts of paragraph length; this in turn has prompted us to upgrade VENSES with a discourse-level anaphora resolution module, which is paramount to allow entailment in pairs where the relevant portion of text contains pronominal expressions. We present the system, its relevance to the task at hand, and an evaluation.


ACM Symposium on Applied Computing | 2012

A Novel FrameNet-Based Resource for the Semantic Web

Volha Bryl; Sara Tonelli; Claudio Giuliano; Luciano Serafini

FrameNet is a large-scale lexical resource encoding information about semantic frames (situations) and semantic roles. The aim of the paper is to enrich FrameNet by mapping the lexical fillers of semantic roles to WordNet using a Wikipedia-based detour. The applied methodology relies on a word sense disambiguation step, in which a Wikipedia page is assigned to a role filler, and then BabelNet and YAGO are used to acquire WordNet synsets for a filler. We show how to represent the acquired resource in OWL, linking it to the existing RDF/OWL representations of FrameNet and WordNet. Part of the resource is evaluated by matching it with the WordNet synsets manually assigned by FrameNet lexicographers to a subset of semantic roles.
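The mapping is essentially a two-hop detour: a word sense disambiguation step links each role filler to a Wikipedia page, and the page is then resolved to WordNet synsets through BabelNet and YAGO. The sketch below only shows this data flow; both helper functions are hypothetical placeholders for the external components used in the paper.

from typing import List, Optional

def disambiguate_to_wikipedia(filler: str, context: str) -> Optional[str]:
    # Hypothetical WSD step: return the Wikipedia page title chosen for a
    # role filler given its sentence context.
    raise NotImplementedError

def wikipedia_to_wordnet(page_title: str) -> List[str]:
    # Hypothetical lookup: return WordNet synset identifiers linked to the
    # Wikipedia page (obtained in the paper via BabelNet and YAGO).
    raise NotImplementedError

def map_role_filler(filler: str, context: str) -> List[str]:
    # Wikipedia-based detour: filler -> Wikipedia page -> WordNet synsets.
    page = disambiguate_to_wikipedia(filler, context)
    return wikipedia_to_wordnet(page) if page else []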


CLIMA XIV Proceedings of the 14th International Workshop on Computational Logic in Multi-Agent Systems - Volume 8143 | 2013

From Discourse Analysis to Argumentation Schemes and Back: Relations and Differences

Elena Cabrio; Sara Tonelli; Serena Villata

In argumentation theory, argumentation schemes are abstract argument forms expressed in natural language, commonly used in everyday conversational argumentation. In computational linguistics, discourse analysis has been conducted to identify the discourse structure of connected text, i.e. the nature of the discourse relationships between sentences. In this paper, we propose to couple these two research lines in order to (i) use discourse relationships to automatically detect argumentation schemes in natural language text, and (ii) use argumentation schemes to reason over natural language arguments composed of premises and a conclusion. In particular, we analyze how argumentation schemes fit into the discourse relations of the Penn Discourse Treebank and which argumentation schemes emerge from this natural language corpus.


IWCS-8 '09 Proceedings of the Eighth International Conference on Computational Semantics | 2009

A novel approach to mapping FrameNet lexical units to WordNet synsets

Sara Tonelli; Emanuele Pianta

In this paper we present a novel approach to mapping FrameNet lexical units to WordNet synsets in order to automatically enrich the lexical unit set of a given frame. While the mapping approaches proposed in the past mainly rely on the semantic similarity between lexical units in a frame and lemmas in a synset, we exploit the definition of the lexical entries in FrameNet and the WordNet glosses to find the best candidate synset(s) for the mapping. Evaluation results are also reported and discussed.
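The gloss-based matching can be sketched with NLTK's WordNet interface: take the synsets of the lexical unit's lemma and prefer those whose gloss shares the most content words with the FrameNet definition. The overlap measure, stop list and example definition below are illustrative simplifications of the similarity computation actually used (and assume the WordNet data has been fetched with nltk.download('wordnet')).

from nltk.corpus import wordnet as wn

STOPWORDS = {"a", "an", "the", "of", "to", "in", "or", "and", "be", "is", "with"}

def content_words(text):
    # Crude bag of content words used to compare a definition with a gloss.
    return {w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS}

def best_synsets(lemma, pos, lu_definition, top_n=1):
    # Rank the WordNet synsets of `lemma` by lexical overlap between their
    # gloss and the FrameNet lexical-unit definition; return the best ones.
    definition_words = content_words(lu_definition)
    scored = []
    for synset in wn.synsets(lemma, pos=pos):
        overlap = len(definition_words & content_words(synset.definition()))
        scored.append((overlap, synset))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [synset for _, synset in scored[:top_n]]

# Toy example: candidate synset(s) for a verbal lexical unit "argue.v"
# (the definition string is invented for illustration only).
print(best_synsets("argue", wn.VERB, "express disagreement angrily with someone"))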


Artificial Intelligence | 2013

Wikipedia-based WSD for multilingual frame annotation

Sara Tonelli; Claudio Giuliano; Kateryna Tymoshenko

Many natural language processing applications have been shown to achieve significantly better performance when exploiting semantic information extracted from high-quality annotated resources. However, the practical use of such resources is often hampered by their limited coverage. Furthermore, they are generally available only for English and a few other languages. We propose a novel methodology that, starting from the mapping between FrameNet lexical units and Wikipedia pages, automatically harvests new lexical units and example sentences from Wikipedia. The goal is to build a reference data set for the semi-automatic development of new FrameNets. In addition, this methodology can be adapted to perform frame identification in any language available in Wikipedia. Our approach relies on a state-of-the-art word sense disambiguation system that is first trained on English Wikipedia to assign a page to the lexical units in a frame. Then, this mapping is further exploited to perform frame identification in English or in any other language available in Wikipedia. Our approach shows high potential in multilingual settings, because it can be applied to languages for which other lexical resources such as WordNet or thesauri are not available.
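The cross-lingual step relies on the fact that a Wikipedia page mapped to a lexical unit has interlanguage links: the linked article title in another language is a candidate lexical unit for the same frame. The sketch below queries the public MediaWiki API for that link; the WSD-based mapping itself is not shown, and the example page title is only illustrative.

from typing import Optional
import requests

API = "https://en.wikipedia.org/w/api.php"

def candidate_lexical_unit(page_title: str, lang: str) -> Optional[str]:
    # Return the title of the `lang` Wikipedia article linked to an English
    # page, as a candidate lexical unit for the same frame in that language.
    params = {
        "action": "query",
        "titles": page_title,
        "prop": "langlinks",
        "lllang": lang,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=10).json()
    for page in data["query"]["pages"].values():
        for link in page.get("langlinks", []):
            return link["*"]  # the interlanguage link target title
    return None

# e.g. candidate_lexical_unit("Bicycle", "it") is expected to return "Bicicletta"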


Knowledge-Based Systems | 2016

ALCIDE: Extracting and visualising content from large document collections to support humanities studies

Giovanni Moretti; Rachele Sprugnoli; Stefano Menini; Sara Tonelli

The application of research practices and methodologies from Information and Communication Technologies to the humanities is having a great impact on the way humanities research is conducted. However, although many applications have been developed to automatically analyse document collections from the historical or literary domain, they often fail to provide real support to scholars because of their inherent complexity: technical skills are often required to use them and to inspect their output. On the other hand, some systems are more user-friendly but offer only basic analyses and are limited to the needs of a specific research community. To overcome these limitations, we developed ALCIDE (Analysis of Language and Content In a Digital Environment), a web-based platform designed to assist humanities scholars in navigating and analysing large quantities of textual data such as historical sources and literary works. This suite of tools combines advanced text processing techniques with intuitive visualisations of the output to serve a broad range of research questions that no other comparable tool can address in a single platform. Textual corpora can be inspected and compared along five semantic dimensions: who, where, when, what and how. In different combinations, these dimensions target many key questions of different humanities disciplines, as shown in the five presented use cases.
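As an illustration of the who/where/when dimensions, a named-entity pass is already enough to build a rough navigation index. The sketch below assumes spaCy's small English model (en_core_web_sm) and is far simpler than ALCIDE's actual pipeline, which also covers the what and how dimensions.

from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

# Map spaCy entity labels onto the semantic dimensions used for navigation.
DIMENSION = {"PERSON": "who", "ORG": "who", "GPE": "where", "LOC": "where", "DATE": "when"}

def index_document(text):
    # Return a dimension -> set-of-mentions index for one document.
    index = defaultdict(set)
    for ent in nlp(text).ents:
        dimension = DIMENSION.get(ent.label_)
        if dimension:
            index[dimension].add(ent.text)
    return index

print(index_document("In 1919 Woodrow Wilson travelled to Paris for the peace conference."))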


Artificial Intelligence Applications and Innovations | 2012

Personalized Environmental Service Orchestration for Quality of Life Improvement

Leo Wanner; Stefanos Vrochidis; Marco Rospocher; Jürgen Moßgraber; Harald Bosch; Ari Karppinen; Maria Myllynen; Sara Tonelli; Nadjet Bouayad-Agha; Gerard Casamayor; Thomas Ertl; Désirée Hilbring; Lasse Johansson; Kostas D. Karatzas; Ioannis Kompatsiaris; Tarja Koskentalo; Simon Mille; Anastasia Moumtzidou; Emanuele Pianta; Luciano Serafini; V. Tarvainen

Environmental and meteorological conditions are of utmost importance for the population, as they are strongly related to the quality of life. Citizens are increasingly aware of this importance, and this awareness results in an increasing demand for environmental information tailored to their specific needs and background. We present an environmental information platform that supports the submission of user queries related to environmental conditions and orchestrates results from complementary services to generate personalized suggestions. From the technical viewpoint, the system discovers and processes reliable data on the web in order to convert it into knowledge. At run time, this information is transferred into an ontology-structured knowledge base, from which information relevant to the specific user is then deduced and communicated in their preferred language. The platform is demonstrated with real-world use cases in southern Finland, showing the impact it can have on the quality of everyday life.


Applied Ontology | 2012

Corpus-based terminological evaluation of ontologies

Marco Rospocher; Sara Tonelli; Luciano Serafini; Emanuele Pianta

We present a novel system for corpus-based terminological evaluation of ontologies. Starting from the assumption that a domain of interest can be represented through a corpus of text documents, we first extract a list of domain-specific key-concepts from the corpus, rank them by relevance, and then apply various evaluation metrics to assess the terminological coverage of a domain ontology with respect to the list of key-concepts. Among the advantages of the proposed approach, we remark that the framework is highly automatable, requiring little human intervention. The evaluation framework is made available online through a collaborative wiki-based system, which can be accessed by different users, from domain experts to knowledge engineers. We performed a comprehensive experimental analysis of our approach, showing that the proposed ontology metrics allow for assessing the terminological coverage of an ontology with respect to a given domain, and that our framework can be effectively applied to many evaluation-related scenarios.
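A minimal version of the coverage idea can be written down directly: given the corpus key-concepts ranked by relevance and the labels of the ontology concepts, measure how many of the top-k key-concepts the ontology already covers. The exact-match comparison and the toy data below are illustrative; the framework described in the paper uses richer matching and several metrics.

def terminological_coverage(ranked_key_concepts, ontology_labels, k=50):
    # Fraction of the k most relevant corpus key-concepts that match
    # (case-insensitively) the label of some ontology concept.
    labels = {label.lower() for label in ontology_labels}
    top = [term.lower() for term in ranked_key_concepts[:k]]
    covered = sum(1 for term in top if term in labels)
    return covered / len(top) if top else 0.0

# Toy example: two of the four most relevant key-concepts are covered.
print(terminological_coverage(
    ["air quality", "pollutant", "ozone", "emission"],
    ["Pollutant", "Ozone", "Weather station"],
    k=4,
))  # -> 0.5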

Collaboration


Dive into Sara Tonelli's collaborations.

Top Co-Authors

Rodolfo Delmonte, Ca' Foscari University of Venice
Antonella Bristot, Ca' Foscari University of Venice
Stefano Menini, Fondazione Bruno Kessler