Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gianni Pantaleo is active.

Publication


Featured research published by Gianni Pantaleo.


IEEE Transactions on Audio, Speech, and Language Processing | 2011

Automatic Transcription of Polyphonic Music Based on the Constant-Q Bispectral Analysis

Fabrizio Argenti; Paolo Nesi; Gianni Pantaleo

In the area of music information retrieval (MIR), automatic music transcription is considered one of the most challenging tasks, for which many different techniques have been proposed. This paper presents a new method for polyphonic music transcription: a system that aims at estimating the pitch, onset times, durations, and intensity of concurrent sounds in audio recordings played by one or more instruments. Pitch estimation is carried out by a front-end that jointly uses a constant-Q and a bispectral analysis of the input audio signal; subsequently, the processed signal is correlated with a fixed 2-D harmonic pattern. Onset and duration detection procedures are based on combining the constant-Q bispectral analysis with information from the signal spectrogram. The detection process is agnostic: it does not rely on musicological or instrumental models or other a priori knowledge. The system has been validated against the standard Real-World Computing (RWC) Classical Audio Database. The proposed method has demonstrated good performance in the multiple-F0 tracking task, especially for piano-only automatic transcription at MIREX 2009.
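As an illustration of the front-end idea above, here is a minimal Python sketch (assuming librosa is installed) that computes a constant-Q spectrogram and correlates it with a fixed harmonic comb to score candidate fundamentals. It is a simplified reconstruction, not the authors' system; in particular, the bispectral stage is omitted.

```python
# Minimal sketch: constant-Q analysis plus a fixed 2-D harmonic
# pattern, as a stand-in for the paper's front-end. Illustrative only.
import numpy as np
import librosa

BINS_PER_OCTAVE = 12
N_BINS = 72  # six octaves

def pitch_salience(y, sr):
    # Constant-Q magnitude spectrogram (log-frequency axis).
    C = np.abs(librosa.cqt(y, sr=sr,
                           bins_per_octave=BINS_PER_OCTAVE,
                           n_bins=N_BINS))
    # In log-frequency, the k-th harmonic of a pitch lies roughly
    # 12*log2(k) bins above its fundamental: a fixed comb template.
    harmonics = np.round(
        BINS_PER_OCTAVE * np.log2(np.arange(1, 6))).astype(int)
    salience = np.zeros_like(C)
    for b in range(N_BINS):
        idx = b + harmonics
        idx = idx[idx < N_BINS]
        # Correlate the spectrum with the comb anchored at bin b.
        salience[b] = C[idx].sum(axis=0)
    return salience  # high values suggest active fundamentals
```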


IEEE International Conference on Smart City / SocialCom / SustainCom | 2015

A Smart Decision Support System for Smart City

Marco Bartolozzi; Pierfrancesco Bellini; Paolo Nesi; Gianni Pantaleo; Luca Santi

Smart City frameworks address new challenges to improve the efficiency and sustainability of services for citizens, providing additional features and allowing the city environment to configure itself adaptively according to collected data and information. To this aim, Decision Support Systems (DSS) have recently been acquiring increasing importance in this context. This paper presents a Smart Decision Support System for Smart City, based on an evolution of the Analytic Hierarchy Process (AHP) model integrated with the Italian Flag three-value logic representation. The original contributions of this work are (i) the integration of the hierarchical model with probabilistic values and their propagation in the decision tree; (ii) the capability of integrating social and data processes by accessing and querying external repositories, gathering Smart City related data to assist decision makers through properly defined functions and thresholds; and (iii) a collaborative design that allows multiple users to share, clone and modify models and different instances of the same model. The proposed system has been validated in real cases by exploiting decision processes on smart city services of the Km4City solution in use in the Florence metropolitan area, http://www.disit.org/km4city.
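A rough sketch of the two ingredients named above: standard AHP weights from the principal eigenvector of a pairwise comparison matrix, and Italian Flag (green, white, red) triples propagated up the hierarchy as weighted averages. The propagation rule is an assumption for illustration; the paper's actual rules may differ.

```python
# Sketch of AHP prioritization plus a hypothetical Italian Flag
# propagation step; the weighted-average rule is an assumption.
import numpy as np

def ahp_weights(pairwise):
    # Principal eigenvector of the pairwise comparison matrix,
    # normalized to sum to 1 (standard AHP prioritization).
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def propagate_flags(weights, child_flags):
    # Weighted combination of children's (green, white, red) triples.
    return weights @ np.asarray(child_flags, dtype=float)

w = ahp_weights([[1, 3, 5],
                 [1/3, 1, 2],
                 [1/5, 1/2, 1]])
parent = propagate_flags(w, [(0.7, 0.2, 0.1),
                             (0.4, 0.4, 0.2),
                             (0.1, 0.3, 0.6)])
print(w, parent)  # the parent triple still sums to ~1
```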


Engineering Applications of Artificial Intelligence | 2016

Geographical localization of web domains and organization addresses recognition by employing natural language processing, pattern matching and clustering

Paolo Nesi; Gianni Pantaleo; Marco Tenti

The World Wide Web is growing at an increasing rate, and consequently the resources available online represent a large source of knowledge for various business and research interests. For instance, over the past years, increasing attention has been focused on retrieving information related to the geographical location of places and entities, which is largely contained in web pages and documents. However, such resources come in a wide variety of generally unstructured formats, which does not help end users find the desired information. The automatic annotation and comprehension of toponyms, location names and addresses (at different resolution and granularity levels) can deliver significant benefits for the whole web community by improving search engines' filtering capabilities and intelligent data mining systems. The present paper addresses the problem of gathering geographical information from unstructured text in web pages and documents. Specifically, the proposed method aims at extracting the geographical location (at street-number resolution) of commercial companies and services by annotating geo-related information from their web domains. The annotation process is based on Natural Language Processing (NLP) techniques for text comprehension, and relies on pattern matching and hierarchical cluster analysis for recognizing and disambiguating geographical entities. Geotagging performance has been assessed by evaluating the Precision, Recall and F-measure of the proposed system's output (represented as semantic RDF triples) against both a geo-annotated reference database and a semantic Smart City repository.
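A minimal sketch of the pattern matching plus hierarchical clustering combination described above: a regex proposes candidate street addresses, and single-linkage clustering over their coordinates (assumed already geocoded) keeps the dominant cluster to disambiguate the true location. The regex and thresholds are illustrative assumptions.

```python
# Sketch: regex address candidates + hierarchical clustering of
# candidate coordinates. Pattern and threshold are assumptions.
import re
from scipy.cluster.hierarchy import fcluster, linkage

ADDRESS_RE = re.compile(
    r"\b(?:Via|Viale|Piazza|Corso)\s+[A-Z][\w.' ]+?,?\s+\d+\b")

def candidate_addresses(text):
    return ADDRESS_RE.findall(text)

def dominant_cluster(coords, max_km=1.0):
    # Single-linkage clustering on (lat, lon) pairs; a crude
    # degree-based threshold (~0.009 deg ~ 1 km at mid latitudes).
    Z = linkage(coords, method="single")
    labels = fcluster(Z, t=max_km * 0.009, criterion="distance")
    best = max(set(labels), key=list(labels).count)
    return [c for c, l in zip(coords, labels) if l == best]

print(candidate_addresses("Sede legale: Via Roma, 12 - Firenze"))
```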


International Journal of Software Engineering and Knowledge Engineering | 2012

Assisted Knowledge Base Generation, Management and Competence Retrieval

Andrea Bellandi; Pierfrancesco Bellini; Antonio Cappuccio; Paolo Nesi; Gianni Pantaleo; Nadia Rauch

Despite the many systems available for developing and managing structured taxonomies and/or SKOS models of a given domain for which only small document sets are accessible, producing and maintaining such domain knowledge bases is still a very expensive and time-consuming process. This paper proposes a solution for assisting expert users in the development and management of knowledge bases, including SKOS and ontology modeling of structures and relationships. The proposed solution accelerates knowledge production by crawling and exploiting different kinds of sources (in multiple languages and with several inconsistencies among them). The proposed tool supports the experts in defining relationships among the most recurrent concepts, reducing the time to SKOS production and allowing assisted production. The validity of the produced knowledge base has been assessed by using a SPARQL query interface and a precision and recall model. The results have demonstrated better performance with respect to the state of the art. The solution has been developed for the Open Space Innovative Mind project, with the aim of creating a portal that allows industries to pose semantic queries to discover potential competences in a large institution such as the University of Florence, in which several distinct domains are associated with the various departments.
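The SKOS output format lends itself to a small sketch with rdflib: mined concepts become skos:Concept nodes linked by broader relations. The namespace URI and concept names below are illustrative assumptions, not the project's actual vocabulary.

```python
# Sketch: emitting SKOS concepts and broader relations with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/osim/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)

def add_concept(g, name, broader=None):
    c = EX[name]
    g.add((c, RDF.type, SKOS.Concept))
    g.add((c, SKOS.prefLabel,
           Literal(name.replace("_", " "), lang="en")))
    if broader is not None:
        g.add((c, SKOS.broader, EX[broader]))
    return c

add_concept(g, "machine_learning")
add_concept(g, "natural_language_processing",
            broader="machine_learning")
print(g.serialize(format="turtle"))
```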


Journal of Visual Languages and Computing | 2015

A Hadoop-based platform for natural language processing of web pages and documents

Paolo Nesi; Gianni Pantaleo; Gianmarco Sanesi

The rapid and extensive pervasion of information through the web has enhanced the diffusion of a huge amount of unstructured natural language textual resources. A great interest has arisen in the last decade in discovering, accessing and sharing such a vast source of knowledge. For this reason, processing very large data volumes in a reasonable time frame is becoming a major challenge and a crucial requirement for many commercial and research fields. Distributed systems, computer clusters and parallel computing paradigms have been increasingly applied in recent years, since they introduce significant improvements in computing performance in data-intensive contexts such as Big Data mining and analysis. Natural Language Processing, and particularly the tasks of text annotation and key feature extraction, is an application area with high computational requirements; therefore, these tasks can significantly benefit from parallel architectures. This paper presents a distributed framework for crawling web documents and running Natural Language Processing tasks in a parallel fashion. The system is based on the Apache Hadoop ecosystem and its parallel programming paradigm, called MapReduce. Specifically, we implemented a MapReduce adaptation of a GATE application (GATE is a widely used open source framework for text engineering and NLP). The solution is validated by using it to extract keywords and keyphrases from web documents in a multi-node Hadoop cluster. Evaluation of performance scalability has been conducted against a real corpus of web pages and documents.
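The MapReduce shape of the keyword extraction task can be sketched as a Hadoop Streaming style mapper/reducer pair; here a stopword filter stands in for the GATE annotation pipeline, purely for illustration.

```python
# Sketch: Hadoop Streaming style word counting. The map phase would
# wrap a GATE pipeline in the real system; a stopword filter is a
# stand-in here.
import sys
from itertools import groupby

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def mapper(lines):
    for line in lines:
        for tok in line.lower().split():
            tok = tok.strip(".,;:!?()\"'")
            if tok and tok not in STOPWORDS:
                yield tok, 1

def reducer(pairs):
    # Pairs arrive sorted by key, as Hadoop guarantees between phases.
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        yield key, sum(count for _, count in group)

if __name__ == "__main__":
    for word, n in reducer(sorted(mapper(sys.stdin))):
        print(f"{word}\t{n}")
```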


9th International Workshop on Semantic and Social Media Adaptation and Personalization | 2014

Ge(o)Lo(cator): Geographic Information Extraction from Unstructured Text Data and Web Documents

Paolo Nesi; Gianni Pantaleo; Marco Tenti

The constantly growing number of websites, web pages, documents and textual (Big) Data populating the Internet currently represents a massive resource of information and knowledge for various interests and across many different domains. However, the sheer amount and complexity of unstructured natural language textual data makes it difficult for end users to find specific, desired pieces of information. In the era of maximum uptake of social networks and media, the automatic extraction and retrieval of geographic information is becoming a field of great interest. In this paper, the GeLo system for extracting addresses and geographical coordinates of companies and organizations from their web domains is presented. The information extraction process relies on NLP techniques, specifically Part-Of-Speech tagging, pattern recognition and annotation. The overall system performance has been manually evaluated against a sizeable subset of the extracted URL database.
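A minimal sketch of combining Part-Of-Speech tagging with a pattern grammar to annotate address-like spans, in the spirit of the pipeline above. It assumes NLTK with its standard tokenizer and tagger models downloaded, and the chunk grammar is a deliberate simplification.

```python
# Sketch: POS tagging + a chunk grammar for address-like patterns.
# Requires nltk data: punkt and the averaged perceptron tagger.
import nltk

GRAMMAR = r"ADDR: {<NNP><NNP>*<,>?<CD>}"  # e.g. "Via Roma 12"
chunker = nltk.RegexpParser(GRAMMAR)

def address_spans(sentence):
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))
    tree = chunker.parse(tags)
    return [" ".join(tok for tok, _ in subtree.leaves())
            for subtree in tree.subtrees()
            if subtree.label() == "ADDR"]

print(address_spans("The office is at Via Roma 12, Florence."))
```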


Distributed Multimedia Systems | 2015

A Distributed Framework for NLP-Based Keyword and Keyphrase Extraction From Web Pages and Documents

Paolo Nesi; Gianni Pantaleo; Gianmarco Sanesi

The recent rapid growth of the World Wide Web and of the resources available online represents a massive source of knowledge for various research and business interests. Such knowledge is, for the most part, embedded in the textual content of web pages and documents, largely represented in unstructured natural language formats. In order to automatically ingest and process such huge amounts of data, single-machine, non-distributed architectures are proving inefficient for tasks like Big Data mining and intensive text processing and analysis. Current Natural Language Processing (NLP) systems are growing in complexity, and their computational power needs have increased significantly, requiring solutions such as distributed frameworks and parallel programming paradigms. This paper presents a distributed framework for executing NLP-related tasks in a parallel environment. This has been achieved by integrating the APIs of the widespread GATE open source NLP platform into a multi-node cluster built upon the open source Apache Hadoop file system. The proposed framework has been evaluated against a real corpus of web pages and documents.
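The partition/annotate/merge pattern that such a framework distributes can be sketched with multiprocessing standing in for the Hadoop cluster and a trivial tokenizer standing in for GATE; this shows the data flow only, not the actual GATE-on-Hadoop integration.

```python
# Sketch: shard the corpus, annotate each shard in parallel, merge
# partial results. multiprocessing is a stand-in for Hadoop.
from collections import Counter
from multiprocessing import Pool

def annotate_shard(docs):
    # Placeholder for a per-node GATE pipeline invocation.
    c = Counter()
    for doc in docs:
        c.update(tok.lower() for tok in doc.split() if len(tok) > 3)
    return c

def distributed_keywords(corpus, n_workers=4):
    shards = [corpus[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(annotate_shard, shards)
    total = Counter()
    for p in partials:
        total.update(p)
    return total.most_common(10)

if __name__ == "__main__":
    print(distributed_keywords(["hadoop cluster runs gate pipelines",
                                "keyword extraction from documents"]))
```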


Multimedia Tools and Applications | 2018

Predicting TV programme audience by using Twitter-based metrics

Alfonso Crisci; Valentina Grasso; Paolo Nesi; Gianni Pantaleo; Irene Paoli; Imad Zaza

The predictive capabilities of metrics based on Twitter data have been demonstrated in different fields: business, health, market, politics, etc. In specific cases, a deeper analysis is required to create useful metrics and models with predictive capabilities. In this paper, a set of metrics based on Twitter data is identified and presented in order to predict the audience of scheduled television programmes in which the audience is highly involved, as occurs with reality shows (e.g., X Factor and Pechino Express in Italy). The identified metrics are based on the volume of tweets, the distribution of linguistic elements, the volume of distinct users involved in tweeting, and the sentiment analysis of tweets. On this basis, a number of predictive models have been identified and compared, and the resulting method has been selected through validation and assessment on real data, with the aim of building a flexible framework able to exploit the predictive capabilities of social media data. Further details are reported about the method adopted to build the models, which focuses on identifying predictors by their statistical significance. Experiments are based on Twitter data collected with the Twitter Vigilance platform, which is also presented in this paper.
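The model-building step described (predictor selection by statistical significance) can be sketched with statsmodels: fit an ordinary least squares regression of audience on Twitter metrics and keep the predictors whose p-values pass a threshold. The column names and synthetic data are illustrative assumptions, not the paper's metric set.

```python
# Sketch: OLS regression with p-value based predictor selection.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tweet_volume": rng.poisson(5000, 40),
    "distinct_users": rng.poisson(2000, 40),
    "positive_ratio": rng.uniform(0, 1, 40),
})
# Synthetic target: audience driven mainly by tweet volume.
df["audience"] = 0.8 * df["tweet_volume"] + rng.normal(0, 500, 40)

X = sm.add_constant(
    df[["tweet_volume", "distinct_users", "positive_ratio"]])
model = sm.OLS(df["audience"], X).fit()
significant = model.pvalues[model.pvalues < 0.05].index.tolist()
print(model.rsquared, significant)
```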


PeerJ | 2016

Weather events identification in social media streams: tools to detect their evidence in Twitter

Valentina Grasso; Imad Zaza; Federica Zabini; Gianni Pantaleo; Paolo Nesi; Alfonso Crisci

Identifying and monitoring the impact of severe weather through social media data is a challenging problem for data science. Recent years have seen an increase in weather-related disasters, partly due to climate change. Many works have shown that during such events people tend to share messages on social media platforms, especially Twitter. Not only do these messages contribute to "situational" awareness, improving the dissemination of information during emergencies, but they can also be used to assess the social impact of crisis events. In this work we present preliminary findings concerning how the temporal distribution of weather-related messages may help identify severe events that impacted a community. Severe weather events are recognizable by observing the synchronization of Twitter activity volumes across keywords and hashtags, including geo-names. Impacting events present a recognizable visual pattern recalling a "Half Onion" shape, in which Twitter activity is synchronized across keywords. Given these indications, it is becoming essential to have a suite of reliable tools for monitoring social media data. For Twitter data, a comprehensive suite of tools is presented: the DISIT Twitter Vigilance platform for Twitter data retrieval, management and visualization.
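A minimal sketch of the synchronization signal described above: per-keyword tweet counts are z-scored, and a time slot is flagged when several keywords spike together. Thresholds and keyword columns are illustrative assumptions.

```python
# Sketch: flag hours where tweet activity spikes across many
# keywords at once, approximating the synchronization pattern.
import numpy as np
import pandas as pd

def synchronized_spikes(counts, z_thresh=2.0, min_keywords=3):
    # counts: DataFrame indexed by time, one column per keyword.
    z = (counts - counts.mean()) / counts.std(ddof=0)
    spiking = (z > z_thresh).sum(axis=1)
    return counts.index[spiking >= min_keywords]

idx = pd.date_range("2016-01-01", periods=48, freq="h")
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.poisson(10, (48, 5)), index=idx,
                  columns=["#storm", "#flood", "rain",
                           "wind", "Florence"])
df.iloc[30] += 60  # a burst synchronized across keywords
print(synchronized_spikes(df))
```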


IEEE International Smart Cities Conference | 2016

Functional Resonance Analysis Method-based decision support tool for urban transport system resilience management

Emanuele Bellini; Paolo Nesi; Gianni Pantaleo; Alessandro Venturi

Today, managing critical infrastructure resilience in a smart city is a challenge that can be undertaken by adopting a new class of smart tools able to integrate modeling capabilities with evidence-driven decision support. The Resilience Decision Support tool presented in this article is an innovative and powerful tool that aims at managing critical infrastructure resilience through a more complex and expressive model based on the Functional Resonance Analysis Method (FRAM), and through the connection of this model with a system-thinking based decision support tool exploiting smart city data. Thanks to ResilienceDS, the FRAM model becomes computable, and the functional variability at the core of the resilience analysis can be quantified. Such quantification allows the decision support tool to compute specific strategies and recommendations for variability dampening at the strategic, tactical and operational stages. The solution has been developed in the context of the RESOLUTE H2020 project of the European Commission.
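One way a FRAM-like model can be made computable, sketched under strong simplifying assumptions: functions are graph nodes carrying a variability score, couplings propagate a fraction of upstream variability downstream, and dampening is recommended where the accumulated score crosses a threshold. Neither the scores nor the propagation rule come from the paper.

```python
# Sketch: variability propagation over functional couplings, with a
# threshold-based dampening recommendation. All values hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edge("monitor_traffic", "reroute_buses", weight=0.6)
g.add_edge("reroute_buses", "inform_citizens", weight=0.8)
variability = {"monitor_traffic": 0.5, "reroute_buses": 0.2,
               "inform_citizens": 0.1}

# Accumulate a weighted share of upstream variability downstream.
for node in nx.topological_sort(g):
    for _, child, data in g.out_edges(node, data=True):
        variability[child] += data["weight"] * variability[node]

to_dampen = [n for n, v in variability.items() if v > 0.5]
print(variability, "dampen:", to_dampen)
```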

Collaboration


Dive into Gianni Pantaleo's collaborations.

Top Co-Authors

Paolo Nesi, University of Florence
Imad Zaza, University of Florence
Alfonso Crisci, National Research Council
Irene Paoli, University of Florence