
Publication


Featured research published by Emanuele Bastianelli.


International Conference on Advanced Robotics | 2013

On-line semantic mapping

Emanuele Bastianelli; Domenico Daniele Bloisi; Roberto Capobianco; Fabrizio Cossu; Guglielmo Gemignani; Luca Iocchi; Daniele Nardi

Human-Robot Interaction is a key enabling feature to support the introduction of robots in everyday environments. However, robots are currently incapable of building representations of the environment that allow both for the execution of complex tasks and for easy interaction with the user requesting them. In this paper, we focus on semantic mapping, namely the problem of building a representation of the environment that combines metric and symbolic information about the elements of the environment and the objects therein. Specifically, we extend previous approaches by enabling on-line semantic mapping, which permits adding to the representation elements acquired through long-term interaction with the user. The proposed approach has been experimentally validated on different kinds of environments, with several users, and on multiple robotic platforms.
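As a rough illustration of a combined metric-symbolic representation that is updated on-line, here is a minimal sketch; the class, method names, and example data are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: a semantic map pairing a metric layer (positions)
# with a symbolic layer (categories), extended on-line as the user names
# new elements during interaction. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    metric: dict = field(default_factory=dict)    # symbol -> (x, y) position
    symbolic: dict = field(default_factory=dict)  # symbol -> category tag

    def add_element(self, name, position, category):
        """On-line update: ground a user-named element in the metric map."""
        self.metric[name] = position
        self.symbolic[name] = category

    def resolve(self, category):
        """Return the positions of all elements of a given category."""
        return {n: self.metric[n]
                for n, c in self.symbolic.items() if c == category}

m = SemanticMap()
m.add_element("fridge", (2.0, 3.5), "appliance")
m.add_element("oven", (1.0, 3.0), "appliance")
```

A task such as "go to an appliance" can then be served by `resolve("appliance")`, which returns metric goals for navigation while the symbolic tags carry the meaning.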


IEEE Robotics & Automation Magazine | 2015

Competitions for Benchmarking: Task and Functionality Scoring Complete Performance Assessment

Francesco Amigoni; Emanuele Bastianelli; Jakob Berghofer; Andrea Bonarini; Giulio Fontana; Nico Hochgeschwender; Luca Iocchi; Gerhard K. Kraetzschmar; Pedro U. Lima; Matteo Matteucci; Pedro Miraldo; Daniele Nardi; Viola Schiaffonati

Scientific experiments and robotic competitions share some common traits that can put the debate about developing better experimental methodologies and the replicability of results in robotics research on more solid ground. In this context, the Robot Competitions Kick Innovation in Cognitive Systems and Robotics (RoCKIn) project aims to develop competitions that come close to scientific experiments, providing an objective performance evaluation of robot systems under controlled and replicable conditions. In this article, by further articulating replicability into reproducibility and repeatability, and by considering some results from the first RoCKIn competition held in 2014, we show that the RoCKIn approach offers tools that enable the replicability of experimental results.


Intelligenza Artificiale | 2012

Structured learning for semantic role labeling

Danilo Croce; Giuseppe Castellucci; Emanuele Bastianelli

The use of complex grammatical features in statistical language learning assumes the availability of large-scale training data and good-quality parsers, especially for languages other than English. In this paper, we show how good-quality FrameNet SRL systems can be obtained, without relying on full syntactic parsing, by backing off to surface grammatical representations and structured learning. This model is shown to achieve state-of-the-art results on standard benchmarks, while its robustness is confirmed under poor training conditions for a language other than English, i.e. Italian.

1 Linguistic Features for Inductive Tasks

Language learning systems usually generalize linguistic observations into statistical models of higher-level semantic tasks, such as Semantic Role Labeling (SRL). Statistical learning methods assume that lexical or grammatical aspects of the training data are the basic features for modeling the different inferences; these are then generalized into predictive patterns that compose the final induced model. Lexical information captures semantic information and fine-grained, context-dependent aspects of the input data. However, it is largely affected by data sparseness, as lexical evidence is often poorly represented in training. It is also difficult to generalize and does not scale, as the development of large-scale lexical knowledge bases is very expensive. Moreover, other crucial properties, such as word ordering, are neglected by lexical representations, so syntax must also be properly addressed. In semantic role labeling, the role of grammatical features has been outlined since the seminal work by [6]. Symbolic expressions derived from parse trees denote the position of, and the relationship between, an argument and its predicate, and they are used as features; parse tree paths, employed in [11] for semantic role labeling, are one such feature.

Tree kernels, introduced by [4], model the similarity between two training examples as a function of the shared parts of their parse trees. Applied to different tasks, from parsing [4] to semantic role labeling [16], tree kernels yield expressive representations for effective grammatical feature engineering. However, there is no free lunch in the adoption of lexical and grammatical features in complex NLP tasks. First, lexical information is hard to generalize properly whenever the amount of training data is small. Large-scale general-purpose lexicons are available, but their use in specific tasks is not satisfactory: coverage in domain-specific (or corpus-specific) tasks is often poor, and domain adaptation is difficult.
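To illustrate the subtree-counting idea behind such kernels, here is a minimal sketch in the style of the Collins-Duffy convolution tree kernel; the tree encoding (label plus child list, with plain strings as leaves) and all function names are our own assumptions, not code from the paper:

```python
# Illustrative sketch of a subtree-counting tree kernel in the style of
# Collins and Duffy: K(t1, t2) counts the subtrees shared by two parse
# trees. Trees are (label, [children]) pairs; leaves are plain strings.

def production(node):
    """The grammar rule expanded at this node, e.g. ('NP', ('D', 'N'))."""
    label, children = node
    return (label, tuple(c if isinstance(c, str) else c[0] for c in children))

def c_delta(n1, n2, lam=1.0):
    """Count of common subtrees rooted at n1 and n2, decayed by lam."""
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1], n2[1]):
        if not isinstance(c1, str):          # recurse only below preterminals
            score *= 1.0 + c_delta(c1, c2, lam)
    return score

def tree_kernel(t1, t2, lam=1.0):
    """K(t1, t2): sum of c_delta over all pairs of internal nodes."""
    def nodes(t):
        yield t
        for c in t[1]:
            if not isinstance(c, str):
                yield from nodes(c)
    return sum(c_delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))
```

For example, the tree `("NP", [("D", ["the"]), ("N", ["dog"])])` compared with itself yields 6, one for each of its six subtrees; the decay factor `lam` down-weights larger shared fragments.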


European Conference on Artificial Intelligence | 2014

Effective and robust natural language understanding for human-robot interaction

Emanuele Bastianelli; Giuseppe Castellucci; Danilo Croce; Roberto Basili; Daniele Nardi

Robots are slowly becoming part of everyday life, as they are being marketed for commercial applications (viz. telepresence, cleaning, or entertainment). Thus, the ability to interact with non-expert users is becoming a key requirement. Even if user utterances can be efficiently recognized and transcribed by Automatic Speech Recognition systems, several issues arise in translating them into suitable robotic actions. In this paper, we discuss two existing Natural Language Understanding workflows for Human-Robot Interaction. The first is a grammar-based approach, which recognizes a restricted set of commands. The second is a data-driven approach, based on a free-form speech recognizer and a statistical semantic parser. The main advantages of both approaches are discussed, also from an engineering perspective, i.e. considering the effort of realizing HRI systems as well as their reusability and robustness. An empirical evaluation of the proposed approaches is carried out on several datasets, in order to understand their performance and identify possible improvements towards the design of NLP components in HRI.


Congress of the Italian Association for Artificial Intelligence | 2013

Kernel-Based Discriminative Re-ranking for Spoken Command Understanding in HRI

Roberto Basili; Emanuele Bastianelli; Giuseppe Castellucci; Daniele Nardi; Vittorio Perera

Speech recognition is being addressed as one of the key technologies for natural interaction with robots that are targeting the consumer market. However, speech recognition in human-robot interaction is typically affected by the noisy conditions of the operational environment, which impact the performance of spoken command recognition. Consequently, finite-state grammars or statistical language models, even though they can be tailored to the target domain, exhibit high rates of false positives or low accuracy. In this paper, a discriminative re-ranking method is applied to a simple speech and language processing cascade, based on off-the-shelf components, in realistic conditions. Tree kernels are applied to improve the accuracy of the recognition process by re-ranking the n-best list returned by the speech recognition component. The rationale behind our approach is to reduce the effort of devising domain-dependent solutions in the design of speech interfaces for language processing in human-robot interaction.
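The re-ranking setup can be sketched roughly as follows, with a toy keyword scorer standing in for the paper's kernel-based discriminative model; the lexicon, weights, and function names are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of n-best re-ranking: the ASR returns ranked transcriptions,
# and a domain-aware scorer (here a toy keyword model standing in for a
# tree-kernel re-ranker) promotes the hypothesis that best matches known
# command verbs. All names and data are illustrative.

COMMAND_VERBS = {"go", "take", "bring", "follow"}  # assumed domain lexicon

def domain_score(hypothesis):
    """Toy stand-in for a discriminative scorer over parse structures."""
    tokens = hypothesis.lower().split()
    return sum(1 for t in tokens if t in COMMAND_VERBS) / max(len(tokens), 1)

def rerank(n_best, alpha=0.3):
    """Combine ASR rank evidence with the domain score; return best string."""
    scored = []
    for rank, hyp in enumerate(n_best):      # n_best ordered by ASR score
        asr_score = 1.0 / (rank + 1)         # proxy for ASR confidence
        scored.append((alpha * asr_score + (1 - alpha) * domain_score(hyp), hyp))
    return max(scored)[1]

n_best = ["goat to the kitchen", "go to the kitchen", "coat of the kitchen"]
best = rerank(n_best)
```

Here the acoustically top-ranked but nonsensical "goat to the kitchen" is demoted in favor of the second hypothesis, which contains a known command verb.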


Applied Intelligence | 2016

Speaky for robots: the development of vocal interfaces for robotic applications

Emanuele Bastianelli; Daniele Nardi; Luigia Carlucci Aiello; Fabrizio Giacomelli; Nicolamaria Manes

The currently available speech technologies on mobile devices achieve effective performance in terms of both reliability and the language they are able to capture. The availability of performant speech recognition engines may also support the deployment of vocal interfaces in consumer robots. However, the design and implementation of such interfaces still requires significant work. The language processing chain and the domain knowledge must be built for the specific features of the robotic platform, the deployment environment and the tasks to be performed. Hence, such interfaces are currently built in a completely ad hoc way. In this paper, we present a design methodology together with a support tool aiming to streamline and improve the implementation of dedicated vocal interfaces for robots. This work was developed within an experimental project called Speaky for Robots. We extend the existing vocal interface development framework to target robotic applications. The proposed solution is built using a bottom-up approach by refining the language processing chain through the development of vocal interfaces for different robotic platforms and domains. The proposed approach is validated both in experiments involving several research prototypes and in tests involving end-users.


Robotics and Autonomous Systems | 2016

Living with robots

Guglielmo Gemignani; Roberto Capobianco; Emanuele Bastianelli; Domenico Daniele Bloisi; Luca Iocchi; Daniele Nardi

Robots, in order to properly interact with people and effectively perform the requested tasks, should have a deep and specific knowledge of the environment they live in. Current capabilities of robotic platforms in understanding the surrounding environment and the assigned tasks are limited, despite the recent progress in robotic perception. Moreover, novel improvements in human-robot interaction support the view that robots should be regarded as intelligent agents that can request the help of the user to improve their knowledge and performance. In this paper, we present a novel approach to semantic mapping. Instead of requiring our robots to autonomously learn every possible aspect of the environment, we propose a shift in perspective, allowing non-expert users to shape robot knowledge through human-robot interaction. Thus, we present a fully operational prototype system that is able to incrementally and on-line build a rich and specific representation of the environment. This novel representation combines the metric information needed for navigation tasks with the symbolic information that conveys meaning to the elements of the environment and the objects therein. Thanks to such a representation, we are able to exploit multiple AI techniques to resolve spatial referring expressions and support task execution. The proposed approach has been experimentally validated on different kinds of environments, by several users, and on multiple robotic platforms.

Highlights:
- A method for incremental and on-line semantic mapping based on HRI.
- A four-layered representation for semantic maps used to support robot task execution.
- A thorough description and evaluation of a full semantic mapping system.


Congress of the Italian Association for Artificial Intelligence | 2015

Using Semantic Models for Robust Natural Language Human Robot Interaction

Emanuele Bastianelli; Danilo Croce; Roberto Basili; Daniele Nardi

While robotic platforms are moving from industrial to consumer applications, the need for flexible and intuitive interfaces becomes more critical, and the capability of handling the variability of human language becomes a strict requirement. Grounding of lexical expressions, i.e. mapping words of a user utterance to the perceived entities of the robot's operational scenario, is particularly critical. Usually, grounding proceeds by learning how to associate objects categorized in discrete classes (e.g. routes or sets of visual patterns) with linguistic expressions. In this work, we discuss how lexical mapping functions that integrate Distributional Semantics representations and phonetic metrics can be adopted to robustly automate the grounding of language expressions into the robotic semantic maps of a house environment. In this way, the pairing between words and objects in a semantic map facilitates the grounding without the need for an explicit categorization. Comparative measures demonstrate the viability of the proposed approach and the achievable robustness, which is quite crucial in operational robotic settings.
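The combination of distributional and phonetic evidence can be sketched minimally as follows; the tiny hand-made vectors and the use of a generic string matcher as a stand-in for a phonetic metric are our assumptions, not the paper's models:

```python
# Minimal sketch of lexical grounding: combine a distributional
# (vector-space) similarity with a string/phonetic similarity to map a
# spoken word onto entities in a semantic map. Vectors and names are toys.
from difflib import SequenceMatcher
from math import sqrt

# Toy distributional vectors for map entities and the query word
VECTORS = {
    "refrigerator": (0.9, 0.1, 0.2),
    "sofa":         (0.1, 0.8, 0.3),
    "fridge":       (0.85, 0.15, 0.25),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def ground(word, entities, w=0.5):
    """Pick the map entity maximizing semantic plus phonetic similarity."""
    def score(entity):
        semantic = cosine(VECTORS[word], VECTORS[entity])
        phonetic = SequenceMatcher(None, word, entity).ratio()
        return w * semantic + (1 - w) * phonetic
    return max(entities, key=score)

entity = ground("fridge", ["refrigerator", "sofa"])
```

The point of the combination is that "fridge" grounds to the map entity labeled "refrigerator" even though the two strings differ, because the distributional vectors place them close together.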


Artificial General Intelligence | 2013

Knowledgeable talking robots

Luigia Carlucci Aiello; Emanuele Bastianelli; Luca Iocchi; Daniele Nardi; Vittorio Perera; Gabriele Randelli

Speech technologies nowadays available on mobile devices show increased performance, both in terms of the language they are able to capture and in terms of reliability. The availability of performant speech recognition engines suggests the deployment of vocal interfaces in consumer robots as well. In this paper, we report on our current work, focusing specifically on the difficulties that arise in grounding the users' utterances in the environment where the robot is operating.


Polibits | 2016

Robust Spoken Language Understanding for House Service Robots

Andrea Vanzo; Danilo Croce; Emanuele Bastianelli; Roberto Basili; Daniele Nardi

Service robotics has been growing significantly in recent years, leading to several research results and to a number of consumer products. One of the essential features of these robotic platforms is the ability to interact with users through natural language. Spoken commands can be processed by a Spoken Language Understanding chain in order to obtain the desired behavior of the robot. The entry point of such a process is an Automatic Speech Recognition (ASR) module, which provides a list of transcriptions for a given spoken utterance. Although several well-performing ASR engines are available off-the-shelf, they operate in a general-purpose setting. Hence, they may not be well suited to the recognition of utterances given to robots in specific domains. In this work, we propose a practical yet robust strategy to re-rank lists of transcriptions. This approach improves the quality of ASR systems in situated scenarios, i.e., the transcription of robotic commands. The proposed method relies upon evidence derived from a semantic grammar with semantic actions, designed to model typical commands expressed in scenarios that are specific to house service robotics. The outcomes obtained through an experimental evaluation show that the approach is able to effectively outperform the ASR baseline, obtained by selecting the first transcription suggested by the ASR.
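The grammar-as-evidence idea can be sketched as follows; the regex below is a toy stand-in for the semantic grammar described in the abstract, and all names and sample commands are illustrative assumptions:

```python
# Hedged sketch of grammar-driven re-ranking: among the ASR's candidate
# transcriptions (ordered by ASR score), prefer the first one accepted by
# a small command grammar for the home-service domain. The regex grammar
# is a toy stand-in for the paper's semantic grammar with actions.
import re

# Toy grammar: <verb> the <object/location>
COMMAND = re.compile(
    r"^(go to|bring me|take|clean) the (kitchen|bottle|table|living room)$")

def rerank_with_grammar(n_best):
    """Return the highest-ranked transcription the grammar accepts,
    falling back to the ASR's first hypothesis if none parses."""
    for hyp in n_best:
        if COMMAND.match(hyp.lower()):
            return hyp
    return n_best[0]

n_best = ["go two the kitchen", "go to the kitchen", "goat the kitchen"]
choice = rerank_with_grammar(n_best)
```

The misrecognized top hypothesis fails to parse as a command, so the second transcription is selected; when no candidate parses, the method degrades gracefully to the ASR baseline.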

Collaboration


Dive into Emanuele Bastianelli's collaborations.

Top Co-Authors

Daniele Nardi (Sapienza University of Rome)
Danilo Croce (University of Rome Tor Vergata)
Roberto Basili (University of Rome Tor Vergata)
Luca Iocchi (Sapienza University of Rome)
Giuseppe Castellucci (University of Rome Tor Vergata)
Roberto Capobianco (Sapienza University of Rome)
Andrea Vanzo (Sapienza University of Rome)