Publication


Featured research published by Roberto Capobianco.


International Conference on Advanced Robotics | 2013

On-line semantic mapping

Emanuele Bastianelli; Domenico Daniele Bloisi; Roberto Capobianco; Fabrizio Cossu; Guglielmo Gemignani; Luca Iocchi; Daniele Nardi

Human-Robot Interaction is a key enabling feature to support the introduction of robots in everyday environments. However, robots are currently incapable of building representations of the environments that allow both for the execution of complex tasks and for an easy interaction with the user requesting them. In this paper, we focus on semantic mapping, namely the problem of building a representation of the environment that combines metric and symbolic information about the elements of the environment and the objects therein. Specifically, we extend previous approaches by enabling on-line semantic mapping, which allows elements acquired through long-term interaction with the user to be added to the representation. The proposed approach has been experimentally validated on different kinds of environments, with several users, and on multiple robotic platforms.


International Symposium on Experimental Robotics | 2016

Improved Learning of Dynamics Models for Control

Arun Venkatraman; Roberto Capobianco; Lerrel Pinto; Martial Hebert; Daniele Nardi; J. Andrew Bagnell

Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models from a small number of examples. In this paper we present an extension to Data As Demonstrator for handling controlled dynamics, in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
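The core idea behind Data As Demonstrator (DaD) can be illustrated with a minimal sketch: fit a one-step model, roll it out along a training trajectory, and aggregate correction pairs that map the model's own (possibly drifted) predictions back to the true next states, so that multi-step rollouts stop compounding error. This is a hedged illustration, not the paper's implementation: a plain linear model stands in for the learned dynamics, and all function names are invented here.

```python
import numpy as np

def rollout(A, x0, steps):
    """Roll the learned one-step model x_{t+1} = A x_t forward from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return xs

def dad_train(trajectory, iters=3):
    """DaD-style training loop: after each fit, roll the model along the
    trajectory and add (predicted state -> true next state) corrections
    to the training set, then refit."""
    X = list(trajectory[:-1])   # inputs: states x_t
    Y = list(trajectory[1:])    # targets: true next states x_{t+1}
    for _ in range(iters):
        # Least-squares fit of W in X W = Y, so A = W.T
        W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
        A = W.T
        preds = rollout(A, trajectory[0], len(trajectory) - 1)
        # Aggregate corrections: where the rollout ended up -> where it
        # should have gone next.
        X += preds[:-1]
        Y += list(trajectory[1:])
    return A
```

On noiseless linear data the fit is already exact, so the aggregation is a no-op; its benefit appears when the model class or the data is imperfect and open-loop rollouts drift.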


International Conference on Intelligent Autonomous Systems | 2016

Automatic Extraction of Structural Representations of Environments

Roberto Capobianco; Guglielmo Gemignani; Domenico Daniele Bloisi; Daniele Nardi; Luca Iocchi

Robots need a suitable representation of the surrounding world to operate in a structured but dynamic environment. State-of-the-art approaches usually rely on a combination of metric and topological maps and require an expert to provide the knowledge to the robot in a suitable format. Therefore, additional symbolic knowledge cannot be easily added to the representation in an incremental manner. This work deals with the problem of effectively binding the high-level semantic information to the low-level knowledge represented in the metric map, by introducing an intermediate grid-based representation. In order to demonstrate its effectiveness, the proposed approach has been experimentally validated on different kinds of environments.


European Conference on Mobile Robots | 2015

A proposal for semantic map representation and evaluation

Roberto Capobianco; Jacopo Serafin; Johann Dichtl; Giorgio Grisetti; Luca Iocchi; Daniele Nardi

Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantics of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as of standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization of the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, and we hypothesize some possible evaluation metrics. Moreover, by providing a tool for the construction of semantic map ground truth, we aim to involve the scientific community in acquiring data for populating the dataset.
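The general shape of such a formalism — a symbolic layer defined on top of a metric map — can be sketched as follows. All class names, fields, and the file name are hypothetical illustrations, not the schema actually proposed in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class MapObject:
    """A symbolic element anchored in metric-map coordinates."""
    label: str                # instance name, e.g. "fridge-1"
    category: str             # symbolic category, e.g. "appliance"
    pose: tuple               # (x, y, theta) in the metric map frame
    properties: dict = field(default_factory=dict)

@dataclass
class SemanticMap:
    """Extensible symbolic layer over a metric map: objects plus
    (subject, predicate, object) relations between them."""
    metric_map_file: str
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)

    def add_object(self, obj: MapObject):
        self.objects.append(obj)

    def query(self, category: str):
        """Ground a symbolic category into metric locations."""
        return [(o.label, o.pose) for o in self.objects if o.category == category]
```

The point of such a layer is that benchmarking tools and reasoners can operate on the symbolic side while the metric map stays untouched underneath.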


Robotics and Autonomous Systems | 2016

Living with robots

Guglielmo Gemignani; Roberto Capobianco; Emanuele Bastianelli; Domenico Daniele Bloisi; Luca Iocchi; Daniele Nardi

Robots, in order to properly interact with people and effectively perform the requested tasks, should have a deep and specific knowledge of the environment they live in. Current capabilities of robotic platforms in understanding the surrounding environment and the assigned tasks are limited, despite the recent progress in robotic perception. Moreover, novel improvements in human-robot interaction support the view that robots should be regarded as intelligent agents that can request the help of the user to improve their knowledge and performance. In this paper, we present a novel approach to semantic mapping. Instead of requiring our robots to autonomously learn every possible aspect of the environment, we propose a shift in perspective, allowing non-expert users to shape robot knowledge through human-robot interaction. Thus, we present a fully operational prototype system that is able to incrementally and on-line build a rich and specific representation of the environment. Such a novel representation combines the metric information needed for navigation tasks with the symbolic information that conveys meaning to the elements of the environment and the objects therein. Thanks to such a representation, we are able to exploit multiple AI techniques to solve spatial referring expressions and support task execution. The proposed approach has been experimentally validated on different kinds of environments, by several users, and on multiple robotic platforms.

Highlights: a method for incremental and on-line semantic mapping based on HRI; a four-layered representation for semantic maps used to support robot task execution; a thorough description and evaluation of a full semantic mapping system.


International Symposium on Experimental Robotics | 2016

Interactive Semantic Mapping: Experimental Evaluation

Guglielmo Gemignani; Daniele Nardi; Domenico Daniele Bloisi; Roberto Capobianco; Luca Iocchi

Robots that are launched in the consumer market need to provide more effective human robot interaction, and, in particular, spoken language interfaces. However, in order to support the execution of high level commands as they are specified in natural language, a semantic map is required. Such a map is a representation that enables the robot to ground the commands into the actual places and objects located in the environment. In this paper, we present the experimental evaluation of a system specifically designed to build semantically rich maps, through the interaction with the user. The results of the experiments not only provide the basis for a discussion of the features of the proposed approach, but also highlight the manifold issues that arise in the evaluation of semantic mapping.


arXiv: Robotics | 2016

STAM: A Framework for Spatio-Temporal Affordance Maps

Francesco Riccio; Roberto Capobianco; Marc Hanheide; Daniele Nardi

Affordances have been introduced in the literature as action opportunities that objects offer, and used in robotics to semantically represent their interconnection. However, when considering an environment instead of a single object, the problem becomes more complex due to the dynamism of its state. To tackle this issue, we introduce the concepts of Spatio-Temporal Affordances (STA) and Spatio-Temporal Affordance Maps (STAM). Using this formalism, we encode action semantics related to the environment to improve the task execution capabilities of an autonomous robot. We experimentally validate our approach to support the execution of robot tasks by showing that affordances encode accurate semantics of the environment.
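The spatio-temporal affordance idea — action opportunities that depend on both where and when — can be sketched as a grid whose cells score an action given a time. This is a hedged illustration only: the class, the Gaussian-in-time scoring rule, and all names are invented here and are not the paper's actual formalism.

```python
import math

class SpatioTemporalAffordanceMap:
    """Grid map where each cell stores, per action, a time-dependent
    affordance score in [0, 1]."""

    def __init__(self, width, height):
        self.cells = {(x, y): {} for x in range(width) for y in range(height)}

    def set_affordance(self, cell, action, peak_time, spread):
        """Illustrative temporal model: the action is most afforded in
        this cell at peak_time and decays with temporal distance."""
        self.cells[cell][action] = (peak_time, spread)

    def score(self, cell, action, t):
        """Affordance score of performing `action` in `cell` at time t."""
        if action not in self.cells.get(cell, {}):
            return 0.0
        peak, spread = self.cells[cell][action]
        return math.exp(-((t - peak) ** 2) / (2 * spread ** 2))

    def best_cell(self, action, t):
        """Where in the map is the action best afforded at time t?"""
        return max(self.cells, key=lambda c: self.score(c, action, t))
```

A planner could then bias task execution toward cells (and moments) where the relevant action is highly afforded, which is the role STAM plays for the robot behaviors described above.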


Congress of the Italian Association for Artificial Intelligence | 2015

Approaching Qualitative Spatial Reasoning About Distances and Directions in Robotics

Guglielmo Gemignani; Roberto Capobianco; Daniele Nardi

One of the long-term goals of our society is to build robots able to live side by side with humans. In order to do so, robots need to be able to reason in a qualitative way. To this end, in recent years the Artificial Intelligence research community has developed a considerable number of qualitative reasoners. The majority of such approaches, however, have been developed under the assumption that suitable representations of the world are available. In this paper, we propose a method for performing qualitative spatial reasoning in robotics on abstract representations of environments, automatically extracted from metric maps. Both the representation and the reasoner are used to ground commands vocally given by the user. The approach has been verified on a real robot interacting with several non-expert users.


IEEE-RAS International Conference on Humanoid Robots | 2016

Learning human-robot handovers through π-STAM: Policy improvement with spatio-temporal affordance maps

Francesco Riccio; Roberto Capobianco; Daniele Nardi

Human-robot handovers are characterized by high uncertainty and poor problem structure, which make them difficult tasks. While machine learning methods have shown promising results, their application to problems with large state dimensionality, such as in the case of humanoid robots, is still limited. Additionally, when using these methods during the interaction with a human operator, no guarantees can be obtained on the correct interpretation of spatial constraints (e.g., from social rules). In this paper, we present Policy Improvement with Spatio-Temporal Affordance Maps (π-STAM), a novel iterative algorithm to learn spatial affordances and generate robot behaviors. Our goal is to generate a policy that adapts to unknown action semantics by using affordances. In this way, while learning to perform a human-robot handover task, we can (1) efficiently generate good policies with few training episodes, and (2) easily encode action semantics and, if available, enforce prior knowledge. We experimentally validate our approach both in simulation and on a real NAO robot whose task consists in taking an object from the hands of a human. The obtained results show that our algorithm obtains a good policy while reducing the computational load and the duration of the learning process.


International Conference on Logic Programming | 2013

Knowledge Representation for Robots through Human-Robot Interaction

Emanuele Bastianelli; Domenico Daniele Bloisi; Roberto Capobianco; Guglielmo Gemignani; Luca Iocchi; Daniele Nardi

Collaboration


Dive into Roberto Capobianco's collaborations.

Top Co-Authors

Daniele Nardi, Sapienza University of Rome
Luca Iocchi, Sapienza University of Rome
Francesco Riccio, Sapienza University of Rome
Emanuele Bastianelli, University of Rome Tor Vergata
J. Andrew Bagnell, Carnegie Mellon University
Pedro U. Lima, Instituto Superior Técnico
Arun Venkatraman, Carnegie Mellon University