Giovanni Saponaro
Instituto Superior Técnico
Publication
Featured research published by Giovanni Saponaro.
IEEE International Conference on Autonomous Robot Systems and Competitions | 2014
Afonso Gonçalves; Giovanni Saponaro; Lorenzo Jamone; Alexandre Bernardino
Endowing artificial agents with the ability to predict the consequences of their own actions, and to plan their behaviors efficiently based on such predictions, is a fundamental challenge in both artificial intelligence and robotics. A computationally practical yet powerful way to model this knowledge, referred to as object affordances, is through probabilistic dependencies between actions, objects and effects: this allows inferences across these dependencies, such as i) predicting the effects of an action on an object, or ii) selecting the best action from a repertoire in order to obtain a desired effect on an object. We propose a probabilistic model capable of learning the mutual interaction between objects in complex manipulation tasks, where one object plays an active tool role, being grasped and used (e.g., a hammer), while another item is passively acted upon (e.g., a nail). We consider visual affordances, meaning that we do not model object labels or categories; instead, we compute a set of visual features that represent geometrical properties (e.g., convexity, roundness), which allows previously acquired knowledge to generalize to new objects. We describe an experiment in which a simulated humanoid robot learns an affordance model by autonomously exploring different actions with the objects present in a playground scenario. We report results showing that the robot is able to i) learn meaningful relationships between actions, tools, other objects and effects, and ii) exploit the acquired knowledge to make predictions and take optimal decisions.
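The abstract describes two inference directions over an action-object-effect dependency structure. A minimal, self-contained sketch of that idea (the shape categories, effect labels and probability values below are illustrative placeholders, not the paper's learned model) could look like this:

```python
# Hypothetical affordance-style inference: P(effect | action, tool_shape, object_shape).
# The conditional probability table below is made up for illustration only.
CPT = {
    ("tap",  "elongated", "round"): {"moves_far": 0.7, "moves_little": 0.2, "no_motion": 0.1},
    ("tap",  "compact",   "round"): {"moves_far": 0.3, "moves_little": 0.5, "no_motion": 0.2},
    ("push", "elongated", "round"): {"moves_far": 0.5, "moves_little": 0.4, "no_motion": 0.1},
    ("push", "compact",   "round"): {"moves_far": 0.2, "moves_little": 0.5, "no_motion": 0.3},
}

def predict_effect(action, tool_shape, object_shape):
    """Inference (i): distribution over effects of an action with a given tool and object."""
    return CPT[(action, tool_shape, object_shape)]

def select_action(desired_effect, tool_shape, object_shape, actions=("tap", "push")):
    """Inference (ii): action most likely to produce the desired effect."""
    return max(actions, key=lambda a: CPT[(a, tool_shape, object_shape)][desired_effect])

if __name__ == "__main__":
    print(predict_effect("tap", "elongated", "round"))
    print(select_action("moves_far", "compact", "round"))
```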
Collaboration Technologies and Systems | 2013
Giovanni Saponaro; Giampiero Salvi; Alexandre Bernardino
In this paper, we propose a method to recognize human body movements and combine it with the contextual knowledge of human-robot collaboration scenarios provided by an object affordances framework that associates actions with their effects and with the objects involved in them. The aim is to equip humanoid robots with action prediction capabilities, allowing them to anticipate effects as soon as a human partner starts performing a physical action, thus making human-robot interaction fast and natural. We consider simple actions that characterize a human-robot collaboration scenario with objects being manipulated on a table: inspired by automatic speech recognition techniques, we train a statistical gesture model to recognize those physical gestures in real time. Analogies and differences between the two domains are discussed, highlighting the requirements that an automatic gesture recognizer for robots must meet in order to perform robustly and in real time.
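As a rough illustration of the ASR-inspired approach, one common recipe is to train one hidden Markov model per gesture class and classify new sequences by maximum log-likelihood. The sketch below assumes the third-party hmmlearn library and placeholder feature sequences; it is not the paper's implementation:

```python
# One Gaussian HMM per gesture class, ASR-style; features might be hand position/velocity tracks.
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency

def train_gesture_models(training_data, n_states=4):
    """training_data: dict mapping gesture label -> list of (T_i, n_features) arrays."""
    models = {}
    for label, sequences in training_data.items():
        X = np.concatenate(sequences)              # stack all frames of this gesture class
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths, as hmmlearn expects
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def recognize(models, observed_sequence):
    """Classify a new feature sequence by maximum log-likelihood over the per-class HMMs."""
    return max(models, key=lambda label: models[label].score(observed_sequence))
```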
International Conference on Robotics and Automation | 2016
Alexandre Antunes; Lorenzo Jamone; Giovanni Saponaro; Alexandre Bernardino; Rodrigo Ventura
This paper addresses the problem of having a robot execute motor tasks requested by a human through spoken language. Verbal instructions typically do not have a one-to-one mapping to robot actions, for various reasons: economy of spoken language (one short instruction might correspond to a complex sequence of robot actions, and details about action execution might be omitted); grounding (some actions might need to be added or adapted due to environmental contingencies); and embodiment (a robot might have different means than a human to achieve the goals that the instruction refers to). We propose a general cognitive architecture to deal with these issues, based on three steps: i) language-based semantic reasoning on the instruction (high level), ii) formulation of goals in robot symbols and probabilistic planning to achieve them (mid level), and iii) action execution (low level). The description of the mid level is the main focus of this paper. The robot plans are adapted to the current scenario, which is perceived in real time and continuously updated, taking into consideration the robot's capabilities, modeled through the concept of affordances: this allows for flexibility and creativity in task execution. We showcase the performance of the proposed architecture in real-world experiments with the iCub humanoid robot, also in the presence of unexpected events and action failures.
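The mid level can be pictured as ranking candidate action sequences by their estimated probability of success under the affordance model of the current scene. The sketch below is a hypothetical, simplified stand-in for that step (the action names, probabilities and goal test are made up for illustration, not taken from the paper):

```python
# Toy probabilistic planner: enumerate short action sequences, score each by the product of
# per-action success probabilities (standing in for affordance queries), keep the best plan.
from itertools import product

ACTIONS = ["grasp", "pull", "push", "place"]

def affordance_success_prob(action, scene):
    """Placeholder for querying the learned affordance model with perceived object features."""
    return scene.get(action, 0.1)

def plan(goal_test, scene, max_depth=3):
    best_plan, best_prob = None, 0.0
    for depth in range(1, max_depth + 1):
        for seq in product(ACTIONS, repeat=depth):
            if not goal_test(seq):
                continue
            prob = 1.0
            for a in seq:
                prob *= affordance_success_prob(a, scene)
            if prob > best_prob:
                best_plan, best_prob = seq, prob
    return best_plan, best_prob

# Hypothetical goal: any plan that grasps the object and ends by placing it.
scene = {"grasp": 0.9, "pull": 0.6, "push": 0.8, "place": 0.95}
print(plan(lambda seq: seq[-1] == "place" and "grasp" in seq, scene))
```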
Human-Robot Interaction | 2011
Giovanni Saponaro; Alexandre Bernardino
We propose a mechanism to communicate emotions to humans by using head, torso and arm movements of a humanoid robot, without exploiting its facial features. To this end, we build a library of pre-programmed robot movements and we ask people to attribute emotional scores to these initial movements. The answers are then used to fine-tune motion parameters with an active learning approach.
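One simple way to picture the active-learning loop is uncertainty sampling: repeatedly show the movement whose emotional scores are currently most uncertain and collect another rating. The sketch below uses made-up parameter vectors and a simulated rater; it is only a schematic of the idea, not the paper's procedure:

```python
# Uncertainty-sampling loop over a library of pre-programmed movements (parameters are made up).
import random
import statistics

candidates = [(0.2, 0.3), (0.5, 0.8), (0.9, 0.4), (0.7, 0.9), (0.3, 0.6)]  # (amplitude, speed)
ratings = {c: [] for c in candidates}  # emotional scores collected so far, per movement

def ask_human(movement):
    # Stand-in for playing the movement on the robot and collecting a 1-5 rating.
    return random.randint(1, 5)

def uncertainty(movement):
    scores = ratings[movement]
    if len(scores) < 2:
        return float("inf")            # barely-queried movements are queried first
    return statistics.variance(scores)

for _ in range(10):                    # active-learning loop: query the most uncertain movement
    query = max(candidates, key=uncertainty)
    ratings[query].append(ask_human(query))

best = max(candidates, key=lambda c: statistics.mean(ratings[c]) if ratings[c] else 0)
print("movement rated closest to the target emotion:", best)
```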
Proceedings of the Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines | 2008
Giovanni Saponaro; Alexandre Bernardino
This paper describes an approach for real-time preparation of grasping tasks, based on the low-order moments of the target’s shape on a stereo pair of images acquired by an active vision head. The objective is to estimate the 3D position and orientation of an object and of the robotic hand, by using computationally fast and independent software components. These measurements are then used for the two phases of a reaching task: (i) an initial phase whereby the robot positions its hand close to the target with an appropriate hand orientation, and (ii) a final phase where a precise hand-to-target positioning is performed using Position-Based Visual Servoing methods.
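For the moment-based part of the pipeline, the centroid follows from the zeroth- and first-order moments and the in-plane orientation from the second-order central moments. A small sketch of that computation for one camera of the stereo pair, assuming OpenCV for the moment calculation (triangulation and the visual-servoing phase are omitted), might look as follows:

```python
# Centroid and orientation of a segmented target from low-order image moments (OpenCV assumed).
import math
import cv2
import numpy as np

def centroid_and_orientation(mask):
    """mask: binary uint8 image with the segmented target. Returns (cx, cy, theta)."""
    m = cv2.moments(mask, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]               # centroid from m00, m10, m01
    theta = 0.5 * math.atan2(2 * m["mu11"], m["mu20"] - m["mu02"])  # axis angle from mu20, mu11, mu02
    return cx, cy, theta

# Example with a synthetic elongated blob standing in for one camera's segmentation.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.ellipse(img, (160, 120), (60, 20), 30, 0, 360, 255, -1)
print(centroid_and_orientation(img))
```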
Grounding Language Understanding (GLU 2017), KTH Royal Institute of Technology, Stockholm, Sweden | 2017
Giovanni Saponaro; Lorenzo Jamone; Alexandre Bernardino; Giampiero Salvi
A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose goal is to enable effective teamwork between humans and robots. However, in many situations human teams are still superior to human-robot teams, primarily because human teams can easily agree on a common goal through language, and the individual members observe each other effectively, leveraging their shared motor repertoire and sensorimotor resources. This paper shows that for cognitive robots it is possible, and indeed fruitful, to combine knowledge acquired from interacting with elements of the environment (affordance exploration) with the probabilistic observation of another agent's actions. We propose a model that unites (i) learning robot affordances and word descriptions with (ii) statistical recognition of human gestures with vision sensors. We discuss theoretical motivations and possible implementations, and we show initial results which highlight that, after having acquired knowledge of its surrounding environment, a humanoid robot can generalize this knowledge to the case in which it observes another agent (a human partner) performing the same motor actions it previously executed during training.
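A hypothetical way to picture the combination is to let the gesture recognizer supply a distribution over actions and the affordance model supply effect distributions per action, then marginalize over actions. The tables and labels in the sketch below are illustrative placeholders, not the paper's learned quantities:

```python
# Combining gesture recognition with affordances:
# P(effect | gesture, object) = sum_a P(effect | a, object) * P(a | gesture).
P_ACTION_GIVEN_GESTURE = {"tap": 0.6, "grasp": 0.3, "touch": 0.1}   # e.g., from per-class HMMs

P_EFFECT = {  # affordance model: (action, object_shape) -> distribution over effects
    ("tap",   "round"): {"rolls": 0.8, "stays": 0.2},
    ("grasp", "round"): {"rolls": 0.1, "stays": 0.9},
    ("touch", "round"): {"rolls": 0.2, "stays": 0.8},
}

def predict_effect_of_observed_action(object_shape, p_action=P_ACTION_GIVEN_GESTURE):
    """Predicted effect of the human partner's action, marginalizing over recognized actions."""
    prediction = {}
    for action, p_a in p_action.items():
        for effect, p_e in P_EFFECT[(action, object_shape)].items():
            prediction[effect] = prediction.get(effect, 0.0) + p_a * p_e
    return prediction

print(predict_effect_of_observed_action("round"))
```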
Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014
Afonso Gonçalves; João Abrantes; Giovanni Saponaro; Lorenzo Jamone; Alexandre Bernardino
Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2017
Giovanni Saponaro; Pedro Vicente; Atabak Dehban; Lorenzo Jamone; Alexandre Bernardino; José Santos-Victor
Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2017
Alexandre Antunes; Giovanni Saponaro; Anthony F. Morse; Lorenzo Jamone; José Santos-Victor; Angelo Cangelosi
AIRO@AI*IA | 2015
Lorenzo Jamone; Giovanni Saponaro; Alexandre Antunes; Rodrigo Ventura; Alexandre Bernardino; José Santos-Victor