Publications


Featured research published by Ilaria Gori.


IEEE Transactions on Autonomous Mental Development | 2013

The Coordinating Role of Language in Real-Time Multimodal Learning of Cooperative Tasks

Maxime Petit; Stéphane Lallée; Jean-David Boucher; Grégoire Pointeau; Pierrick Cheminade; Dimitri Ognibene; Eris Chinellato; Ugo Pattacini; Ilaria Gori; Uriel Martinez-Hernandez; Hector Barron-Gonzalez; Martin Inderbitzin; Andre L. Luvizotto; Vicky Vouloutsi; Yiannis Demiris; Giorgio Metta; Peter Ford Dominey

One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”—which defines the interlaced actions of the two cooperating agents—in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.


International Conference on Robotics and Automation | 2014

Three-Finger Precision Grasp on Incomplete 3D Point Clouds

Ilaria Gori; Ugo Pattacini; Vadim Tikhanoff; Giorgio Metta

We present a novel method for three-finger precision grasp and its implementation in a complete grasping tool-chain. We start from binocular vision to recover the partial 3D structure of unknown objects. We then process the incomplete 3D point clouds, searching for good triplets according to a function that accounts for both the feasibility and the stability of the solution. In particular, while stability is determined using the classical force-closure approach, feasibility is evaluated according to a new measure that includes information about the possible configuration shapes of the hand as well as the hand's inverse kinematics. Finally, we extensively assess the proposed method using the stereo vision and the kinematics of the iCub robot.
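
The triplet search described above can be sketched in a few lines. The scoring below is a deliberately simplified stand-in: a balanced-normals residual as a crude proxy for the paper's force-closure stability test, and a hand-span check as a proxy for its kinematics-aware feasibility measure; the `max_span` value is an illustrative assumption.

```python
import itertools
import numpy as np

def score_triplet(points, normals, max_span=0.08):
    """Score a candidate three-finger contact triplet (simplified proxy).

    Stability: inward contact normals should roughly cancel out
    (a crude stand-in for a force-closure test).
    Feasibility: fingertip spread must fit within the hand span.
    Returns -inf for infeasible triplets; higher scores are better.
    """
    p = np.asarray(points, dtype=float)
    n = np.asarray(normals, dtype=float)
    # Feasibility: every pairwise fingertip distance within the hand span.
    dists = [np.linalg.norm(p[i] - p[j])
             for i, j in itertools.combinations(range(3), 2)]
    if max(dists) > max_span:
        return float("-inf")
    # Stability: residual of the summed unit normals (0 = perfectly balanced).
    residual = np.linalg.norm(n.sum(axis=0))
    return -residual

def best_triplet(cloud, cloud_normals):
    """Exhaustively search a small cloud for the best-scoring triplet."""
    best, best_score = None, float("-inf")
    for idx in itertools.combinations(range(len(cloud)), 3):
        s = score_triplet(cloud[list(idx)], cloud_normals[list(idx)])
        if s > best_score:
            best, best_score = idx, s
    return best, best_score
```

On a real partial point cloud the exhaustive search would be pruned (e.g. by sampling), but the structure of the ranking is the same.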


Intelligent Robots and Systems | 2013

Cooperative human robot interaction systems: IV. Communication of shared plans with Naïve humans using gaze and speech

Stéphane Lallée; Katharina Hamann; Jasmin Steinwender; Felix Warneken; Uriel Martienz; Hector Barron-Gonzales; Ugo Pattacini; Ilaria Gori; Maxime Petit; Giorgio Metta; Paul F. M. J. Verschure; Peter Ford Dominey

Cooperation is at the core of human social life. In this context, two major challenges face research on human-robot interaction: the first is to understand the underlying structure of cooperation, and the second is to build, based on this understanding, artificial agents that can successfully and safely interact with humans. Here we take a psychologically grounded and human-centered approach that addresses these two challenges. We test the hypothesis that optimal cooperation between a naïve human and a robot requires that the robot can acquire and execute a joint plan, and that it communicates this joint plan through ecologically valid modalities including spoken language, gesture and gaze. We developed a cognitive system that comprises the human-like control of social actions, the ability to acquire and express shared plans and a spoken language stage. In order to test the psychological validity of our approach we tested 12 naïve subjects in a cooperative task with the robot. We experimentally manipulated the presence of a joint plan (vs. a solo plan), the use of task-oriented gaze and gestures, and the use of language accompanying the unfolding plan. The quality of cooperation was analyzed in terms of proper turn taking, collisions and cognitive errors. Results showed that while successful turn taking could take place in the absence of the explicit use of a joint plan, its presence yielded significantly greater success. One advantage of the solo plan was that the robot would always be ready to generate actions, and could thus adapt if the human intervened at the wrong time, whereas in the joint plan the robot expected the human to take his/her turn. Interestingly, when the robot represented the action as involving a joint plan, gaze provided a highly potent nonverbal cue that facilitated successful collaboration and reduced errors in the absence of verbal communication. These results support the cooperative stance in human social cognition, and suggest that cooperative robots should employ joint plans and fully communicate them in order to sustain effective collaboration, while being ready to adapt if the human makes a midstream mistake.


Iberian Conference on Pattern Recognition and Image Analysis | 2013

One-Shot Learning for Real-Time Action Recognition

Sean Ryan Fanello; Ilaria Gori; Giorgio Metta; Francesca Odone

The goal of the paper is to develop a one-shot real-time learning and recognition system for 3D actions. We use RGBD images, combine motion and appearance cues, and map them into a new overcomplete space. The proposed method relies on descriptors based on 3D Histogram of Flow (3DHOF) and on Global Histogram of Oriented Gradient (GHOG); adaptive sparse coding (SC) is further applied to capture high-level patterns. We add effective on-line video segmentation and finally the recognition of actions through linear SVMs. The main contribution of the paper is a real-time system for one-shot action modeling; moreover we highlight the effectiveness of sparse coding techniques to represent 3D actions. We obtain very good results on the ChaLearn Gesture Dataset and with a Kinect sensor.
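
As a rough illustration of the 3DHOF descriptor mentioned above, the sketch below bins 3D flow vectors by direction into an L1-normalised histogram. For brevity it bins only the azimuth angle of each vector's XY projection, whereas a full 3DHOF would also bin elevation; the bin count is an illustrative assumption.

```python
import numpy as np

def hof_3d(flow, n_bins=8):
    """Simplified 3D Histogram of Flow sketch.

    Each flow vector votes into one of n_bins direction bins on the
    azimuth of its XY projection; the histogram is L1-normalised so it
    is invariant to the number of moving points in the frame.
    """
    flow = np.asarray(flow, dtype=float)
    az = np.arctan2(flow[:, 1], flow[:, 0])               # azimuth in [-pi, pi]
    bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

A per-frame histogram like this is what a sparse-coding stage would then re-encode to capture higher-level patterns.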


IEEE-RAS International Conference on Humanoid Robots | 2014

3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots

Sean Ryan Fanello; Ugo Pattacini; Ilaria Gori; Vadim Tikhanoff; Marco Randazzo; Alessandro Roncone; Francesca Odone; Giorgio Metta

This paper deals with the problem of 3D stereo estimation and eye-hand calibration in humanoid robots. We first show how to implement a complete 3D stereo vision pipeline, enabling online and real-time eye calibration. We then introduce a new formulation for the problem of eye-hand coordination. We developed a fully automated procedure that does not require human supervision. The end-effector of the humanoid robot is automatically detected in the stereo images, providing large amounts of training data for learning the vision-to-kinematics mapping. We report exhaustive experiments using different machine learning techniques; we show that a mixture of linear transformations can achieve the highest accuracy in the shortest amount of time, while guaranteeing real-time performance. We demonstrate the application of the proposed system in two typical robotic scenarios: (1) object grasping and tool use; (2) 3D scene reconstruction. The platform of choice is the iCub humanoid robot.
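
The "mixture of linear transformations" idea can be illustrated with a toy fit: cluster the input space, then fit one least-squares affine map per cluster and predict with the map of the nearest cluster. This is an assumption-laden sketch, not the authors' estimator; the initialisation and cluster count are arbitrary choices.

```python
import numpy as np

def fit_mixture_of_linear_maps(X, Y, k=2, iters=20):
    """Fit k local affine maps X -> Y (illustrative sketch).

    Inputs are clustered with a few Lloyd iterations after a
    farthest-point initialisation; each cluster then gets its own
    least-squares affine map.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    # Farthest-point initialisation keeps the initial centers spread out.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.stack(centers)
    for _ in range(iters):  # a few rounds of Lloyd's algorithm
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # One affine map per cluster, fitted in homogeneous coordinates.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    maps = [np.linalg.lstsq(Xh[labels == j], Y[labels == j], rcond=None)[0]
            for j in range(k)]
    return centers, maps

def predict(x, centers, maps):
    """Apply the affine map of the cluster whose center is nearest to x."""
    x = np.asarray(x, float)
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))
    return np.append(x, 1.0) @ maps[j]
```

Piecewise-affine fits like this are fast to train and evaluate, which is consistent with the paper's point about accuracy per unit of training time.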


British Machine Vision Conference | 2014

Online Action Recognition via Nonparametric Incremental Learning.

Rocco De Rosa; Nicolò Cesa-Bianchi; Ilaria Gori; Fabio Cuzzolin

We introduce an online action recognition system that can be combined with any set of frame-by-frame feature descriptors. Our system covers the frame feature space with classifiers whose distribution adapts to the hardness of locally approximating the Bayes optimal classifier. An efficient nearest-neighbour search is used to find and combine the local classifiers that are closest to the frames of a new video to be classified. The advantages of our approach are: incremental training, frame-by-frame real-time prediction, nonparametric predictive modelling, video segmentation for continuous action recognition, no need to trim videos to equal lengths, and only one tuning parameter (which, for large datasets, can be safely set to the diameter of the feature space). Experiments on standard benchmarks show that our system is competitive with state-of-the-art nonincremental and incremental baselines.

Keywords: action recognition, incremental learning, continuous action recognition, nonparametric model, real time, multivariate time series classification, temporal classification
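
The nonparametric, incremental idea above can be caricatured as a nearest-neighbour frame classifier: labelled frame features are stored as they arrive, and a new video is classified by voting over the nearest stored frame for each of its frames. The class below is an illustrative sketch, far simpler than the adaptive local classifiers in the paper.

```python
import numpy as np
from collections import Counter

class IncrementalFrameKNN:
    """Incremental frame-level action classifier (simplified sketch).

    Training never revisits old data: each labelled video just appends
    its frame features to the model, so learning is online by design.
    """
    def __init__(self):
        self.feats, self.labels = [], []

    def partial_fit(self, frames, label):
        """Add every frame of a labelled training video to the model."""
        for f in frames:
            self.feats.append(np.asarray(f, float))
            self.labels.append(label)

    def predict(self, frames):
        """Majority vote of the nearest stored frame for each query frame."""
        bank = np.stack(self.feats)
        votes = Counter()
        for f in frames:
            i = int(np.argmin(((bank - np.asarray(f, float)) ** 2).sum(-1)))
            votes[self.labels[i]] += 1
        return votes.most_common(1)[0][0]
```

Because prediction works frame by frame, the same vote can be read out mid-video, which is what enables continuous recognition without trimming videos to equal lengths.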


International Conference on Advanced Robotics | 2013

Ranking the good points: A comprehensive method for humanoid robots to grasp unknown objects

Ilaria Gori; Ugo Pattacini; Vadim Tikhanoff; Giorgio Metta

We propose a grasping pipeline to deal with unknown objects in the real world. We focus on power grasp, which is characterized by large areas of contact between the object and the surfaces of the palm and fingers. Our method seeks object regions that match the curvature of the robot's palm. The entire procedure relies on binocular vision, which provides a 3D point cloud of the visible part of the object. The obtained point cloud is segmented into smooth surfaces. A score function measures the quality of the graspable points on the basis of the surface they belong to. A component of the score function is learned from experience and is used to map the curvature of the object surfaces to the curvature of the robot's hand. The user can further provide top-down information on the preferred grasping regions. We guarantee the feasibility of a chosen hand configuration by measuring its manipulability. We prove the effectiveness of the proposed approach by tasking a humanoid robot to grasp a number of unknown real objects.
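
A toy version of such a curvature-based score function might look as follows. The exponential similarity and the optional `preference` weights are illustrative assumptions, standing in for the learned curvature mapping and the user's top-down input described in the paper.

```python
import numpy as np

def rank_grasp_points(curvatures, palm_curvature, preference=None):
    """Rank candidate grasp points (illustrative sketch).

    Each point is scored by how closely its local surface curvature
    matches the palm curvature; an optional per-point preference weight
    (e.g. from the user) multiplies the score.
    """
    c = np.asarray(curvatures, float)
    score = np.exp(-np.abs(c - palm_curvature))   # 1.0 at a perfect match
    if preference is not None:
        score = score * np.asarray(preference, float)
    return np.argsort(-score)  # point indices, best first
```

In the full pipeline each candidate would additionally be filtered by the manipulability of the corresponding hand configuration before being executed.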


IEEE-RAS International Conference on Humanoid Robots | 2012

DForC: A real-time method for reaching, tracking and obstacle avoidance in humanoid robots

Ilaria Gori; Ugo Pattacini; Francesco Nori; Giorgio Metta; Giulio Sandini

We present the Dynamic Force Field Controller (DForC), a reliable and effective framework in the context of humanoid robotics for real-time reaching and tracking in the presence of obstacles. It is inspired by well-established work based on artificial potential fields, providing a robust basis for sidestepping a number of issues related to the inverse kinematics of complex manipulators. DForC is composed of two layers organized in descending order of abstraction: (1) at the highest level, potential fields are employed to outline on the fly collision-free trajectories that drive the robot end-effector toward fixed or moving targets while accounting for obstacles; (2) at the bottom level, an optimization algorithm is exploited in place of traditional techniques that resort to the Transposed or Pseudo-Inverse Jacobian, in order to deal with constraints specified in joint space and additional conditions related to the robot structure. As demonstrated by experiments conducted on the iCub robot, our method proves particularly flexible with respect to environmental changes, allowing for safe tracking and generating reliable paths in practically every situation.
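
The top layer builds on classical artificial potential fields, whose textbook attractive/repulsive formulation can be sketched as below. This is the generic potential-field velocity command, not DForC's exact control law; the gains and influence radius are illustrative.

```python
import numpy as np

def potential_field_velocity(x, target, obstacles, k_att=1.0, k_rep=0.01, rho0=0.2):
    """End-effector velocity from an artificial potential field (sketch).

    The attractive term pulls toward the target; each obstacle closer
    than the influence radius rho0 adds a repulsive term pushing away.
    """
    x, target = np.asarray(x, float), np.asarray(target, float)
    v = -k_att * (x - target)  # negative gradient of 0.5*k_att*||x - target||^2
    for o in obstacles:
        d = x - np.asarray(o, float)
        rho = np.linalg.norm(d)
        if 0 < rho < rho0:
            # Classic repulsive gradient: grows without bound near the obstacle.
            v += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
    return v
```

Integrating this velocity at each control step traces a collision-free trajectory on the fly; the lower DForC layer would then map it to joint motion under the robot's constraints.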


2013 IEEE Workshop on Robot Vision (WORV) | 2013

A compositional approach for 3D arm-hand action recognition

Ilaria Gori; Sean Ryan Fanello; Francesca Odone; Giorgio Metta

In this paper we propose a fast and reliable vision-based framework for 3D arm-hand action modelling, learning and recognition in human-robot interaction scenarios. The architecture consists of a compositional model that divides the arm-hand action recognition problem into three levels. The bottom level is based on a simple but sufficiently accurate algorithm for the computation of the scene flow. The middle level serves to classify action primitives through descriptors obtained from 3D Histogram of Flow (3D-HOF); we further apply a sparse coding (SC) algorithm to deal with noise. Action Primitives are then modelled and classified by linear Support Vector Machines (SVMs), and we propose an on-line algorithm to cope with the real-time recognition of primitive sequences. The top level system synthesises combinations of primitives by means of a syntactic approach. In summary the main contribution of the paper is an incremental method for 3D arm-hand behaviour modelling and recognition, fully implemented and tested on the iCub robot, allowing it to learn new actions after a single demonstration.
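
The top-level syntactic composition can be illustrated with a toy matcher: each composite action is defined as an ordered sequence of primitives, and it is recognised when that sequence appears, in order, in the stream of classified primitives. The grammar and action names below are made up for illustration.

```python
def is_subsequence(pattern, stream):
    """True if pattern occurs in stream in order (not necessarily contiguously)."""
    it = iter(stream)
    return all(p in it for p in pattern)  # 'in' consumes the iterator in order

def match_composites(primitive_stream, grammar):
    """Return every composite action whose defining primitive sequence
    appears as an ordered subsequence of the recognised primitive stream.

    grammar: dict mapping action name -> list of primitive labels.
    """
    return [name for name, pattern in grammar.items()
            if is_subsequence(pattern, primitive_stream)]
```

Allowing non-contiguous matches makes the matcher tolerant to spurious primitives interleaved by the on-line classifier.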


IEEE-RAS International Conference on Humanoid Robots | 2012

All gestures you can: A memory game against a humanoid robot

Ilaria Gori; Sean Ryan Fanello; Giorgio Metta; Francesca Odone

We address the problem of real-time gesture recognition, and we prove that our system can be used in real scenarios by presenting an original memory game; the object of the game is to perform and remember the longest possible sequence of gestures. We explore the human-robot interaction field, letting the player confront a humanoid robot, iCub. Our main contribution is two-fold: on one hand, we present a robust and real-time gesture recognition system; on the other hand, we place the presented system in a real scenario, where its reliability and effectiveness are thoroughly stressed. This game ranked 2nd at the ChaLearn Kinect Demonstration Competition.
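
The game logic itself is simple to sketch, independently of the recognition system: on each turn the player must repeat the whole gesture sequence so far and then extend it by one gesture. The function below is a minimal sketch of those rules, not code from the paper.

```python
def play_round(sequence, attempt, new_gesture):
    """One turn of the gesture memory game (illustrative sketch).

    sequence: the gesture sequence built up so far.
    attempt: the gestures the current player actually performed.
    Returns the extended sequence, or None if the repetition failed
    (i.e. the current player loses).
    """
    if attempt != sequence:
        return None
    return sequence + [new_gesture]
```

In the demonstration, `attempt` would come from the gesture recogniser, and the human and the iCub would call `play_round` on alternating turns.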

Collaboration

Top co-authors of Ilaria Gori:

Giorgio Metta (Istituto Italiano di Tecnologia)
Sean Ryan Fanello (Istituto Italiano di Tecnologia)
Ugo Pattacini (Istituto Italiano di Tecnologia)
Vadim Tikhanoff (Istituto Italiano di Tecnologia)
Fabio Cuzzolin (Oxford Brookes University)
Fiora Pirri (Sapienza University of Rome)
A. Carrano (Sapienza University of Rome)