Publication


Featured research published by Serena Ivaldi.


International Journal of Social Robotics | 2015

Evaluating the Engagement with Social Robots

Salvatore Maria Anzalone; Sofiane Boucenna; Serena Ivaldi; Mohamed Chetouani

To interact and cooperate with humans in their daily-life activities, robots should exhibit human-like “intelligence”. This skill will substantially emerge from the interconnection of all the algorithms used to ensure cognitive and interaction capabilities. While new robotics technologies allow us to extend such abilities, their evaluation for social interaction is still challenging. The quality of a human–robot interaction cannot be reduced to the evaluation of the employed algorithms: we should integrate the engagement information that naturally arises during interaction in response to the robot’s behaviors. In this paper we present a practical approach to evaluating the engagement aroused during interactions between humans and social robots. We introduce a set of metrics useful in direct, face-to-face scenarios, based on behavior analysis of the human partners, and show how such metrics can be used to assess how the robot is perceived by humans and how this perception changes according to the behaviors shown by the social robot. We discuss experimental results obtained in two human–robot interaction studies, with the robots Nao and iCub respectively.
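Behavior-based engagement metrics of this kind can be illustrated with a small sketch that computes two simple cues from annotated interaction logs: the fraction of time the human gazes at the robot, and the mean delay of human responses to robot events. The interval format and function names below are invented for illustration and are not the paper's metrics.

```python
# Two toy engagement-style metrics from annotated interaction logs.
# Interval format and names are illustrative, not the paper's metrics.

def gaze_ratio(gaze_intervals, total_time):
    """Fraction of the interaction the human spends gazing at the robot.

    gaze_intervals: list of (start, end) times in seconds.
    """
    looked = sum(end - start for start, end in gaze_intervals)
    return looked / total_time

def mean_response_delay(robot_event_times, human_response_times):
    """Mean delay between each robot event and the next human response."""
    delays = []
    for t in robot_event_times:
        later = [r for r in human_response_times if r >= t]
        if later:
            delays.append(min(later) - t)
    return sum(delays) / len(delays) if delays else float("nan")
```

Such cue-level statistics are what allow perception of the robot to be compared across the different robot behaviors tested in the studies.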


IEEE-RAS International Conference on Humanoid Robots | 2013

Learning compact parameterized skills with a single regression

Freek Stulp; Gennaro Raiola; Antoine Hoarau; Serena Ivaldi; Olivier Sigaud

One of the long-term challenges of programming by demonstration is achieving generality, i.e. automatically adapting the reproduced behavior to novel situations. A common approach for achieving generality is to learn parameterizable skills from multiple demonstrations for different situations. In this paper, we generalize recent approaches on learning parameterizable skills based on dynamical movement primitives (DMPs), such that task parameters are also passed as inputs to the function approximator of the DMP. This leads to a more general, flexible, and compact representation of parameterizable skills, as demonstrated by our empirical evaluation on the iCub and Meka humanoid robots.
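The core idea, passing task parameters as extra inputs to the DMP's function approximator so a single regression covers all situations, can be sketched as a toy with radial-basis features over the phase and ridge regression. The feature construction, names, and hyperparameters here are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Toy task-parameterized skill: one ridge regression maps phase-based
# RBF features, plus the same features multiplied by the task parameter,
# to the DMP forcing term. All names and hyperparameters are invented.

def rbf_features(s, centers, width):
    """Gaussian radial-basis features over the DMP phase variable s."""
    s = np.atleast_1d(s)
    return np.exp(-width * (s[:, None] - centers[None, :]) ** 2)

def fit_parameterized_skill(phases, task_params, forcing_targets,
                            n_basis=10, width=50.0, reg=1e-6):
    """Single regression over all demonstrations and task parameters."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = rbf_features(phases, centers, width)            # (N, n_basis)
    # Concatenating phi and phi * task_param lets the forcing term
    # depend (here, linearly) on the task parameter.
    X = np.hstack([phi, phi * task_params[:, None]])      # (N, 2*n_basis)
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]),
                        X.T @ forcing_targets)
    return centers, width, w

def predict_forcing(s, task_param, centers, width, w):
    phi = rbf_features(s, centers, width)
    X = np.hstack([phi, phi * task_param])
    return X @ w
```

Because the task parameter is just another regression input, one compact model replaces a separate DMP per demonstrated situation.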


Computers in Human Behavior | 2016

Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers

Ilaria Gaudiello; Elisabetta Zibetti; Sébastien Lefort; Mohamed Chetouani; Serena Ivaldi

To investigate the functional and social acceptance of a humanoid robot, we carried out an experimental study with 56 adult participants and the iCub robot. Trust in the robot has been considered as a main indicator of acceptance in decision-making tasks characterized by perceptual uncertainty (e.g., evaluating the weight of two objects) and socio-cognitive uncertainty (e.g., evaluating which is the most suitable item in a specific context), and measured by the participants' conformation to the iCub's answers to specific questions. In particular, we were interested in understanding whether specific (i) user-related features (i.e., desire for control), (ii) robot-related features (i.e., attitude towards the social influence of robots), and (iii) context-related features (i.e., collaborative vs. competitive scenario) may influence trust towards the iCub robot. We found that participants conformed more to the iCub's answers when their decisions were about functional issues than when they were about social issues. Moreover, the few participants conforming to the iCub's answers for social issues also conformed less for functional issues. Trust in the robot's functional savvy thus does not seem to be a prerequisite for trust in its social savvy. Finally, desire for control, attitude towards the social influence of robots, and type of interaction scenario did not influence trust in iCub. Results are discussed in relation to the methodology of HRI research.


IEEE-RAS International Conference on Humanoid Robots | 2014

Tools for simulating humanoid robot dynamics: A survey based on user feedback

Serena Ivaldi; Jan Peters; Vincent Padois; Francesco Nori

The number of tools for dynamics simulation has grown substantially in the last few years. Humanoid robots, in particular, make extensive use of such tools for a variety of applications, from simulating contacts to planning complex motions. It is necessary for the humanoid robotics community to have a systematic evaluation to assist in choosing which of the available tools is best for their research. This paper surveys the state of the art in dynamics simulation and reports on the analysis of an online survey about the use of dynamics simulation in the robotics research community. The major requirements of robotics researchers are better physics engines and open-source software. Despite the numerous tools, there is no general-purpose simulator that dominates the others in terms of performance or application. However, for humanoid robotics, Gazebo emerges as the best choice among the open-source projects, while V-Rep is the preferred commercial simulator. The survey report has been instrumental in choosing Gazebo as the base for the new simulator for the iCub humanoid robot.


International Journal of Social Robotics | 2017

Towards engagement models that consider individual factors in HRI: on the relation of extroversion and negative attitude towards robots to gaze and speech during a human-robot assembly task

Serena Ivaldi; Sébastien Lefort; Jan Peters; Mohamed Chetouani; Joelle Provasi; Elisabetta Zibetti

Estimating engagement is critical for human–robot interaction. Engagement measures typically rely on the dynamics of the social signals exchanged by the partners, especially speech and gaze. However, the dynamics of these signals are likely to be influenced by individual and social factors, such as personality traits, as it is well documented that these critically influence how two humans interact with each other. Here, we assess the influence of two factors, namely extroversion and negative attitude toward robots, on speech and gaze during a cooperative task, where a human must physically manipulate a robot to assemble an object. We evaluate whether the scores of extroversion and negative attitude towards robots co-vary with the duration and frequency of gaze and speech cues. The experiments were carried out with the humanoid robot iCub and N = 56 adult participants. We found that the more extroverted people are, the more and the longer they tend to talk with the robot; and the more negative their attitude towards robots, the less they look at the robot's face and the more they look at the robot's hands, where the assembly and the contacts occur. Our results confirm and provide evidence that the engagement models classically used in human–robot interaction should take into account attitudes and personality traits.
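The co-variation analysis described above amounts to correlating questionnaire scores with behavioral cues. A minimal sketch under that reading (with real data one would use the N = 56 per-participant measurements and also test significance, omitted here):

```python
import numpy as np

# Pearson correlation between a questionnaire score (e.g., extroversion)
# and a behavioral cue (e.g., total speech duration per participant).

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```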


International Conference on Robotics and Automation | 2015

Learning inverse dynamics models with contacts

Roberto Calandra; Serena Ivaldi; Marc Peter Deisenroth; Elmar Rueckert; Jan Peters

In whole-body control, joint torques and external forces need to be estimated accurately. In principle, this can be done through pervasive joint-torque sensing and accurate system identification. However, such sensors are expensive and may not be integrated in all links. Moreover, the exact position of the contact must be known for a precise estimation. If contacts occur on the whole body, tactile sensors can estimate the contact location, but this requires a kinematic spatial calibration, which is prone to errors. Accumulating errors may have dramatic effects on the system identification. As an alternative to classical model-based approaches, we propose a data-driven mixture-of-experts learning approach using Gaussian processes. This model predicts joint torques directly from raw data of tactile and force/torque sensors. We compare our approach to an analytic model-based approach on real-world data recorded from the humanoid iCub. We show that the learned model accurately predicts the joint torques resulting from contact forces, is robust to changes in the environment, and outperforms existing dynamic models that use force/torque sensor data.
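A minimal sketch of the mixture-of-experts idea, assuming a hard gate on a binary contact flag and a basic squared-exponential Gaussian-process regressor per expert (the gating rule and hyperparameters are illustrative; the paper's model is richer):

```python
import numpy as np

# Hard-gated mixture of GP experts: each sample is routed to an expert
# by a binary contact flag; each expert is a squared-exponential GP.
# Hyperparameters are illustrative and untuned.

def sqexp(A, B, length=0.5):
    """Squared-exponential kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length ** 2)

class GPExpert:
    def __init__(self, noise=1e-3):
        self.noise = noise

    def fit(self, X, y):
        self.X = X
        K = sqexp(X, X) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)
        return self

    def predict(self, Xs):
        return sqexp(Xs, self.X) @ self.alpha

def predict_torque(experts, X, contact):
    """Route each sample to the expert for its contact state."""
    out = np.empty(len(X))
    for c, gp in experts.items():
        mask = contact == c
        if mask.any():
            out[mask] = gp.predict(X[mask])
    return out
```

The appeal of the data-driven route is visible even here: the experts learn the sensor-to-torque map directly, with no link inertial parameters or contact position required.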


Autonomous Robots | 2016

From passive to interactive object learning and recognition through self-identification on a humanoid robot

Natalia Lyubova; Serena Ivaldi; David Filliat

Service robots, working in evolving human environments, need the ability to continuously learn to recognize new objects. Ideally, they should act as humans do, by observing their environment and interacting with objects, without specific supervision. Taking inspiration from infant development, we propose a developmental approach that enables a robot to progressively learn object appearances in a social environment: first, only through observation, then through active object manipulation. We focus on incremental, continuous, and unsupervised learning that does not require prior knowledge about the environment or the robot. In the first phase, we analyse the visual space and detect proto-objects as units of attention that are learned and recognized as possible physical entities. The appearance of each entity is represented as a multi-view model based on complementary visual features. In the second phase, entities are classified into three categories: parts of the body of the robot, parts of a human partner, and manipulable objects. The categorization approach is based on mutual information between the visual and proprioceptive data, and on the motion behaviour of entities. The ability to categorize entities is then used during interactive object exploration to improve the previously acquired object models. The proposed system is implemented and evaluated with an iCub and a Meka robot learning 20 objects. The system is able to recognize objects with 88.5 % success and create coherent representation models that are further improved by interactive learning.
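The mutual-information criterion for self-identification can be illustrated with a histogram-based estimator: an entity whose visual motion shares high mutual information with the robot's proprioception is likely part of the robot's own body. The bin count, threshold, and signals below are invented for the sketch.

```python
import numpy as np

# Histogram estimate of the mutual information between an entity's
# visual motion signal and the robot's proprioception; high MI suggests
# the entity belongs to the robot's body. Bins/threshold are invented.

def mutual_information(x, y, bins=8):
    """MI (in nats) of two 1-D signals via a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def categorize(visual_motion, proprio, threshold=0.5):
    mi = mutual_information(visual_motion, proprio)
    return ("robot-body" if mi > threshold else "other"), mi
```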


International Conference on Robotics and Automation | 2016

Learning soft task priorities for control of redundant robots

Valerio Modugno; Gerard Neumann; Elmar Rueckert; Giuseppe Oriolo; Jan Peters; Serena Ivaldi

One of the key problems in planning and control of redundant robots is the fast generation of controls when multiple tasks and constraints need to be satisfied. In the literature, this problem is classically solved by multi-task prioritized approaches, where the priority of each task is determined by a weight function describing the task's strict or soft priority. In this paper, we propose to leverage machine learning techniques to learn the temporal profiles of the task priorities, represented as parametrized weight functions: we automatically determine their parameters through a stochastic optimization procedure. We show the effectiveness of the proposed method on a simulated 7-DOF Kuka LWR and both a simulated and a real Kinova Jaco arm. We compare the performance of our approach to a state-of-the-art method based on soft task prioritization, where the task weights are typically hand-tuned.
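Parameterizing time-varying task weights and tuning them by stochastic optimization can be illustrated on a toy 1-D problem with two conflicting tasks. The sigmoid profile, dynamics, and accept-if-better random search below are stand-ins for the paper's representations and optimizer.

```python
import numpy as np

# Toy soft-priority learning: a 1-D point should track target 2 early
# and target 1 late; a sigmoid weight profile blends the two controllers
# and its parameters are tuned by accept-if-better stochastic search.
# Dynamics, targets, and the profile family are all invented.

def weight_profile(theta, t):
    """Soft priority in [0, 1]; theta = (slope, switch time)."""
    a, b = theta
    return 1.0 / (1.0 + np.exp(-a * (t - b)))

def rollout_cost(theta, T=50, dt=0.05):
    x, cost = 0.0, 0.0
    for k in range(T):
        w = weight_profile(theta, k * dt)
        u1, u2 = (1.0 - x), (2.0 - x)          # task-1 / task-2 controllers
        x += dt * (w * u1 + (1.0 - w) * u2)    # blended control
        target = 2.0 if k < T // 2 else 1.0
        cost += (x - target) ** 2
    return cost

def stochastic_search(n_iter=300, sigma=0.5, seed=0):
    """Black-box optimization of the weight-profile parameters."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    best = rollout_cost(theta)
    for _ in range(n_iter):
        cand = theta + sigma * rng.normal(size=2)
        c = rollout_cost(cand)
        if c < best:
            theta, best = cand, c
    return theta, best
```

Because the cost is evaluated only through rollouts, the same loop works for any parameterized profile, which is the point of learning soft priorities rather than hand-tuning them.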


International Conference on Development and Learning | 2014

Learning a repertoire of actions with deep neural networks

Alain Droniou; Serena Ivaldi; Olivier Sigaud

We address the problem of endowing a robot with the capability to learn a repertoire of actions using as little prior knowledge as possible. Taking a handwriting task as an example, we apply the deep learning paradigm to build a network which uses a high-level representation of digits to generate sequences of commands, directly fed to a low-level control loop. Discrete variables are used to discriminate different digits, while continuous variables parametrize each digit. We show that the proposed network is able to generalize learned actions to new contexts. The network is tested on trajectories recorded on the iCub humanoid robot.


Robotics and Autonomous Systems | 2014

Grasping objects localized from uncertain point cloud data

Jean-Philippe Saut; Serena Ivaldi; Anis Sahbani; Philippe Bidaud

Robotic grasping is very sensitive to the accuracy of the pose estimation of the object to grasp. Even a small error in the estimated pose may cause the planned grasp to fail. Several methods for robust grasp planning exploit the object geometry or tactile-sensor feedback. However, object pose estimation introduces specific uncertainties that can also be exploited to choose more robust grasps. We present a grasp planning method that explicitly considers the uncertainties on the visually estimated object pose. We assume a known shape (e.g., primitive shape or triangle mesh), observed as a (possibly sparse) point cloud. The measured points are usually not uniformly distributed over the surface, as the object is seen from a particular viewpoint; additionally, this non-uniformity can be the result of heterogeneous textures over the object surface when using stereo-vision algorithms based on robust feature-point matching. Consequently, the pose estimation may be more accurate in some directions and contain unavoidable ambiguities. The proposed grasp planner is based on a particle filter to estimate the object probability distribution as a discrete set. We show that, for grasping, some ambiguities are less unfavorable, so the distribution can be used to select robust grasps. Experiments are presented with the humanoid robot iCub and its stereo cameras.
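Using a discrete pose distribution to prefer robust grasps can be sketched as follows: score each candidate grasp by its expected success over the particle set, assuming (purely for illustration) that a grasp fails when the pose error along its sensitive axis exceeds a tolerance. Axes, tolerance, and the success model are invented.

```python
import numpy as np

# Toy robust grasp selection: the pose posterior is a weighted particle
# set; each candidate grasp is scored by expected success, where a grasp
# fails when the pose error along its sensitive axis exceeds a tolerance.

def expected_success(particles, weights, sensitive_axis, tol=0.02):
    """Expected success of one grasp over the pose particle set."""
    mean = weights @ particles
    err_along = (particles - mean) @ sensitive_axis
    return float(weights @ (np.abs(err_along) < tol))

def select_grasp(particles, weights, candidate_axes):
    """Return the index of the most robust candidate and all scores."""
    scores = [expected_success(particles, weights, ax)
              for ax in candidate_axes]
    return int(np.argmax(scores)), scores
```

In this toy, a grasp whose sensitive direction is aligned with the low-uncertainty axis of the particle cloud wins, mirroring the idea that some pose ambiguities are less unfavorable for grasping than others.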

Collaboration


Dive into Serena Ivaldi's collaborations.

Top Co-Authors

Valerio Modugno (Centre national de la recherche scientifique)
Francesco Nori (Istituto Italiano di Tecnologia)
Oriane Dermy (Centre national de la recherche scientifique)
Adrien Malaisé (Centre national de la recherche scientifique)
Elmar Rueckert (Technische Universität Darmstadt)