Nicole Lazzeri
University of Pisa
Publications
Featured research published by Nicole Lazzeri.
Robot and Human Interactive Communication | 2010
Daniele Mazzei; Lucia Billeci; Antonino Armato; Nicole Lazzeri; Antonio Cisternino; Giovanni Pioggia; Roberta Igliozzi; Filippo Muratori; Arti Ahluwalia; Danilo De Rossi
People with autism are known to have deficits in processing emotional states, both their own and those of others. A humanoid robot, FACE (Facial Automation for Conveying Emotions), capable of expressing and conveying emotions and empathy, has been constructed to enable autistic children and adults to better deal with emotional and expressive information. We describe the development of an adaptive therapeutic platform that integrates information from wearable sensors carried by a patient or subject as well as from sensors placed in the therapeutic environment. Through custom-developed control and data-processing algorithms, the expressions and movements of FACE are then tuned and modulated to harmonize with the feelings of the subject, as inferred from their physiological and behavioral correlates. Preliminary results demonstrating the potential of adaptive therapy are presented.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2011
Daniele Mazzei; Nicole Lazzeri; Lucia Billeci; Roberta Igliozzi; Alice Mancini; Arti Ahluwalia; Filippo Muratori; Danilo De Rossi
People with ASD (Autism Spectrum Disorders) have difficulty managing interpersonal relationships and everyday social situations. A modular platform for Human Robot Interaction and Human Machine Interaction studies has been developed to manage and analyze therapeutic sessions in which subjects are guided by a psychologist through simulated social scenarios. This innovative therapeutic approach uses a humanoid robot called FACE that is capable of expressing and conveying emotions and empathy. Using FACE as a social interlocutor, the psychologist can emulate real-life scenarios in which the emotional state of the interlocutor is adaptively adjusted through a semi-closed-loop control algorithm that takes the ASD subject's inferred "affective" state as input. Preliminary results demonstrate that the platform is well accepted by subjects with ASD and can consequently be used as a novel therapy for social skills training.
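The semi-closed-loop idea described in this abstract can be illustrated with a minimal sketch: the robot's expressed emotional intensity is adapted automatically from the subject's inferred affective state, while the psychologist retains an override (the "semi" in semi-closed loop). The function names, the arousal-based update rule, and all parameter values below are illustrative assumptions, not the platform's actual algorithm.

```python
def update_expression(current_intensity, inferred_arousal,
                      target_arousal=0.4, gain=0.5, operator_override=None):
    """Return the next expression intensity in [0, 1].

    inferred_arousal: subject's arousal estimated from physiological signals.
    operator_override: if set, the psychologist's choice wins (semi-closed loop).
    """
    if operator_override is not None:
        return max(0.0, min(1.0, operator_override))
    # Nudge the expression to reduce the gap between inferred and target arousal.
    error = target_arousal - inferred_arousal
    next_intensity = current_intensity + gain * error
    return max(0.0, min(1.0, next_intensity))

if __name__ == "__main__":
    intensity = 0.5
    # Subject is over-aroused: the controller lowers expressive intensity.
    intensity = update_expression(intensity, inferred_arousal=0.8)
    print(round(intensity, 2))  # 0.3
    # The psychologist overrides the automatic adaptation.
    print(update_expression(intensity, 0.8, operator_override=1.0))  # 1.0
```

The override path is what distinguishes this from a fully closed loop: the automatic adaptation only applies when the operator has not intervened.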
Conference on Biomimetic and Biohybrid Systems | 2013
Nicole Lazzeri; Daniele Mazzei; Abolfazl Zaraki; Danilo De Rossi
Two perspectives define a human being in the social sphere: appearance and behaviour. The aesthetic aspect is the first significant element that shapes a communication, while the behavioural aspect is a crucial factor in evaluating the ongoing interaction. In particular, we have higher expectations when interacting with anthropomorphic robots, and we tend to consider them believable if they respect human social conventions. Therefore, researchers focus both on making the embodiment of robots increasingly anthropomorphic and on giving robots realistic behaviour. This paper describes our research on making a humanoid robot interact socially with human beings in a believable way.
Frontiers in Bioengineering and Biotechnology | 2015
Nicole Lazzeri; Daniele Mazzei; Alberto Greco; Antonio Lanata; Danilo De Rossi
Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs of innate emotional cues. A human face conveys important information in social interactions and helps us better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both aesthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as being no different from them. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot were compared with the corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether there could be differences induced by interpreting robot versus human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models.
Conference on Biomimetic and Biohybrid Systems | 2014
Daniele Mazzei; Lorenzo Cominelli; Nicole Lazzeri; Abolfazl Zaraki; Danilo De Rossi
Sensing and interpreting the interlocutor’s social behaviours is a core challenge in the development of social robots. Social robots require both an innovative sensory apparatus able to perceive the “social and emotional world” in which they act and a cognitive system able to manage this incoming sensory information and plan an organized and considered response. To allow scientists to design cognitive models for this new generation of social machines, it is necessary to develop control architectures that can be easily used even by researchers without programming skills, such as psychologists and neuroscientists. In this work, an innovative hybrid deliberative/reactive cognitive architecture for controlling a social humanoid robot is presented. The design and implementation of the overall architecture take inspiration from the human nervous system; in particular, the cognitive system is based on Damasio’s thesis. The architecture has been preliminarily tested with the FACE robot. A social behaviour was modeled to make FACE able to properly follow a human subject during a basic social interaction task and to perform facial expressions as a reaction to the social context.
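The hybrid deliberative/reactive scheme described above can be sketched as two layers: fast reflex-like rules that may preempt a slower behaviour planner. The rule and behaviour names below are illustrative assumptions, not the architecture's actual components.

```python
def reactive_layer(percept):
    """Immediate reflex-like responses; returns an action or None."""
    if percept.get("sudden_sound"):
        return "orient_to_sound"
    return None

def deliberative_layer(percept):
    """Slower, model-based behaviour planning."""
    if percept.get("human_present"):
        return "follow_subject_and_express"
    return "idle"

def control_step(percept):
    # The reactive output, when present, preempts the deliberative plan.
    return reactive_layer(percept) or deliberative_layer(percept)

if __name__ == "__main__":
    print(control_step({"sudden_sound": True, "human_present": True}))  # orient_to_sound
    print(control_step({"human_present": True}))  # follow_subject_and_express
```

The priority rule in `control_step` is the essence of the hybrid design: deliberation sets the ongoing social behaviour, while reactive rules guarantee timely responses to salient stimuli.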
Conference on Biomimetic and Biohybrid Systems | 2013
Abolfazl Zaraki; Daniele Mazzei; Nicole Lazzeri; Michael Pieroni; Danilo De Rossi
A context-aware attention system is fundamental for regulating robot behaviour in social interaction, since it enables social robots to actively select the right environmental stimuli at the right time during a multiparty social interaction. This contribution presents a modular context-aware attention system that drives the robot's gaze. It is composed of two modules: the scene analyzer module manages the incoming data flow and provides a human-like understanding of the information coming from the surrounding environment; the attention module allows the robot to select the most important target in the perceived scene on the basis of a computational model. After describing the motivation, we report the proposed system and preliminary tests.
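A minimal sketch of the kind of computational model the attention module could use: each target perceived by the scene analyzer is scored by a weighted sum of social features, and the gaze is directed at the highest-scoring one. The feature names and weights are assumptions for illustration, not the paper's actual model.

```python
# Illustrative feature weights (assumed, not from the paper).
WEIGHTS = {"is_speaking": 0.5, "motion": 0.3, "proximity": 0.2}

def salience(features):
    """Weighted sum of a target's social features (each in [0, 1])."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def select_gaze_target(scene):
    """Pick the most salient target from the scene analyzer's output."""
    return max(scene, key=lambda t: salience(t["features"]))

scene = [
    {"id": "person_A", "features": {"is_speaking": 1.0, "motion": 0.2, "proximity": 0.4}},
    {"id": "person_B", "features": {"is_speaking": 0.0, "motion": 0.9, "proximity": 0.8}},
]
print(select_gaze_target(scene)["id"])  # person_A
```

Here the speaking person wins even though the other target is closer and moving more, which matches the intuition that conversational cues dominate gaze selection in a multiparty interaction.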
Multimedia, Interaction, Design and Innovation | 2013
Agata Pasikowska; Abolfazl Zaraki; Nicole Lazzeri
Computers, tablets and smartphones are tools that increasingly accompany us during everyday activities. Given the booming use of virtual reality and the wide range of people who have access to it, people are increasingly presented with an online alternative to the support of professionals, therapeutic groups organized by healthcare institutions, or significant others (such as family, friends and colleagues). This can be used as a tool for personal development and for coping with stress. Our research program includes creating a virtual reality application to sustain well-being and improve quality of life. It assumes that avatars, representations of a person in cyberspace, will provide support in the form of a virtual conversation. Dialogue with an imaginary person can serve as a supportive technique in stressful situations, much like drawing up a list of solutions, and over the long term it can open a specific path toward the desired change.
International Journal of Advanced Robotic Systems | 2018
Nicole Lazzeri; Daniele Mazzei; Maher Ben Moussa; Nadia Magnenat-Thalmann; Danilo De Rossi
Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli with associated motion and vocalization. This hypothesis was also investigated with a 3-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, dynamics and vocalization improve the capability to convey emotions.
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2012
Daniele Mazzei; Nicole Lazzeri; David Hanson; Danilo De Rossi
Privacy, Security, Risk and Trust | 2012
Daniele Mazzei; Alberto Greco; Nicole Lazzeri; Abolfazl Zaraki; Antonio Lanata; Roberta Igliozzi; Alice Mancini; Francesca Stoppa; Enzo Pasquale Scilingo; Filippo Muratori; Danilo De Rossi