Abolfazl Zaraki
University of Pisa
Publication
Featured research published by Abolfazl Zaraki.
IEEE Transactions on Human-Machine Systems | 2014
Abolfazl Zaraki; Daniele Mazzei; Manuel Giuliani; Danilo De Rossi
This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been proven to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to implement these features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study: we showed participants videos of two-person interactions and tracked their gaze behavior. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that the system replicated human gaze behavior 89% of the time.
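An attention mechanism of this kind can be sketched as a weighted combination of social cues with a habituation decay. This is a minimal illustrative sketch, not the authors' actual model: the feature names, weights, and the linear scoring form are all assumptions.

```python
# Hypothetical sketch of feature-weighted gaze-target selection.
# Weights and feature names are illustrative, not the paper's values.

def attention_score(target, weights=None):
    """Combine social cues into a single salience score for one target."""
    if weights is None:
        weights = {"speaking": 0.4, "gesturing": 0.2,
                   "proximity": 0.2, "in_view": 0.1, "habituation": 0.1}
    score = (weights["speaking"] * target.get("speaking", 0.0)
             + weights["gesturing"] * target.get("gesturing", 0.0)
             + weights["proximity"] * target.get("proximity", 0.0)
             + weights["in_view"] * target.get("in_view", 0.0))
    # Habituation effect: salience decays the longer the robot
    # has already fixated this target.
    score *= 1.0 - weights["habituation"] * target.get("fixation_time", 0.0)
    return score

def select_gaze_target(targets):
    """Direct the gaze at the target with the highest salience."""
    return max(targets, key=attention_score)

people = [
    {"id": "A", "speaking": 1.0, "proximity": 0.5, "in_view": 1.0},
    {"id": "B", "gesturing": 1.0, "proximity": 0.8, "in_view": 1.0,
     "fixation_time": 0.5},
]
print(select_gaze_target(people)["id"])  # the active speaker wins here
```

Under these assumed weights, a speaking person outranks a gesturing one, and prolonged fixation gradually releases attention toward other targets.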
conference on biomimetic and biohybrid systems | 2013
Nicole Lazzeri; Daniele Mazzei; Abolfazl Zaraki; Danilo De Rossi
Two perspectives define a human being in the social sphere: appearance and behaviour. The aesthetic aspect is the first significant element shaping a communication, while the behavioural aspect is crucial in evaluating the ongoing interaction. In particular, we have higher expectations when interacting with anthropomorphic robots, and we tend to consider them believable if they respect human social conventions. Researchers therefore focus both on making robot embodiments increasingly anthropomorphic and on giving robots realistic behaviour. This paper describes our research on enabling a humanoid robot to interact with human beings in a socially believable way.
conference on biomimetic and biohybrid systems | 2014
Daniele Mazzei; Lorenzo Cominelli; Nicole Lazzeri; Abolfazl Zaraki; Danilo De Rossi
Sensing and interpreting the interlocutor’s social behaviours is a core challenge in the development of social robots. Social robots require both an innovative sensory apparatus able to perceive the “social and emotional world” in which they act and a cognitive system able to manage this incoming sensory information and plan an organized, considered response. To allow scientists to design cognitive models for this new generation of social machines, it is necessary to develop control architectures that can also be used easily by researchers without programming expertise, such as psychologists and neuroscientists. In this work, an innovative hybrid deliberative/reactive cognitive architecture for controlling a social humanoid robot is presented. The design and implementation of the overall architecture take inspiration from the human nervous system; in particular, the cognitive system is based on Damasio’s thesis. The architecture has been preliminarily tested with the FACE robot: a social behaviour was modeled to enable FACE to follow a human subject during a basic social interaction task and to perform facial expressions as a reaction to the social context.
conference on biomimetic and biohybrid systems | 2013
Abolfazl Zaraki; Daniele Mazzei; Nicole Lazzeri; Michael Pieroni; Danilo De Rossi
A context-aware attention system is fundamental for regulating robot behaviour in a social interaction, since it enables social robots to actively select the right environmental stimuli at the right time during a multiparty social interaction. This contribution presents a modular context-aware attention system that drives the robot’s gaze. It is composed of two modules: the scene analyzer module manages the incoming data flow and provides a human-like understanding of the information coming from the surrounding environment, while the attention module allows the robot to select the most important target in the perceived scene on the basis of a computational model. After describing the motivation, we present the proposed system and a preliminary test.
IEEE Transactions on Cognitive and Developmental Systems | 2017
Abolfazl Zaraki; Michael Pieroni; Danilo De Rossi; Daniele Mazzei; Roberto Garofalo; Lorenzo Cominelli; Maryam Banitalebi Dehkordi
A robot’s perception is essential for performing high-level tasks such as understanding, learning, and, in general, human–robot interaction (HRI). For this reason, different perception systems have been proposed for different robotic platforms in order to detect high-level features such as facial expressions and body gestures. However, due to the variety of robotics software architectures and hardware platforms, these highly customized solutions are hardly interchangeable or adaptable to different HRI contexts. In addition, most of the developed systems share a common issue: they detect features without awareness of their real-world context (e.g., detecting environmental sound and assuming it belongs to a person who is speaking, or treating a face printed on a sheet of paper as belonging to a real subject). This paper presents a novel social perception system (SPS) designed to address these issues. SPS is an out-of-the-box system that can be integrated into different robotic platforms irrespective of their hardware and software specifications. SPS detects, tracks, and delivers to robots, in real time, a wide range of human- and environment-relevant features together with an awareness of their real-world contexts. We tested SPS in a typical HRI scenario to demonstrate its capability both to detect several high-level perceptual features and to be integrated into different robotic platforms. Results show the promising capability of the system in perceiving the real world across different social robotics platforms, as tested on two humanoid robots, FACE and ZENO.
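A platform-neutral perception message of the kind the abstract describes could carry each detected feature alongside its context flags. The sketch below is purely illustrative (it is not the actual SPS interface); the class and field names are invented for this example.

```python
# Illustrative, hypothetical sketch of a platform-neutral perception
# message: each detected feature carries a flag saying whether it was
# verified against its real-world context (e.g., a face on a live
# subject rather than printed on paper). Not the actual SPS API.

from dataclasses import dataclass, field

@dataclass
class PerceptualFeature:
    name: str           # e.g. "facial_expression", "speech"
    value: str          # e.g. "happy", "hello robot"
    confidence: float   # detector confidence in [0, 1]
    live_subject: bool  # False for e.g. a face printed on paper

@dataclass
class SocialPercept:
    subject_id: int
    timestamp: float
    features: list = field(default_factory=list)

    def contextual(self):
        """Keep only features verified against the real-world context."""
        return [f for f in self.features if f.live_subject]
```

Because the message is plain data, any robot control architecture can consume it regardless of its underlying hardware or middleware, which is the interchangeability the paper argues for.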
conference on biomimetic and biohybrid systems | 2015
Lorenzo Cominelli; Daniele Mazzei; Michael Pieroni; Abolfazl Zaraki; Roberto Garofalo; Danilo De Rossi
How do experienced emotional states, induced by the events that emerge in our context, influence our behaviour? Are they an obstacle or a helpful assistant for our reasoning process? Antonio Damasio gave exhaustive answers to these questions through his studies on patients with brain injuries: he demonstrated how emotions guide decision-making and identified a region of the brain that plays a fundamental role in this process. Antoine Bechara devised a test to validate the proper functioning of that cortical region. Inspired by Damasio’s theories, we developed a mechanism in an artificial agent that enables it to represent emotional states and to exploit them for biasing its decisions. We also implemented the card gambling task that Bechara used on his patients as a validation test, and put our artificial agent through this test for 100 trials. The results of this experiment are analysed and discussed, highlighting the efficiency of the implemented somatic marker mechanism and the potential impact of this system in the field of social robotics.
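The mechanism can be sketched as an agent that accumulates a "somatic marker" per card deck and lets those markers bias future choices. This is a minimal sketch under assumed parameters: the deck payoffs, learning rate, and exploration rate below are illustrative, not the values used in the paper.

```python
# Minimal somatic-marker-style agent on a Bechara-like card gambling
# task. Deck payoffs, learning rate (alpha) and exploration rate
# (epsilon) are assumed for illustration.

import random

DECKS = {
    "A": lambda: 100 - (250 if random.random() < 0.5 else 0),   # tempting, bad
    "B": lambda: 100 - (1250 if random.random() < 0.1 else 0),  # tempting, bad
    "C": lambda: 50 - (50 if random.random() < 0.5 else 0),     # modest, good
    "D": lambda: 50 - (250 if random.random() < 0.1 else 0),    # modest, good
}

def update_marker(marker, deck, outcome, alpha=0.1):
    """Move the deck's somatic marker toward the latest outcome (EMA)."""
    marker[deck] += alpha * (outcome - marker[deck])

def run_agent(trials=100, alpha=0.1, epsilon=0.1):
    """Play the gambling task, letting markers bias deck selection."""
    marker = {d: 0.0 for d in DECKS}
    for _ in range(trials):
        if random.random() < epsilon:            # occasional exploration
            deck = random.choice(list(DECKS))
        else:                                    # follow the strongest marker
            deck = max(marker, key=marker.get)
        update_marker(marker, deck, DECKS[deck](), alpha)
    return marker
```

Over repeated trials the markers for the "bad" decks drift negative after their heavy losses, biasing the agent toward the safer decks, which mirrors the behaviour Bechara observed in healthy subjects.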
multimedia interaction design and innovation | 2013
Agata Pasikowska; Abolfazl Zaraki; Nicole Lazzeri
Computers, tablets and smartphones are tools that increasingly accompany us during everyday activities. Given the booming use of virtual reality and the wide range of people who have access to it, people are increasingly presented with an online alternative to the support of professionals, therapeutic groups organized by healthcare institutions, or significant others (such as family, friends and colleagues). This can be used as a tool for personal development and for coping with stress. Our research program includes creating a virtual reality application to sustain well-being and improve quality of life. It assumes that avatars, representations of a person in cyberspace, will provide support in the form of a virtual conversation. Dialogue with an imaginary person is as supportive a technique in a stressful situation as creating a list of solutions, and over the long term it can offer a concrete path toward the desired change.
conference on biomimetic and biohybrid systems | 2014
Abolfazl Zaraki; Maryam Banitalebi Dehkordi; Daniele Mazzei; Danilo De Rossi
Human gaze and blinking behaviours have recently been studied as a way to empower humanlike robots to convey realistic behaviour in social human-robot interaction. This paper reports the findings of our investigation of human eye-blinking behaviour in relation to gaze behaviour during human-human interaction; these findings can then be used to design a humanlike eye-blinking model for a social humanlike robot. In an experimental eye-tracking study, we showed 11 participants a 7-minute video of a social interaction between two people and collected their eye-blinking and gaze behaviours with an eye-tracker. Analysing the collected data, we measured the participants’ blinking rate, maximum and minimum blinking duration, and number of frequent (multiple) blinks, as well as their gaze directions in the environment. The results revealed that participants’ blinking rate in a social interaction is qualitatively correlated with gaze behaviour: a higher number of gaze shifts increased the blinking rate. Based on these findings, we propose a context-dependent blinking model as an important component of the robot’s gaze-control system that can empower our robot to mimic human blinking behaviour in a multiparty social interaction.
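A context-dependent blinking model based on the reported qualitative finding (more gaze shifts, higher blink rate) could be sketched as follows. The baseline blink rate and the per-shift gain below are assumed values for illustration, not measurements from the study.

```python
# Illustrative context-dependent blinking sketch. base_rate and
# shift_gain are assumed parameters, not values reported in the study.

import random

def blink_probability(dt, gaze_shifted, base_rate=0.3, shift_gain=0.5):
    """Probability of blinking within a control step of dt seconds.

    base_rate  -- spontaneous blinks per second (assumed)
    shift_gain -- extra blink probability added when the gaze shifts
                  to a new target (assumed)
    """
    p = base_rate * dt
    if gaze_shifted:
        p += shift_gain  # gaze shifts raise the blink rate
    return min(p, 1.0)

def should_blink(dt, gaze_shifted, rng=random.random):
    """Sample a blink decision for this control step."""
    return rng() < blink_probability(dt, gaze_shifted)
```

Coupling the blink decision to the gaze controller in this way reproduces the qualitative correlation the study reports: each commanded gaze shift makes a blink much more likely in that control step.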
conference on biomimetic and biohybrid systems | 2016
Lorenzo Cominelli; Daniele Mazzei; Nicola Carbonaro; Roberto Garofalo; Abolfazl Zaraki; Alessandro Tognetti; Danilo De Rossi
Building a social robot that is able to interact naturally with people is a challenging task that becomes even more ambitious if the robot’s interlocutors are children involved in crowded scenarios such as a classroom or a museum. In such scenarios, the main concern is enabling the robot to track the subjects’ social and affective states, modulating its behaviour on the basis of the engagement and emotional state of its interlocutors. To reach this goal, the robot needs not only to gather visual and auditory data, but also to acquire physiological signals, which are fundamental for understanding the interlocutors’ psycho-physiological state. Following this purpose, several Human-Robot Interaction (HRI) frameworks have been proposed in recent years, although most of them are based on the use of wearable sensors. However, wearable equipment is not the best acquisition technology for crowded multi-party environments, for obvious reasons (e.g., all subjects would have to be prepared before the experiment by wearing the acquisition devices). Furthermore, wearable sensors, even if designed to be minimally intrusive, add an extra factor to HRI scenarios, introducing a bias in the measurements due to psychological stress. In order to overcome these limitations, in this work we present an unobtrusive method to acquire both visual and physiological signals from multiple subjects involved in HRI. The system is able to integrate the acquired data and associate them with unique subject IDs. The implemented system has been tested with the FACE humanoid in order to assess the technical features of the integrated devices and algorithms. Preliminary tests demonstrated that the developed system can extend FACE’s perception capabilities, giving it a sort of sixth sense that will improve the robot’s empathic and behavioural capabilities.