Maha Salem
University of Hertfordshire
Publications
Featured research published by Maha Salem.
Human-Robot Interaction | 2015
Maha Salem; Gabriella Lakatos; Farshid Amirabdollahian; Kerstin Dautenhahn
How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot’s performance does not seem to substantially influence participants’ decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants’ willingness to follow its instructions.
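For illustration, compliance counts in the two conditions could be compared with a chi-square test; the counts below are invented for the example, not the study's data:

```python
# Hypothetical illustration: comparing request compliance between a
# "correct" and a "faulty" robot condition with a chi-square test.
# All counts are invented, not taken from the paper.
from scipy.stats import chi2_contingency

#                     complied  refused
contingency = [[18, 4],   # correct-mode condition
               [15, 7]]   # faulty-mode condition

chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a non-significant p would mirror the
                                      # paper's finding of no reliable difference
```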
International Journal of Social Robotics | 2012
Maha Salem; Stefan Kopp; Ipke Wachsmuth; Katharina J. Rohlfing; Frank Joublin
How is communicative gesture behavior in robots perceived by humans? Although gesture is crucial in social interaction, this research question is still largely unexplored in the field of social robotics. Thus, the main objective of the present work is to investigate how gestural machine behaviors can be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform are tackled. We present a framework that enables the humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, while not being limited to a predefined repertoire of motor actions. Secondly, the achieved flexibility in robot gesture is exploited in controlled experiments. To gain a deeper understanding of how communicative robot gesture might impact and shape human perception and evaluation of human-robot interaction, we conducted a between-subjects experimental study using the humanoid robot in a joint task scenario. We manipulated the non-verbal behaviors of the robot in three experimental conditions, so that it would refer to objects by utilizing either (1) unimodal (i.e., speech only) utterances, (2) congruent multimodal (i.e., semantically matching speech and gesture) or (3) incongruent multimodal (i.e., semantically non-matching speech and gesture) utterances. Our findings reveal that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance.
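The three experimental conditions can be made concrete with a small sketch; the data structure and gesture labels are illustrative assumptions, not the authors' implementation:

```python
# Simplified sketch of the three utterance conditions described above;
# the dataclass and gesture labels are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    speech: str             # what the robot says
    gesture: Optional[str]  # co-verbal gesture, or None for speech only

def make_condition(condition: str) -> Utterance:
    if condition == "unimodal":
        return Utterance("The cup is on the left.", None)
    if condition == "congruent":
        return Utterance("The cup is on the left.", "point_left")
    if condition == "incongruent":
        # gesture semantically mismatches the spoken utterance
        return Utterance("The cup is on the left.", "point_right")
    raise ValueError(condition)

print(make_condition("incongruent"))
```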
Robot and Human Interactive Communication | 2011
Maha Salem; Katharina J. Rohlfing; Stefan Kopp; Frank Joublin
Gesture is an important feature of social interaction, frequently used by human speakers to illustrate what speech alone cannot provide, e.g. to convey referential, spatial or iconic information. Accordingly, humanoid robots that are intended to engage in natural human-robot interaction should produce speech-accompanying gestures for comprehensible and believable behavior. But how does a robot's non-verbal behavior influence human evaluation of communication quality and the robot itself? To address this research question we conducted two experimental studies. Using the Honda humanoid robot we investigated how humans perceive various gestural patterns performed by the robot as they interact in a situational context. Our findings suggest that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech. These effects were more pronounced when participants were explicitly asked to direct their attention towards the robot during the interaction.
International Conference on Social Robotics | 2011
Maha Salem; Friederike Anne Eyssel; Katharina J. Rohlfing; Stefan Kopp; Frank Joublin
Previous work has shown that gestural behaviors affect anthropomorphic inferences about artificial communicators such as virtual agents. In an experiment with a humanoid robot, we investigated to what extent gesture would affect anthropomorphic inferences about the robot. In particular, we examined the effects of the robot's hand and arm gestures on the attribution of typically human traits, likability of the robot, shared reality, and future contact intentions after interacting with the robot. For this, we manipulated the non-verbal behaviors of the humanoid robot in three experimental conditions: (1) no gesture, (2) congruent gesture, and (3) incongruent gesture. We hypothesized higher ratings on all dependent measures in the two gesture (vs. no gesture) conditions. The results confirm our predictions: when the robot used gestures during interaction, it was anthropomorphized more, participants perceived it as more likable, reported greater shared reality with it, and expressed stronger future contact intentions than when the robot gave instructions without using gestures. Surprisingly, this effect was particularly pronounced when the robot's gestures were partly incongruent with speech. These findings show that communicative non-verbal behaviors in robotic systems affect both anthropomorphic perceptions and the mental models humans form of a humanoid robot during interaction.
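For illustration, ratings on a dependent measure such as likability could be compared across the three conditions with a one-way ANOVA; the numbers below are invented and merely echo the reported pattern:

```python
# Hypothetical illustration: a one-way ANOVA over likability ratings in
# the three gesture conditions. The rating data are invented; the paper
# reports its own statistics.
from scipy.stats import f_oneway

no_gesture  = [3.1, 2.8, 3.4, 3.0, 2.9]
congruent   = [4.0, 4.2, 3.8, 4.1, 3.9]
incongruent = [4.3, 4.5, 4.1, 4.4, 4.2]  # highest, mirroring the surprising result

F, p = f_oneway(no_gesture, congruent, incongruent)
print(f"F={F:.2f}, p={p:.4f}")
```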
IEEE Transactions on Human-Machine Systems | 2016
Matt Webster; Clare Dixon; Michael Fisher; Maha Salem; Joe Saunders; Kheng Lee Koay; Kerstin Dautenhahn; Joan Saez-Pons
It is essential for robots working in close proximity to people to be both safe and trustworthy. We present a case study on formal verification for a high-level planner/scheduler for the Care-O-bot, an autonomous personal robotic assistant. We describe how a model of the Care-O-bot and its environment was developed using Brahms, a multiagent workflow language. Formal verification was then carried out by automatically translating this model to the input language of an existing model checker. Four sample properties based on system requirements were verified. We then refined the environment model three times to increase its accuracy and the persuasiveness of the formal verification results. The first refinement uses a user activity log based on real-life experiments, but is deterministic. The second refinement uses the activities from the user activity log nondeterministically. The third refinement uses “conjoined activities” based on an observation that many user activities can overlap. The four sample properties were verified for each refinement of the environment model. Finally, we discuss the approach of environment model refinement with respect to this case study.
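The "sample properties based on system requirements" are temporal-logic statements of roughly the following shape; the predicate names here are invented for illustration, not taken from the paper:

```latex
% Illustrative LTL-style properties of the kind model-checked against the
% Care-O-bot model; predicate names are assumptions.
% \square = "always", \lozenge = "eventually".
\[
  \square\,\bigl(\mathit{userCallsRobot} \rightarrow \lozenge\,\mathit{robotAttendsUser}\bigr)
  \qquad
  \square\,\lnot\bigl(\mathit{robotMoving} \land \mathit{trayRaised}\bigr)
\]
```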
Human-Robot Interaction | 2014
Maha Salem; Micheline Ziadee; Majd F. Sakr
How do politeness strategies and cultural aspects affect robot acceptance and anthropomorphization across native speakers of English and Arabic? Previous work in cross-cultural HRI studies has mostly focused on Western and East Asian cultures. In contrast, Middle Eastern attitudes and perceptions of robot assistants are a barely researched topic. We investigated culture-specific determinants of robot acceptance and anthropomorphization by conducting a between-subjects study in Qatar. A total of 92 native speakers of either English or Arabic interacted with a receptionist robot in two different interaction tasks. We further manipulated the robot’s verbal behavior in experimental sub-groups to explore different politeness strategies. Our results suggest that Arab participants perceived the robot more positively and anthropomorphized it more than English-speaking participants. In addition, the use of positive politeness strategies and the change of interaction task had an effect on participants’ HRI experience. Our findings complement the existing body of cross-cultural HRI research with a Middle Eastern perspective that will help to inform the design of robots intended for use in cross-cultural, multi-lingual settings.
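As a toy sketch of how the verbal-behavior manipulation might look, here is a direct request contrasted with a positive-politeness variant; the phrasings and names are invented, not the study's actual scripts:

```python
# Invented illustration of varying a receptionist robot's request
# phrasing between politeness sub-groups; not the study's materials.
POLITENESS_VARIANTS = {
    "direct":   "Place your ID card on the reader.",
    "positive": "Thanks for waiting! Could you place your ID card on the reader for me?",
}

def request_id(strategy: str) -> str:
    return POLITENESS_VARIANTS[strategy]

print(request_id("positive"))
```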
Robot and Human Interactive Communication | 2010
Maha Salem; Stefan Kopp; Ipke Wachsmuth; Frank Joublin
The generation of communicative, speech-accompanying robot gesture is still largely unexplored. We present an approach to enable the humanoid robot ASIMO to flexibly produce speech and co-verbal gestures at run-time, while not being limited to a pre-defined repertoire of motor actions. Since much research has already been dedicated to this challenge within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for the virtual human Max. We propose a robot control architecture building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot. Our approach tightly couples ACE with ASIMO's perceptuo-motor system, combining conceptual representation and planning with motor control primitives for speech and arm movements of a physical robot body. First results of both gesture production and speech synthesis using ACE and the MARY text-to-speech system are presented and discussed.
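In outline, the production flow described above can be sketched as follows; every class and method name is invented for illustration, and the real ACE and MARY interfaces differ:

```python
# Greatly simplified sketch of the described pipeline: a multimodal
# utterance is split into a speech part (sent to a TTS engine) and a
# gesture part (sent to the robot's motor layer), with the gesture
# timed to the speech. All names here are invented.
class SpeechEngine:
    def synthesize(self, text: str) -> float:
        """Speak the text; return its estimated duration in seconds."""
        print(f"[TTS] {text}")
        return 0.05 * len(text)  # crude placeholder estimate

class MotorController:
    def play_gesture(self, gesture: str, duration: float) -> None:
        """Execute a gesture, time-warped to the given duration."""
        print(f"[motor] {gesture} over {duration:.2f}s")

def produce(utterance: str, gesture: str,
            tts: SpeechEngine, motor: MotorController) -> None:
    # Plan speech first, then stretch or compress the gesture so its
    # stroke co-occurs with the affiliated words.
    duration = tts.synthesize(utterance)
    motor.play_gesture(gesture, duration)

produce("This is the blue cup.", "point_at_cup",
        SpeechEngine(), MotorController())
```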
Human Centered Robot Systems: Cognition, Interaction, Technology | 2009
Maha Salem; Stefan Kopp; Ipke Wachsmuth; Frank Joublin
Humanoid robot companions that are intended to engage in natural and fluent human-robot interaction should combine speech with non-verbal modalities for comprehensible and believable behavior. We present an approach to enable the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, while not being limited to a predefined repertoire of motor actions. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for our virtual human Max. Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) has replaced the use of lexicons of canned behaviors with an on-the-spot production of flexibly planned behavior representations. As an underlying action generation architecture, we explain how ACE draws upon a tight, bi-directional coupling of ASIMO’s perceptuo-motor system with multi-modal scheduling via both efferent control signals and afferent feedback.
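The efferent/afferent coupling can be pictured as a closed loop: the scheduler issues motor commands and uses execution feedback to decide when to advance the plan. A minimal toy sketch, with all interfaces invented:

```python
# Minimal closed-loop sketch of the efferent/afferent coupling described
# above: commands go out (efferent) and the remaining plan only advances
# once feedback confirms progress (afferent). Names are invented.
import time

def execute_with_feedback(waypoints, send_command, read_progress,
                          tick=0.05):
    """Send each waypoint and wait for afferent confirmation before
    advancing, so speech/gesture timing can be adjusted on the fly."""
    for wp in waypoints:
        send_command(wp)             # efferent control signal
        while read_progress() < wp:  # afferent feedback
            time.sleep(tick)

# Toy stand-ins for a real robot interface:
state = {"pos": 0.0}
def send_command(target): state["target"] = target
def read_progress():
    state["pos"] = min(state.get("target", 0.0), state["pos"] + 0.2)
    return state["pos"]

execute_with_feedback([0.2, 0.6, 1.0], send_command, read_progress)
print("gesture completed at", state["pos"])
```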
International Conference on Social Robotics | 2015
Maha Salem; Gabriella Lakatos; Farshid Amirabdollahian; Kerstin Dautenhahn
‘Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues’, paper presented at the 7th International Conference on Social Robotics, Paris, France, 26-30 October 2015.
Intelligent Robots and Systems | 2010
Maha Salem; Stefan Kopp; Ipke Wachsmuth; Frank Joublin
One of the crucial aspects in building sociable, communicative robots is to endow them with expressive non-verbal behaviors. Gesture is one such behavior, frequently used by human speakers to illustrate what they express in speech. The production of gestures, however, poses a number of challenges with regard to motor control for arbitrary, expressive hand-arm movement and its coordination with other interaction modalities. We describe an approach to enable the humanoid robot ASIMO to flexibly produce communicative gestures at run-time, building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to realize planned behavior representations on the spot. We present a control architecture that tightly couples ACE with ASIMO's perceptuo-motor system for multi-modal scheduling. In this way, we combine conceptual representation and planning with motor control primitives for meaningful arm movements of a physical robot body. First results of realized gesture representations are presented and discussed.
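The "motor control primitives" for arm movement can be pictured as key arm poses that the controller interpolates through; a toy sketch with invented joint names and angles:

```python
# Toy illustration of a motor primitive for an arm gesture: key poses
# (joint angles in degrees) are linearly interpolated to produce a
# smooth trajectory. Joint names and values are invented, not ASIMO's.
KEY_POSES = [
    {"shoulder_pitch":   0, "elbow":  0},   # rest
    {"shoulder_pitch": -60, "elbow": 45},   # raise arm
    {"shoulder_pitch": -60, "elbow":  0},   # extend to point
]

def interpolate(pose_a, pose_b, steps=5):
    """Yield intermediate poses between two key poses."""
    for i in range(1, steps + 1):
        t = i / steps
        yield {j: pose_a[j] + t * (pose_b[j] - pose_a[j]) for j in pose_a}

for a, b in zip(KEY_POSES, KEY_POSES[1:]):
    for pose in interpolate(a, b):
        print(pose)  # a real controller would command these setpoints
```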