Publication


Featured research published by Amir Aly.


Human-Robot Interaction | 2013

A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction

Amir Aly; Adriana Tapus

Robots are increasingly present in our daily lives; they have to operate in human-centered environments, interact with humans, and obey social rules so as to produce an appropriate social behavior in accordance with the human's profile (i.e., personality, mood, and preferences). Recent research has discussed the effect of personality traits on verbal and nonverbal production, which plays a major role in transferring and understanding messages in a social interaction between a human and a robot. The characteristics of the generated gestures (e.g., amplitude, direction, rate, and speed) during nonverbal communication can differ according to the personality trait, which similarly influences the verbal content of human speech in terms of verbosity, repetitions, etc. Therefore, our research tries to map a human's verbal behavior to a corresponding combined robot verbal-nonverbal behavior based on the personality dimensions of the interacting human. The system first estimates the interacting human's personality traits through a psycholinguistic analysis of the spoken language, then uses the PERSONAGE natural language generator to produce verbal content corresponding to the estimated personality traits. Gestures are generated with the BEAT toolkit, which performs a linguistic and contextual analysis of the generated language relying on rules derived from extensive research into human conversational behavior. We explored the human-robot personality matching aspect and the differences between the adapted mixed robot behavior (gesture and speech) and the adapted speech-only robot behavior in an interaction. Our model validated that individuals preferred to interact with a robot that had the same personality as theirs, and that an adapted mixed robot behavior (gesture and speech) was more engaging and effective than a speech-only robot behavior. Our experiments were conducted with the Nao robot.
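The pipeline can be pictured as: estimate personality from psycholinguistic cues in the user's speech, then parameterize the robot's verbal and nonverbal output accordingly. The sketch below is a hypothetical, simplified illustration of that flow; the cues, weights, and parameter names are assumptions, and the paper itself relies on PERSONAGE for language generation and the BEAT toolkit for gestures rather than this toy mapping.

```python
# Hypothetical sketch (not the authors' implementation): estimate a rough
# extraversion score from simple psycholinguistic cues of the user's speech
# transcript, then map it to combined speech/gesture parameters. The cues,
# weights, and thresholds below are illustrative assumptions.

def estimate_extraversion(utterances):
    """Return a score in [0, 1] from crude verbal cues (verbosity, exclamations)."""
    words = [w for u in utterances for w in u.split()]
    verbosity = min(len(words) / 100.0, 1.0)                      # more words -> more extravert
    exclamations = sum(u.count("!") for u in utterances) / max(len(utterances), 1)
    return min(1.0, 0.7 * verbosity + 0.3 * min(exclamations, 1.0))

def adapt_robot_behavior(extraversion):
    """Map the personality estimate to verbal and nonverbal behavior parameters."""
    return {
        "speech_rate":       0.8 + 0.4 * extraversion,   # extraverts: faster speech
        "verbosity":         0.3 + 0.7 * extraversion,   # extraverts: longer utterances
        "gesture_amplitude": 0.3 + 0.7 * extraversion,   # extraverts: wider gestures
        "gesture_rate":      0.2 + 0.8 * extraversion,
    }

if __name__ == "__main__":
    transcript = ["I really loved the part about the robots!", "Tell me more."]
    print(adapt_robot_behavior(estimate_extraversion(transcript)))
```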


Autonomous Robots | 2016

Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human-robot interaction

Amir Aly; Adriana Tapus

In human–robot interaction scenarios, an intelligent robot should be able to synthesize an appropriate behavior adapted to the human's profile (i.e., personality). Recent research studies have discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the generated gestures and postures during nonverbal communication can differ according to personality traits, which similarly can influence the verbal content of human speech. This research tries to map human verbal behavior to a corresponding verbal and nonverbal combined robot behavior based on the extraversion–introversion personality dimension. We explore the human–robot personality matching aspect and the similarity attraction principle, in addition to the different effects on interaction of the adapted combined robot behavior expressed through speech and gestures versus the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.


ECMR | 2012

Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction

Amir Aly; Adriana Tapus

In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. The difficulty encountered during the coordination between speech and head gestures concerns the conveyed meaning, the way of performing the gesture with respect to speech characteristics, their relative temporal arrangement, and their coordinated organization in a phrasal structure of utterance. In this research, we focus on the mechanism of mapping head gestures and speech prosodic characteristics in a natural human-robot interaction. Prosody patterns and head gestures are aligned separately as a parallel multi-stream HMM model. The mapping between speech and head gestures is based on Coupled Hidden Markov Models (CHMMs), which could be seen as a collection of HMMs, one for the video stream and one for the audio stream. Experimental results with Nao robots are reported.
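As a rough illustration of the speech-to-gesture mapping idea (not the authors' CHMM implementation), the sketch below trains two parallel Gaussian HMMs with hmmlearn, one over prosody features and one over head-gesture features, and couples them through a simple state co-occurrence table; the feature dimensions, state counts, and synthetic data are assumptions for demonstration.

```python
# Minimal sketch of parallel audio/video HMM streams with a crude coupling,
# standing in for the Coupled HMM described in the paper. Data is synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
prosody = rng.normal(size=(500, 2))   # e.g., pitch and energy per frame
gesture = rng.normal(size=(500, 3))   # e.g., head pitch/yaw/roll per frame

audio_hmm = GaussianHMM(n_components=4, n_iter=20).fit(prosody)
video_hmm = GaussianHMM(n_components=4, n_iter=20).fit(gesture)

a_states = audio_hmm.predict(prosody)
v_states = video_hmm.predict(gesture)

# Co-occurrence table: which gesture state usually accompanies each prosody state.
coupling = np.zeros((4, 4))
for a, v in zip(a_states, v_states):
    coupling[a, v] += 1
state_map = coupling.argmax(axis=1)

# At synthesis time: decode the prosody of new speech, then emit the coupled
# gesture states (here approximated by the gesture HMM state means) per frame.
new_prosody = rng.normal(size=(50, 2))
gesture_states = state_map[audio_hmm.predict(new_prosody)]
synthesized_head_motion = video_hmm.means_[gesture_states]
print(synthesized_head_motion.shape)   # (50, 3)
```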


Intelligent Robots and Systems | 2015

Multimodal adapted robot behavior synthesis within a narrative human-robot interaction

Amir Aly; Adriana Tapus

In human-human interaction, three modalities of communication (i.e., verbal, nonverbal, and paraverbal) are naturally coordinated so as to enhance the meaning of the conveyed message. In this paper, we try to create a similar coordination between these modalities of communication in order to make the robot behave as naturally as possible. The proposed system uses a group of videos to elicit specific target emotions in a human user, upon which interactive narratives start (i.e., interactive discussions between the participant and the robot around each video's content). During each interaction experiment, the humanoid expressive ALICE robot engages and generates a multimodal behavior adapted to the emotional content of the projected video using speech, head-arm metaphoric gestures, and/or facial expressions. The interactive speech of the robot is synthesized using Mary-TTS (a text-to-speech toolkit), which is used in parallel to generate adapted head-arm gestures [1]. This synthesized multimodal robot behavior is evaluated by the interacting human at the end of each emotion-eliciting experiment. The obtained results validate the positive effect of the multimodality of the generated robot behavior on interaction.
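A minimal way to picture the coordination step (purely hypothetical, not the system's actual code) is a lookup from the elicited emotion to parameters for the three output modalities; the emotion labels and parameter values below are illustrative assumptions.

```python
# Hypothetical sketch of coordinating speech, gesture, and facial expression
# around one elicited emotion; labels and values are illustrative only.
BEHAVIOR_PROFILES = {
    "joy":     {"speech_pitch_shift": +0.2, "gesture_style": "wide_metaphoric", "face": "smile"},
    "sadness": {"speech_pitch_shift": -0.2, "gesture_style": "slow_low",        "face": "frown"},
    "anger":   {"speech_pitch_shift": +0.1, "gesture_style": "sharp_fast",      "face": "brow_down"},
}

def synthesize_behavior(elicited_emotion, text):
    """Return one coordinated multimodal behavior specification for an utterance."""
    profile = BEHAVIOR_PROFILES.get(
        elicited_emotion,
        {"speech_pitch_shift": 0.0, "gesture_style": "neutral", "face": "neutral"},
    )
    return {"text": text, **profile}

print(synthesize_behavior("joy", "That scene was wonderful, wasn't it?"))
```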


Intelligent Assistive Robots | 2015

An Online Fuzzy-Based Approach for Human Emotions Detection: An Overview on the Human Cognitive Model of Understanding and Generating Multimodal Actions

Amir Aly; Adriana Tapus

An intelligent robot needs to be able to understand human emotions, and to understand and generate actions through cognitive systems that operate in a similar way to human cognition. In this chapter, we mainly focus on developing an online incremental learning system of emotions using a Takagi-Sugeno (TS) fuzzy model. Additionally, we present a general overview of understanding and generating multimodal actions from the cognitive point of view. The main objective of this system is to detect whether the observed emotion constitutes a new emotion cluster not learnt before, in which case a new corresponding multimodal action needs to be generated, or whether it belongs to an existing cluster and can be attributed to one of the actions already in memory.
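The decision logic can be sketched as online clustering with a novelty test: a new emotion sample that fits no existing cluster well enough spawns a new cluster, and hence a new robot action. The code below is an assumption-laden simplification, not the chapter's Takagi-Sugeno implementation; the Gaussian membership function, learning rate, and the 0.4 threshold are placeholders for the fuzzy rule evaluation it describes.

```python
# Minimal sketch of online, incremental emotion clustering with a novelty test.
import numpy as np

class OnlineEmotionClusterer:
    def __init__(self, novelty_threshold=0.4, width=1.0):
        self.centers = []                 # one center per learnt emotion cluster
        self.threshold = novelty_threshold
        self.width = width

    def membership(self, x, center):
        """Gaussian firing strength of a cluster for sample x."""
        return float(np.exp(-np.sum((x - center) ** 2) / (2 * self.width ** 2)))

    def observe(self, x):
        """Return (cluster_index, is_new); a new cluster means a new action is needed."""
        x = np.asarray(x, dtype=float)
        if self.centers:
            scores = [self.membership(x, c) for c in self.centers]
            best = int(np.argmax(scores))
            if scores[best] >= self.threshold:
                # Update the winning center incrementally (running-mean style).
                self.centers[best] = 0.9 * self.centers[best] + 0.1 * x
                return best, False
        self.centers.append(x)
        return len(self.centers) - 1, True

clusterer = OnlineEmotionClusterer()
for sample in [[0.1, 0.2], [0.15, 0.18], [3.0, 3.1]]:
    idx, is_new = clusterer.observe(sample)
    print(f"cluster {idx}, new action needed: {is_new}")
```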


Collaboration Technologies and Systems | 2011

User adaptable robot behavior

Adriana Tapus; Amir Aly

A socially intelligent robot should be capable of observing and understanding changes in the environment so as to behave in a proper manner. It also needs to take into account user preferences, user disability level, and user profile. This paper presents research based on socially assistive robotics (SAR) technology that aims at providing affordable personalized physical and cognitive assistance, motivation, and companionship to users. The work described here tries to validate that a robotic system can adapt its behavior to the user's profile.


Human-Robot Interaction | 2011

Towards an online voice-based gender and internal state detection model

Amir Aly; Adriana Tapus

In human-robot interaction, gender and internal state detection play an important role in enabling the robot to react in an appropriate manner. This research focuses on the important features to extract from a voice signal in order to construct successful gender and internal state detection systems, and shows the benefits of combining both systems on the total average recognition score. Moreover, it constitutes the foundation of an ongoing approach to estimating the human's internal state online via unsupervised clustering algorithms.
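As an illustration of the kind of voice features involved (not the paper's detection system), the sketch below extracts a pitch track and MFCC features with librosa and applies a crude pitch threshold as a stand-in for a trained gender classifier; the 165 Hz cutoff and the synthetic 120 Hz tone are assumptions for demonstration.

```python
# Illustrative feature-extraction sketch; the paper combines richer features
# and, for the internal state, unsupervised clustering.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 120 * t)                  # stand-in for a recorded utterance

f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)          # fundamental frequency (pitch) track
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral envelope features

mean_f0 = float(np.mean(f0))
gender_guess = "male" if mean_f0 < 165 else "female"   # crude heuristic threshold
print(f"mean pitch: {mean_f0:.1f} Hz -> guess: {gender_guess}")
print("MFCC feature matrix shape:", mfcc.shape)
```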


Interaction Studies | 2012

Children with autism social engagement in interaction with Nao, an imitative robot: A series of single case experiments

Adriana Tapus; Andreea Peca; Amir Aly; Cristina Pop; Lavinia Jisa; Sebastian Pintea; Alina Rusu; Daniel David


International Conference on Control, Automation, Robotics and Vision | 2012

Towards an online fuzzy modeling for human internal states detection

Amir Aly; Adriana Tapus


Archive | 2010

Gestures Imitation with a Mobile Robot in the Context of Human-Robot Interaction (HRI) for Children with Autism

Amir Aly; Adriana Tapus

Collaboration


Dive into Amir Aly's collaborations.

Top Co-Authors

Adriana Tapus

University of Southern California

Daniel David

Icahn School of Medicine at Mount Sinai
