Mohammad Obaid
University of Canterbury
Publication
Featured research published by Mohammad Obaid.
international conference on social robotics | 2012
Mohammad Obaid; Markus Häring; Felix Kistler; René Bühling; Elisabeth André
This paper presents a study that allows users to define intuitive gestures to navigate a humanoid robot. For eleven navigational commands, 385 gestures, performed by 35 participants, were analyzed. The results of the study reveal user-defined gesture sets for both novice users and expert users. In addition, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, and time performance of the gesture motions, and discuss implications for the design of robot control, with a focus on recognition and user interfaces.
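The abstract mentions agreement scores for the gesture sets but does not reproduce the formula; the sketch below is a hedged illustration using the standard gesture-elicitation agreement score (sum of squared group proportions) over hypothetical gesture labels, which may differ from the exact computation used in the paper.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one referent (command): A = sum((|Pi|/|P|)^2),
    where Pi are groups of identical gesture proposals. The gesture
    labels used below are hypothetical examples."""
    total = len(proposals)
    groups = Counter(proposals)
    return sum((count / total) ** 2 for count in groups.values())

# Hypothetical proposals by five participants for the command "move forward".
proposals = ["push both hands", "push both hands", "point ahead",
             "push both hands", "walk in place"]
print(agreement_score(proposals))  # 0.44 here; 1.0 would mean full agreement
```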
international conference on 3d web technology | 2011
Radoslaw Niewiadomski; Mohammad Obaid; Elisabetta Bevacqua; Julian Looser; Le Quoc Anh; Catherine Pelachaud
We have developed a general-purpose, modular architecture for an embodied conversational agent (ECA). Our agent is able to communicate using verbal and nonverbal channels such as gaze, facial expressions, and gestures. Our architecture follows the SAIBA framework, which defines a three-step process and communication protocols. In our implementation of the SAIBA architecture we focus on flexibility and introduce different levels of customization. In particular, our system is able to display the same communicative intention with different embodiments, be it a virtual agent or a robot. Moreover, our framework is independent of the animation player technology: agent animations can be displayed across different media, such as a web browser, virtual reality, or augmented reality. In this paper we present our agent architecture and its main features.
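As a minimal sketch of the idea that one communicative intention can be realized by interchangeable embodiments behind a common player interface, the following toy code illustrates the separation of intent planning from behavior realization; the class names, the intent-to-behavior mapping, and the dictionary-based behavior description are illustrative assumptions, not the system's actual FML/BML interfaces.

```python
from abc import ABC, abstractmethod

class BehaviorPlayer(ABC):
    """Hypothetical player interface: any embodiment that can realize a
    behavior description can be plugged in behind the same planner."""
    @abstractmethod
    def play(self, behavior: dict) -> None: ...

class VirtualAgentPlayer(BehaviorPlayer):
    def play(self, behavior: dict) -> None:
        print(f"[virtual agent] animating: {behavior}")

class RobotPlayer(BehaviorPlayer):
    def play(self, behavior: dict) -> None:
        print(f"[robot] sending joint commands for: {behavior}")

def plan_behavior(intention: str) -> dict:
    """Toy intent-to-behavior planning step, standing in for the
    intent-to-behavior stage of a SAIBA-style pipeline."""
    mapping = {"greet": {"gesture": "wave", "gaze": "user", "expression": "smile"}}
    return mapping.get(intention, {})

# The same communicative intention rendered by two embodiments.
behavior = plan_behavior("greet")
for player in (VirtualAgentPlayer(), RobotPlayer()):
    player.play(behavior)
```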
computer graphics, imaging and visualization | 2009
Mohammad Obaid; Ramakrishnan Mukundan; Mark Billinghurst; Mark Sagar
In this paper we propose a novel approach for representing facial expressions based on a quadratic deformation model applied to muscle regions. The non-linear nature of muscle deformations can be captured for each expression by subdividing the face into 16 facial regions and using the most general second-degree rubber-sheet transformation. The deformation parameters are derived using a least-squares minimization technique and used to construct a Facial Deformation Table (FDT) that mathematically represents each expression. The generalized nature of the transformations allows us to easily map expressions from one model to another and to employ the FDTs in facial expression applications such as facial recognition and animation. The paper presents experimental results using the smile expression.
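A minimal sketch of the fitting step described above: a second-degree rubber-sheet transformation is fitted to point correspondences of one facial region by least squares. The landmark values and the synthetic "smile" displacement are hypothetical; only the form of the transformation follows the abstract.

```python
import numpy as np

def fit_quadratic_deformation(neutral_pts, expression_pts):
    """Fit a second-degree (rubber-sheet) transformation
        x' = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2   (same form for y')
    to point correspondences of one facial region, via least squares.
    Returns a (2, 6) coefficient matrix (one row per output coordinate)."""
    x, y = neutral_pts[:, 0], neutral_pts[:, 1]
    # Design matrix of second-degree terms, one row per landmark.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, expression_pts, rcond=None)
    return coeffs.T  # rows are the x' and y' polynomial coefficients

# Hypothetical landmarks for one of the 16 facial regions.
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                    [1.0, 1.0], [0.5, 0.5], [0.5, 1.0], [1.0, 0.5]])
smile = neutral + 0.05 * np.random.default_rng(0).normal(size=neutral.shape)
fdt_entry = fit_quadratic_deformation(neutral, smile)
print(fdt_entry.shape)  # (2, 6): one region's entry in a Facial Deformation Table
```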
computer graphics, imaging and visualization | 2010
Mohammad Obaid; Ramakrishnan Mukundan; Mark Billinghurst; Catherine Pelachaud
In this paper we propose an approach, compliant with the MPEG-4 standard, to synthesize and control facial expressions generated with 3D facial models. This is achieved by establishing conformity between the MPEG-4 facial animation standard and the quadratic deformation model representations of facial expressions. This conformity allows us to use the MPEG-4 facial animation parameters (FAPs), together with the quadratic deformation tables as a higher layer, to compute the FAP values. The FAP values for an expression E are computed by performing a linear mapping between a set of transformed MPEG-4 FAP points (using the quadratic deformation models) and the semantics of the 3D facial model. The quadratic deformation model representations of facial expressions can be employed to synthesize and control the six main expressions (smile, sadness, fear, surprise, anger, and disgust). Using Whissell's psychological studies on emotions, we compute an interpolation parameter that is used to synthesize intermediate facial expressions. The paper presents the results of experimental studies performed with the Greta embodied conversational agent. The results achieved are promising and can lead to future research on synthesizing a wider range of facial expressions.
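A hedged sketch of the synthesis step: neutral MPEG-4 feature points are deformed with a fitted quadratic model (see the FDT sketch above), and a scalar intensity blends the neutral shape towards the full expression, standing in for the Whissell-derived interpolation parameter. All point values and the identity coefficients are illustrative assumptions.

```python
import numpy as np

def apply_quadratic_deformation(coeffs, pts):
    """Apply a (2, 6) quadratic deformation to an array of 2D feature points."""
    x, y = pts[:, 0], pts[:, 1]
    basis = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return basis @ coeffs.T

def synthesize_expression(neutral_pts, coeffs, intensity):
    """Blend neutral feature points towards the fully deformed expression.
    `intensity` in [0, 1] stands in for the interpolation parameter
    mentioned in the abstract (an assumption for illustration)."""
    deformed = apply_quadratic_deformation(coeffs, neutral_pts)
    return (1.0 - intensity) * neutral_pts + intensity * deformed

# Hypothetical coefficients (identity transform) and neutral FAP-like points.
identity = np.array([[0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]], dtype=float)
neutral = np.array([[0.2, 0.4], [0.5, 0.4], [0.8, 0.4]])
print(synthesize_expression(neutral, identity, 0.5))
```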
advances in computer entertainment technology | 2009
Mohammad Obaid; Daniel Lond; Ramakrishnan Mukundan; Mark Billinghurst
In this paper we propose a novel approach for generating expressive caricatures from an input image. The novelty of this work comes from combining an Active Appearance Model facial feature extraction system with a quadratic deformation model representation of facial expressions. The extracted features are deformed using the quadratic deformation parameters, resulting in an expressive caricature. The facial feature extraction requires an offline training process that uses natural-expression annotated images from 30 model subjects, selected randomly from the Cohn-Kanade Database. The results show that, from an input facial image, expressive caricatures are generated for the six main facial expressions (smile, sadness, fear, surprise, disgust, and anger). The proposed approach yields promising expressive caricatures and could lead to future research directions in the field of non-photorealistic rendering. In addition, the proposed approach can be employed in standalone entertainment applications or caricature animations.
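As a rough sketch of the deformation step once landmarks have been extracted (an AAM library is assumed to provide them), the fitted quadratic model is applied to the landmarks and the displacement is exaggerated to yield the caricatured shape; the exaggeration factor and all values here are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def caricature_landmarks(landmarks, coeffs, exaggeration=1.5):
    """Deform AAM-extracted landmarks with a fitted quadratic model and
    exaggerate the displacement to obtain a caricatured shape.
    The exaggeration factor is an assumption for illustration."""
    x, y = landmarks[:, 0], landmarks[:, 1]
    basis = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    deformed = basis @ coeffs.T
    return landmarks + exaggeration * (deformed - landmarks)

# Hypothetical landmarks and an identity deformation for illustration.
pts = np.array([[0.3, 0.6], [0.7, 0.6], [0.5, 0.8]])
identity = np.array([[0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]], dtype=float)
print(caricature_landmarks(pts, identity))
```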
international conference on computer graphics imaging and visualisation | 2006
Mohammad Obaid; Ramakrishnan Mukundan; Tim Bell
Moment functions have recently been used to compute stroke parameters for painterly rendering applications. The technique is based on estimating geometric features of the intensity distribution in small windowed images to obtain the brush size, colour, and direction. This paper proposes an improvement to this method by additionally extracting connected components, so that adjacent regions of similar colour are grouped for generating large and noticeable brush strokes. An iterative coarse-to-fine rendering algorithm is used for painting regions of varying colour frequencies. Performance improvements over the existing technique are discussed with several examples.
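The sketch below illustrates the moment-based estimation of stroke parameters for one image window: stroke direction from the second-order central moments, stroke extents from the moment eigenvalues, and stroke colour from the window mean. The exact parameterization used in the paper is not reproduced here, so treat this as a hedged approximation of the general technique.

```python
import numpy as np

def stroke_parameters(window):
    """Estimate brush-stroke colour, size, and direction from a small
    greyscale image window using geometric moments (an illustrative
    parameterization, not the paper's exact one)."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    m00 = window.sum() + 1e-9
    cx, cy = (xs * window).sum() / m00, (ys * window).sum() / m00
    mu20 = (((xs - cx) ** 2) * window).sum() / m00
    mu02 = (((ys - cy) ** 2) * window).sum() / m00
    mu11 = ((xs - cx) * (ys - cy) * window).sum() / m00
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)    # stroke direction
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    length = np.sqrt(2 * (mu20 + mu02 + common))        # stroke extents
    width = np.sqrt(max(2 * (mu20 + mu02 - common), 0.0))
    colour = window.mean()                              # stroke colour
    return colour, length, width, angle

# Hypothetical 8x8 intensity window with a diagonal bright streak.
win = np.zeros((8, 8))
np.fill_diagonal(win, 1.0)
print(stroke_parameters(win))
```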
augmented human international conference | 2013
Ionut Damian; Mohammad Obaid; Felix Kistler; Elisabeth André
In this paper, we propose an approach that immerses the human user in an Augmented Reality (AR) environment using an inertial motion capture suit and a Head Mounted Display system. The proposed approach allows full-body interaction with the AR environment in real time and does not require the use of any markers or cameras.
intelligent virtual agents | 2011
Mohammad Obaid; Radoslaw Niewiadomski; Catherine Pelachaud
This paper focuses on users' perception of virtual agents embedded in real and virtual worlds. In particular, we analyze the perception of spatial relations and the perception of coexistence. For this purpose, we measure users' voice compensation, one of the automatic human behavioral adaptations to the surrounding environment. The results of our evaluation study reveal that people adjust their voice according to distance during interaction with both augmented reality (AR) and virtual reality (VR) based agents. Secondly, in the AR-based scenario users perceive the distance between themselves and the virtual agent more strongly. On the other hand, the results do not show any significant differences regarding the users' sense of coexistence. Finally, we discuss our results in the context of the sense of presence in interaction with virtual agents in AR applications.
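One simple way voice compensation could be quantified is by the speech level of each recorded utterance, compared across agent distances; the sketch below computes an RMS level in dB as a stand-in measure. The paper's actual vocal-intensity measure may differ, and the recordings here are synthetic placeholders.

```python
import numpy as np

def rms_level_db(samples):
    """Root-mean-square level of a speech recording, in dB relative to
    full scale. A simple stand-in for a vocal-intensity measure
    (the paper's exact measure may differ)."""
    rms = np.sqrt(np.mean(samples ** 2))
    return 20.0 * np.log10(rms + 1e-12)

# Hypothetical recordings at two agent distances (synthetic noise here).
rng = np.random.default_rng(1)
near = 0.1 * rng.standard_normal(16000)   # agent nearby
far = 0.3 * rng.standard_normal(16000)    # agent further away
print(rms_level_db(near), rms_level_db(far))
```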
digital image computing: techniques and applications | 2009
Mohammad Obaid; Ramakrishnan Mukundan; Roland Goecke; Mark Billinghurst; Hartmut Seichter
In this paper, we propose a novel approach for recognizing facial expressions that combines an Active Appearance Model facial feature tracking system with the quadratic deformation model representations of facial expressions. Thirty-seven facial feature points are tracked, based on the MPEG-4 Facial Animation Parameters layout. The proposed approach relies on Euclidean distance measures between the tracked feature points and the reference deformed facial feature points of the six main expressions (smile, sadness, fear, disgust, surprise, and anger). An evaluation with 30 model subjects, selected randomly from the Cohn-Kanade Database, was carried out. Results show that the six main facial expressions can be recognized with an overall accuracy of 89%. The proposed approach yields promising recognition rates and can be used in real-time applications.
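A minimal sketch of the distance-based classification step described above: the tracked feature points are assigned to the expression whose reference deformed points lie closest in summed Euclidean distance. The reference point sets and the three-point layouts below are hypothetical stand-ins for the 37 tracked MPEG-4 points.

```python
import numpy as np

def classify_expression(tracked_pts, reference_sets):
    """Return the expression whose reference deformed feature points are
    closest (summed Euclidean distance) to the tracked feature points.
    Reference sets are assumed to come from the quadratic deformation
    tables; here they are hypothetical."""
    def dist(ref):
        return np.linalg.norm(tracked_pts - ref, axis=1).sum()
    return min(reference_sets, key=lambda name: dist(reference_sets[name]))

# Hypothetical 3-point layouts standing in for the 37 tracked points.
references = {
    "smile": np.array([[0.3, 0.5], [0.7, 0.5], [0.5, 0.75]]),
    "surprise": np.array([[0.3, 0.45], [0.7, 0.45], [0.5, 0.9]]),
}
tracked = np.array([[0.31, 0.5], [0.69, 0.51], [0.5, 0.76]])
print(classify_expression(tracked, references))  # -> "smile"
```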
human-robot interaction | 2014
Mohammad Obaid; Dieta Kuchenbrandt; Christoph Bartneck
Empathy plays an important role in the interaction between humans and robots. The contagious effect of yawning is moderated by the degree of social closeness and empathy. We propose to analyse the contagion of yawns as an indicator of empathy. We conducted pilot studies to test different experimental procedures for this purpose. We hope to be able to report on experimental results in the near future.
Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems—human factors; J.4 [Social and Behavioral Sciences]: Psychology