
Publication


Featured research published by Ali Arya.


Computer Animation and Virtual Worlds | 2006

Facial actions as visual cues for personality

Ali Arya; Lisa N. Jefferies; James T. Enns; Steve DiPaola

What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space that is based on the orthogonal dimensions of desire for affiliation and displays of social dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived social dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on dominance and affiliation as two parameters that control the facial actions of autonomous animated characters.
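To make the two-parameter control idea concrete, here is a minimal Python sketch of a mapping from dominance and affiliation to facial-action intensities. The action names and linear weights are illustrative assumptions, not values taken from the paper.

# Minimal sketch: mapping a (dominance, affiliation) personality profile to
# facial-action intensities in [0, 1]. Action names and linear weights are
# illustrative assumptions, not values from the paper.
def personality_to_actions(dominance, affiliation):
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        # Head actions spread personality ratings along the dominance axis.
        "head_tilt": clamp(0.5 + 0.5 * dominance),
        "head_turn": clamp(0.5 + 0.5 * dominance),
        "gaze_aversion": clamp(0.5 - 0.5 * dominance),  # assumed submissive cue
        # Expressions spread ratings along the affiliation axis.
        "smile": clamp(0.5 + 0.5 * affiliation),
        "contempt": clamp(0.5 - 0.5 * affiliation),
    }

print(personality_to_actions(0.8, 0.6))  # a dominant, friendly character

Raising the dominance parameter here increases the intensity of head actions, mirroring the paper's finding that stronger head actions raise perceived social dominance.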


Computer Games | 2009

Perceptually Valid Facial Expressions for Character-Based Applications

Ali Arya; Steve DiPaola; Avi Parush

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research was done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native-language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
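The region-based idea lends itself to a short sketch. In the Python snippet below, "facial expression units" are modeled as facial-action sets attached to axis-aligned regions of a three-dimensional emotion space. The region bounds, action lists, and (valence, arousal, dominance) axes are invented placeholders; the paper derives the real units from user experiments.

# Sketch of the region-based idea: "facial expression units" as facial-action
# sets attached to regions of a 3-D emotion space. Region bounds and action
# lists are invented placeholders; the paper derives them from experiments.
from dataclasses import dataclass

@dataclass
class ExpressionUnit:
    name: str
    lo: tuple   # (valence, arousal, dominance) lower corner of the region
    hi: tuple   # upper corner
    actions: tuple
    def contains(self, p):
        return all(l <= x <= h for l, x, h in zip(self.lo, p, self.hi))

UNITS = [
    ExpressionUnit("positive-calm", (0, -1, -1), (1, 0, 1), ("lip_corner_pull",)),
    ExpressionUnit("positive-aroused", (0, 0, -1), (1, 1, 1), ("lip_corner_pull", "eyes_widen")),
    ExpressionUnit("negative-dominant", (-1, -1, 0), (0, 1, 1), ("brow_lower",)),
    ExpressionUnit("negative-submissive", (-1, -1, -1), (0, 1, 0), ("inner_brow_raise",)),
]

def expression_for(point):
    # A mixed emotion is expressed by the union of actions of every unit
    # whose region contains its location, not by averaging two expressions.
    actions = set()
    for unit in UNITS:
        if unit.contains(point):
            actions.update(unit.actions)
    return sorted(actions)

print(expression_for((0.6, 0.7, -0.2)))  # e.g. pleasant surprise, slightly submissive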


Neurocomputing | 2014

Classification and translation of style and affect in human motion using RBF neural networks

S. Ali Etemad; Ali Arya

Human motion can be carried out with a variety of affects or styles, such as happy, sad, energetic, and tired, among many others. Modeling and classifying these styles, and more importantly, translating them from one sequence onto another, has become a popular problem in the fields of graphics, multimedia, and human-computer interaction. In this paper, radial basis functions (RBF) are used to model and extract stylistic and affective features from motion data. We demonstrate that using only a few basis functions per degree of freedom, successful modeling of styles in cycles of human walk can be achieved. Furthermore, we employ an ensemble of RBF neural networks to learn the affective/stylistic features following time warping and principal component analysis. The system learns the components and classifies stylistic motion sequences into distinct affective and stylistic classes. It also utilizes the ensemble of neural networks to learn motion affects and styles such that it can translate them onto neutral input sequences. Experimental results, along with both numerical and perceptual validations, confirm the highly accurate and effective performance of the system.
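As a concrete illustration of the modeling step, this sketch fits a handful of Gaussian radial basis functions to one synthetic degree of freedom of a cyclic motion using linear least squares. The signal and basis widths are invented; the paper works with real motion-capture sequences.

# Sketch: one degree of freedom of a cyclic motion modeled with a few
# Gaussian radial basis functions via linear least squares. The signal and
# widths are synthetic; the paper fits real motion-capture data.
import numpy as np

def rbf_design(t, centers, width):
    """Gaussian RBF activations, shape (len(t), len(centers))."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

t = np.linspace(0, 1, 100)                                    # phase of one gait cycle
signal = np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)  # fake joint angle

centers = np.linspace(0, 1, 6)          # a few basis functions per DOF suffice
Phi = rbf_design(t, centers, width=0.12)
weights, *_ = np.linalg.lstsq(Phi, signal, rcond=None)

print("max reconstruction error:", np.max(np.abs(Phi @ weights - signal)))
# The weight vector is a compact stylistic descriptor for downstream
# classification (the paper uses an ensemble of RBF neural networks).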


International Conference on Computer Graphics and Interactive Techniques | 2006

Emotional remapping of music to facial animation

Steve DiPaola; Ali Arya

We propose a method to extract emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and on our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e., the translation of affective content (e.g., emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set.
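The remapping pipeline can be sketched in a few lines of Python: surface musical features are reduced to a valence/arousal estimate and then remapped onto facial-animation parameters. The feature names, rules, and constants are illustrative only; MusicFace analyzes polyphonic MIDI scores with a far richer rule set.

# Toy affective remapping: musical features -> valence/arousal estimate ->
# facial-animation parameters. Feature names, rules, and constants are
# illustrative; MusicFace analyzes polyphonic MIDI with a richer rule set.
def music_to_emotion(tempo_bpm, mode, loudness):
    """Crude valence/arousal estimate from surface musical features."""
    valence = (1.0 if mode == "major" else -1.0) * min(1.0, loudness)
    arousal = max(-1.0, min(1.0, (tempo_bpm - 90) / 60))  # slow ~ calm, fast ~ excited
    return valence, arousal

def emotion_to_face(valence, arousal):
    """Remap the emotion estimate onto hypothetical facial parameters."""
    return {
        "smile": max(0.0, valence),
        "frown": max(0.0, -valence),
        "eye_openness": 0.5 + 0.5 * arousal,
        "head_motion": 0.5 + 0.5 * arousal,  # livelier head for arousing passages
    }

v, a = music_to_emotion(tempo_bpm=140, mode="major", loudness=0.8)
print(emotion_to_face(v, a))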


International Journal of Game-Based Learning (IJGBL) | 2013

An International Study on Learning and Process Choices in the Global Game Jam

Ali Arya; Jeff Chastine; Jon A. Preston; Allan Fowler

This paper reports the results of an online survey of Global Game Jam (GGJ) participants conducted in January 2012. It expands an earlier survey of a local game jam event and seeks to validate and extend previous studies. The objectives of this survey were collecting demographic information about the GGJ participants, understanding their motivations, studying the effectiveness of the GGJ as a learning and community-building experience, and understanding the process used by GGJ participants to make a computer game in extremely limited time. The survey was done in two phases: pre-jam and post-jam. Collectively, the information in this survey can be used to (1) plan different learning experiences, (2) revise the development process for professional and academic projects, and (3) provide additional elements to game jams or change their structures based on the participants’ comments to make them more fruitful.


EURASIP Journal on Image and Video Processing | 2007

Multispace behavioral model for face-based affective social agents

Ali Arya; Steve DiPaola

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. The personality and mood spaces use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions, through two-dimensional models for personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
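A toy version of the three-space control flow is sketched below; the parameter names and blending weights are assumptions for illustration, whereas the actual system uses two-dimensional psychological models and MPEG-4 facial animation parameters.

# Toy version of the three-space control flow. Parameter names and blending
# weights are assumptions; the real system uses two-dimensional psychological
# models and MPEG-4 facial animation parameters.
def geometry_parameters(task_expression, personality, mood):
    """Blend knowledge (task), personality, and mood into geometry values."""
    params = dict(task_expression)  # the knowledge space proposes base actions
    # Personality applies a persistent bias, e.g. a dominant character's brows.
    params["brow_lower"] = params.get("brow_lower", 0.0) + 0.3 * personality.get("dominance", 0.0)
    # Mood modulates the expressivity of whatever the task calls for.
    gain = 1.0 + 0.5 * mood.get("arousal", 0.0)
    return {name: max(0.0, min(1.0, value * gain)) for name, value in params.items()}

talk = {"jaw_open": 0.4, "smile": 0.2}              # from the knowledge space
extrovert = {"dominance": 0.5, "affiliation": 0.8}  # personality space
cheerful = {"valence": 0.7, "arousal": 0.4}         # mood space
print(geometry_parameters(talk, extrovert, cheerful))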


Asia-Pacific Computer and Human Interaction | 2012

Empirical study of a vision-based depth-sensitive human-computer interaction system

Reza GhasemAghaei; Ali Arya

This paper presents the results of a user study on a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype to be used for our comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented our system with high stability and efficiency by reducing ambient disturbances such as noise and dependence on lighting conditions. In our prototype, we designed a capable algorithm using the NITE toolkit to recognize arm gestures. Finally, through a comprehensive user experiment, we compared our natural arm gestures to conventional input devices (mouse/keyboard), for simple and complicated tasks and in two different situations (small and big-screen displays), measuring precision, efficiency, ease of use, pleasantness, fatigue, naturalness, and overall satisfaction, to test the following hypothesis: on a WIMP user interface, gesture-based input is superior to the mouse/keyboard when using a big screen. Our empirical investigation also shows that gestures are more natural and pleasant to use than the mouse/keyboard. However, arm gestures can cause more fatigue than the mouse.
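The following self-contained Python fragment illustrates, on a synthetic frame, why depth sensing removes the lighting dependence mentioned above. It is not the authors' pipeline, which relies on OpenNI/NITE skeleton tracking of Kinect data; it only shows the depth-thresholding principle.

# Synthetic illustration of why depth sensing removes lighting dependence:
# a forward-reaching hand is segmented by a plain depth threshold. The actual
# system uses OpenNI/NITE skeleton tracking on Kinect frames.
import numpy as np

depth = np.full((480, 640), 2500, dtype=np.uint16)  # background at ~2.5 m (mm units)
depth[200:260, 300:360] = 900                       # hand pushed forward to ~0.9 m

mask = depth < 1200            # anything nearer than 1.2 m counts as a "push"
if mask.any():                 # ambient light cannot affect this test at all
    ys, xs = np.nonzero(mask)
    print("push detected at pixel", (int(ys.mean()), int(xs.mean())))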


International Conference on Human-Computer Interaction | 2013

Design and usability analysis of gesture-based control for common desktop tasks

S. Ali Etemad; Ali Arya

We have designed and implemented a vision-based system capable of interacting with users through natural arm and finger gestures. Using depth-based vision reduces the effect of ambient disturbances such as noise and lighting conditions. Various arm and finger gestures are designed, and a system capable of detecting and classifying these gestures is developed and implemented. Finally, the gesture-recognition routine is linked to a simplified desktop for usability and human-factors studies. Several factors, such as precision, efficiency, ease of use, pleasure, fatigue, naturalness, and overall satisfaction, are investigated in detail. Through different simple and complex tasks, it is concluded that finger-based inputs are superior to arm-based ones in the long run. Furthermore, it is shown that arm gestures cause more fatigue and appear less natural than finger gestures. However, factors such as time, overall satisfaction, and ease of use were not affected by selecting one over the other.


IEEE Transactions on Multimedia | 2007

Face Modeling and Animation Language for MPEG-4 XMT Framework

Ali Arya; Steve DiPaola

This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or be compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
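To suggest the flavor of such a language, the snippet below assembles an FML-like document with Python's standard library. The element and attribute names (seq/par grouping, talk and expression acts) are illustrative guesses in the spirit of the paper, not the actual FML schema.

# A hypothetical FML-like document assembled with Python's standard library.
# Element and attribute names (seq/par grouping, talk and expression acts)
# are illustrative guesses in the spirit of the paper, not the FML schema.
import xml.etree.ElementTree as ET

story = ET.Element("story")
seq = ET.SubElement(story, "seq")    # sequential actions
ET.SubElement(seq, "talk").text = "Hello!"
par = ET.SubElement(seq, "par")      # these two actions run in parallel
ET.SubElement(par, "expression", type="smile", intensity="0.7")
ET.SubElement(par, "head", action="nod")
print(ET.tostring(story, encoding="unicode"))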


Archive | 2016

Gamification of Exercise and Fitness using Wearable Activity Trackers

Zhao Zhao; S. Ali Etemad; Ali Arya

Wearable technology is a growing industry with significant potential in different aspects of health and fitness. Gamification of health and fitness, on the other hand, has recently become a popular field of research. Accordingly, we believe that wearable devices have the potential to be utilized towards gamification of fitness and exercise. In this paper, we first review several popular activity-tracking wearable devices, their characteristics and specifications, and the capabilities and availability of their application programming interfaces (APIs), which enable them to be employed by third-party developers for the purpose at hand. The feasibility and potential advantages of utilizing wearables for gamification of health and fitness are then discussed. Finally, we develop a pilot prototype as a case study for this concept, and perform preliminary user studies that help further explore the proposed concept.
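A minimal sketch of the gamification loop follows, assuming a hypothetical fetch_daily_steps stand-in for a vendor's web API (each tracker's real REST API differs):

# Sketch of the gamification loop: daily step counts from a tracker turned
# into game points with a streak bonus. fetch_daily_steps is a hypothetical
# stand-in; each vendor's real web API differs.
def fetch_daily_steps(user_id):
    """Placeholder for a call to a wearable vendor's web API."""
    return [4200, 8100, 10050, 12000, 9800, 11000, 3000]  # last 7 days

def score(steps, daily_goal=10000):
    points, streak = 0, 0
    for day in steps:
        points += day // 100                  # base points per 100 steps
        streak = streak + 1 if day >= daily_goal else 0
        points += 50 * streak                 # escalating bonus for goal streaks
    return points

print("weekly score:", score(fetch_daily_steps("demo-user")))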

Collaboration


Dive into Ali Arya's collaborations.

Top Co-Authors

Babak Hamidzadeh

University of British Columbia
