Publication


Featured research published by Ali-Akbar Samadani.


IEEE Transactions on Affective Computing | 2013

Body Movements for Affective Expression: A Survey of Automatic Recognition and Generation

Michelle Karg; Ali-Akbar Samadani; Rob Gorbet; Kolja Kühnlenz; Jesse Hoey; Dana Kulic

Body movements communicate affective expressions and, in recent years, computational models have been developed to recognize affective expressions from body movements or to generate movements for virtual agents or robots that convey affective expressions. This survey summarizes the state of the art in automatic recognition and generation of such movements. For both automatic recognition and generation, important aspects such as the movements analyzed, the affective state representation used, and the use of notation systems are discussed. The survey concludes with an outline of open problems and directions for future work.


Affective Computing and Intelligent Interaction | 2013

Laban Effort and Shape Analysis of Affective Hand and Arm Movements

Ali-Akbar Samadani; Sarahjane Burton; Rob Gorbet; Dana Kulic

The Laban Effort and Shape components provide a systematic tool for a compact and informative description of the dynamic qualities of movements. To enable the application of Laban notation in computational movement analysis, measurable physical correlates of Effort and Shape components need to be identified. Such physical correlates enable quantification of Effort and Shape components, which in turn facilitates computational analysis of affective movements. In this work, two existing approaches to quantification of Laban Effort components (Weight, Time, Space, and Flow) based on measurable movement features (position, velocity, acceleration, and jerk) are adapted for hand and arm movements, and a new approach for quantifying Shape Directional based on the average trajectory curvature is proposed. The results show a high correlation between Laban annotations provided by a certified movement analyst (CMA) and the quantified Effort Weight (81%), Time (77%) and Shape Directional (93%) for an affective hand and arm movement dataset.
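
As one concrete illustration of the quantification idea, the sketch below derives curvature- and derivative-based descriptors from a motion-capture trajectory. The feature definitions, sampling rate, and the mapping of each statistic to an Effort or Shape component are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def movement_descriptors(traj, fs=100.0):
    """Illustrative Laban-style descriptors for a 3D marker trajectory.

    traj: (T, 3) array of positions; fs: sampling rate in Hz.
    The average curvature is one possible proxy for Shape Directional;
    velocity/acceleration/jerk statistics are generic Effort-style features.
    """
    dt = 1.0 / fs
    vel = np.gradient(traj, dt, axis=0)    # velocity
    acc = np.gradient(vel, dt, axis=0)     # acceleration
    jerk = np.gradient(acc, dt, axis=0)    # jerk

    speed = np.linalg.norm(vel, axis=1)
    # Curvature of a space curve: |v x a| / |v|^3 (guarded against near-zero speed).
    curvature = (np.linalg.norm(np.cross(vel, acc), axis=1)
                 / np.maximum(speed, 1e-8) ** 3)

    return {
        "mean_speed": speed.mean(),                          # Effort Time proxy
        "mean_accel": np.linalg.norm(acc, axis=1).mean(),    # Effort Weight proxy
        "mean_jerk": np.linalg.norm(jerk, axis=1).mean(),    # Effort Flow proxy
        "mean_curvature": curvature.mean(),                  # Shape Directional proxy
    }

# Example: a smooth helical hand path
t = np.linspace(0, 2 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
print(movement_descriptors(traj, fs=100.0))
```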


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Hand Gesture Recognition Based on Surface Electromyography

Ali-Akbar Samadani; Dana Kulic

Human hands are the most dexterous of human limbs, and hand gestures play an important role in non-verbal communication. The electromyograms underlying hand gestures provide a wealth of information based on which different hand gestures can be recognized. This paper develops an inter-individual hand gesture recognition model based on hidden Markov models that receives surface electromyography (sEMG) signals as inputs and predicts the corresponding hand gesture. The developed recognition model is tested on a dataset of 10 different hand gestures performed by 25 subjects using leave-one-subject-out cross-validation, achieving an inter-individual recognition rate of 79%. The promising recognition rate demonstrates the efficacy of the proposed approach for discriminating between gesture-specific sEMG signals and could inform the design of sEMG-controlled prostheses and assistive devices.
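
A minimal sketch of max-likelihood HMM classification in this spirit is shown below, assuming the hmmlearn library and generic feature sequences; the paper's HMM topology, sEMG features, and preprocessing are not reproduced here.

```python
# One Gaussian HMM per gesture; a new sequence is labelled by the model
# that assigns it the highest log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_hmms(sequences_by_gesture, n_states=4):
    """sequences_by_gesture: {label: [ (T_i, D) feature arrays ]}"""
    models = {}
    for label, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Return the gesture label whose HMM best explains the sequence."""
    return max(models, key=lambda label: models[label].score(seq))
```

In a leave-one-subject-out evaluation, the per-gesture HMMs would be fit on all but one subject's recordings and then used to classify the held-out subject's sequences.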


Pattern Recognition Letters | 2013

Discriminative functional analysis of human movements

Ali-Akbar Samadani; Ali Ghodsi; Dana Kulic

This paper investigates the use of statistical dimensionality reduction (DR) techniques for discriminative low-dimensional embedding to enable affective movement recognition. Human movements are defined by a collection of sequential observations (time-series features) representing body joint angle or joint Cartesian trajectories. In this work, these sequential observations are modelled as temporal functions using B-spline basis function expansion, and dimensionality reduction techniques are adapted for application to the functional observations. The DR techniques adapted here are: Fisher discriminant analysis (FDA), supervised principal component analysis (PCA), and Isomap. These functional DR techniques, along with functional PCA, are applied to affective human movement datasets and their performance is evaluated using leave-one-out cross-validation with a one-nearest-neighbour classifier in the corresponding low-dimensional subspaces. The results show that functional supervised PCA outperforms the other DR techniques examined in terms of classification accuracy and time resource requirements.
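
The sketch below illustrates the overall recipe: each trajectory is re-expressed by B-spline coefficients, reduced to a low-dimensional subspace, and classified with a one-nearest-neighbour rule under leave-one-out cross-validation. Plain PCA stands in for the supervised PCA variant, and the knot count, spline degree, and subspace dimension are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splrep
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

def bspline_coeffs(series, n_knots=8, degree=3):
    """Fit a least-squares B-spline with fixed interior knots; return its coefficients."""
    t = np.linspace(0, 1, len(series))
    knots = np.linspace(0, 1, n_knots + 2)[1:-1]     # interior knots only
    tck = splrep(t, series, t=knots, k=degree)
    return tck[1][: len(knots) + degree + 1]          # drop FITPACK zero padding

def loocv_accuracy(movements, labels, n_components=5):
    """movements: list of 1-D trajectories; labels: affective classes."""
    X = np.array([bspline_coeffs(m) for m in movements])
    clf = make_pipeline(PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=1))
    return cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
```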


International Journal of Social Robotics | 2013

Perception and Generation of Affective Hand Movements

Ali-Akbar Samadani; Eric Kubica; Rob Gorbet; Dana Kulic

Perception and generation of affective movements are essential for achieving the expressivity required for a fully engaging human-machine interaction. This paper develops a computational model for recognizing and generating affective hand movements for display on anthropomorphic and non-anthropomorphic structures. First, time-series features of these movements are aligned and converted to fixed-length vectors using piecewise linear resampling. Next, a feature transformation best capable of discriminating between the affective movements is obtained using functional principal component analysis (FPCA). The resulting low-dimensional feature transformation is used for classification and regeneration. A dataset consisting of one movement type, closing and opening the hand, is considered for this study. Three different expressions, sadness, happiness and anger, were conveyed by a demonstrator through the same general movement. The performance of the developed model is evaluated objectively using leave-one-out cross-validation and subjectively through a user study, where participants evaluated the regenerated affective movements as well as the original affective movements reproduced both on a human-like model and a non-anthropomorphic structure. The proposed approach achieves zero leave-one-out cross-validation errors on both the training and testing sets. No significant difference is observed between participants' evaluations of the regenerated and the original movements, which confirms successful regeneration of the affective movements. Furthermore, a significant effect of structure on the perception of affective movements is observed.
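
The alignment step, converting variable-length recordings to fixed-length vectors by piecewise linear resampling, can be sketched as follows; the target length of 50 samples is an illustrative assumption.

```python
import numpy as np

def resample_fixed_length(series, n_samples=50):
    """Linearly resample a 1-D time series onto n_samples evenly spaced points."""
    src = np.linspace(0.0, 1.0, len(series))
    dst = np.linspace(0.0, 1.0, n_samples)
    return np.interp(dst, src, series)

def resample_trajectory(traj, n_samples=50):
    """Resample each dimension of a (T, D) trajectory and concatenate into one vector."""
    return np.concatenate([resample_fixed_length(traj[:, d], n_samples)
                           for d in range(traj.shape[1])])
```

The resulting fixed-length vectors can then be fed to FPCA for classification and regeneration.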


IEEE Transactions on Human-Machine Systems | 2014

Affective Movement Recognition Based on Generative and Discriminative Stochastic Dynamic Models

Ali-Akbar Samadani; Rob Gorbet; Dana Kulic

For an engaging human-machine interaction, machines need to be equipped with affective communication abilities. Such abilities enable interactive machines to recognize the affective expressions of their users and respond appropriately through different modalities, including movement. This paper focuses on bodily expressions of affect and presents a new computational model for affective movement recognition that is robust to kinematic, interpersonal, and stochastic variations in affective movements. The proposed approach derives a stochastic model of the affective movement dynamics using hidden Markov models (HMMs). The resulting HMMs are then used to derive a Fisher score representation of the movements, which is subsequently used to optimize affective movement recognition using support vector machine classification. In addition, this paper presents an approach to obtain a minimal discriminative representation of the movements using supervised principal component analysis (SPCA) based on the Hilbert-Schmidt independence criterion in the Fisher score space. The dimensions of the resulting SPCA subspace consist of intrinsic movement features salient to affective movement recognition. These salient features enable a low-dimensional encoding of observed movements during a human-machine interaction, which can be used to recognize and analyze human affect displayed through movement. The efficacy of the proposed approach in recognizing affective movements and identifying a minimal discriminative movement representation is demonstrated using two challenging affective movement datasets.
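
The shape of this pipeline (an HMM dynamics model, a Fisher-score representation, and an SVM on top) can be sketched as below. Only the gradient with respect to the HMM state means is computed, with unit covariance scaling, which is a simplification of the full Fisher score; hmmlearn and all parameter choices are illustrative assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

def fisher_score_means(model, seq):
    """Approximate Fisher score w.r.t. the HMM state means for one movement."""
    gamma = model.predict_proba(seq)               # (T, n_states) state posteriors
    diffs = seq[:, None, :] - model.means_[None]   # (T, n_states, D)
    return (gamma[..., None] * diffs).sum(axis=0).ravel()

def fit_recognizer(train_seqs, train_labels, n_states=5):
    """Fit one HMM on all training movements, map movements to Fisher-style
    scores, then train an SVM on those fixed-length vectors."""
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    hmm.fit(np.vstack(train_seqs), [len(s) for s in train_seqs])
    X = np.array([fisher_score_means(hmm, s) for s in train_seqs])
    svm = SVC(kernel="rbf").fit(X, train_labels)
    return hmm, svm
```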


Dance Notations and Robot Motion | 2016

Laban Movement Analysis and Affective Movement Generation for Robots and Other Near-Living Creatures

Sarah Jane Burton; Ali-Akbar Samadani; Rob Gorbet; Dana Kulic

This manuscript describes an approach, based on Laban Movement Analysis, to generate compact and informative representations of movement to facilitate affective movement recognition and generation for robots and other artificial embodiments. We hypothesize that Laban Movement Analysis, a comprehensive and systematic approach for describing movement, is an excellent candidate for deriving a low-dimensional representation of movement that facilitates affective motion modeling. First, we review the dimensions of Laban Movement Analysis most relevant for capturing movement expressivity and propose an approach to compute an estimate of the Shape and Effort components of Laban Movement Analysis using data obtained from motion capture. Within a motion capture environment, a professional actor reproduced prescribed motions, imbuing them with different emotions. The proposed approach was compared with a Laban coding by a certified movement analyst (CMA). The results show a strong correlation between the automatic Laban quantification and the CMA-generated Laban quantification of the movements. Based on these results, we describe an approach for the automatic generation of affective movements, by adapting pre-defined motion paths to overlay affective content. The proposed framework is validated through cross-validation and perceptual user studies. The proposed approach has great potential for application in fields including robotics, interactive art, animation and dance/acting training.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

The effect of aging on human brain spatial processing performance

Ali-Akbar Samadani; Zahra Moussavi

Cognitive abilities, like other mental or physical abilities, develop and change over the course of life. Spatial processing is a cognitive ability that has been suggested to deteriorate with age [1]. The focus of this paper is to study how real-time egocentric spatial processing in humans changes with age. For this, an interactive computer game played with a 2-DOF manipulandum was designed. The game consists of goal-oriented motor tasks in which the player must move the robot arm toward a desired spatial cue in a virtual 2D environment on each trial, for a total of 72 trials. The spatial cues are four final destinations and a starting location, which change position in every new trial of the game. 37 individuals with no cognitive impairment, split into three groups by age range (children, young adults, and elderly), participated in this experiment. Their spatial processing ability was assessed and compared between the age groups. The results show that the children (7-12 y) and elderly (65+ y) performed very similarly to each other and significantly worse than the young adult participants.


Neuroscience Research | 2016

An online three-class Transcranial Doppler ultrasound brain computer interface

Anuja Goyal; Ali-Akbar Samadani; Anne-Marie Guerguerian; Tom Chau

Brain-computer interfaces (BCIs) can provide communication opportunities for individuals with severe motor disabilities. Transcranial Doppler ultrasound (TCD) measures cerebral blood flow velocities and can be used to develop a BCI. A previously implemented TCD BCI system used verbal and spatial tasks as control signals; however, the spatial task involved a visual cue that awkwardly diverted the user's attention away from the communication interface. Therefore, vision-independent right-lateralized tasks were investigated. Using a bilateral TCD BCI, ten participants controlled an on-screen keyboard online using a left-lateralized task (verbal fluency), a right-lateralized task (fist motor imagery or 3D-shape tracing), and unconstrained rest. 3D-shape tracing was generally more discernible from the other tasks than was fist motor imagery. Verbal fluency, 3D-shape tracing and unconstrained rest were distinguished from each other using a linear discriminant classifier, achieving a mean agreement of κ=0.43±0.17. These rates are comparable to the best offline three-class TCD BCI accuracies reported thus far. The online communication system achieved a mean information transfer rate (ITR) of 1.08±0.69 bits/min, with values reaching up to 2.46 bits/min, thereby exceeding the ITR of previous online TCD BCIs. These findings demonstrate the potential of a three-class online TCD BCI that does not require visual task cues.
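
Information transfer rates like those quoted above are commonly computed with the standard Wolpaw formula; a small helper under that assumption is sketched below (the paper's exact ITR definition may differ).

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute for an
    n_classes selection task with the given per-selection accuracy and duration."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p == 0:
        bits = 0.0  # simplification: chance-or-worse performance treated as 0 bits
    return bits * (60.0 / trial_seconds)

# e.g. a three-class selection at 70% accuracy with 25 s per selection
print(round(wolpaw_itr(3, 0.70, 25.0), 2), "bits/min")
```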


Archive | 2016

Segmentation by Data Point Classification Applied to Forearm Surface EMG

Jonathan Feng-Shun Lin; Ali-Akbar Samadani; Dana Kulic

Recent advances in wearable technologies have led to the development of new modalities for human-machine interaction, such as gesture-based interaction via surface electromyography (EMG). An important challenge when performing EMG gesture recognition is to temporally segment the individual gestures from continuously recorded time-series data. This paper proposes an approach for EMG data segmentation that formulates the segmentation problem as a classification task, where a classifier is used to label each data point as either a segment point or a non-segment point. The proposed EMG segmentation approach is used to recognize 9 hand gestures from forearm EMG data of 10 participants, and a balanced accuracy of 83% is achieved.
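
The framing described above, labelling each time step as a segment point or non-segment point from a short window of EMG samples and scoring with balanced accuracy, can be sketched as follows; the window length, features, and classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

def windowed_features(emg, half_width=15):
    """Stack a +/- half_width window of samples around each time step.
    emg: (T, channels) array."""
    padded = np.pad(emg, ((half_width, half_width), (0, 0)), mode="edge")
    return np.array([padded[t:t + 2 * half_width + 1].ravel()
                     for t in range(len(emg))])

def train_point_classifier(emg, point_labels):
    """point_labels: (T,) binary labels, 1 = segment boundary point."""
    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
    clf.fit(windowed_features(emg), point_labels)
    return clf

def evaluate(clf, emg, point_labels):
    """Balanced accuracy of the per-point predictions on held-out data."""
    pred = clf.predict(windowed_features(emg))
    return balanced_accuracy_score(point_labels, pred)
```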

Collaboration


Top co-authors of Ali-Akbar Samadani.

Dana Kulic (University of Waterloo)
Rob Gorbet (University of Waterloo)
Tom Chau (University of Toronto)
Eric Kubica (University of Waterloo)
Nicole Proulx (Holland Bloorview Kids Rehabilitation Hospital)
Ali Ghodsi (University of Waterloo)
Anuja Goyal (Holland Bloorview Kids Rehabilitation Hospital)