
Publications


Featured research published by Mohammad H. Mahoor.


IEEE Transactions on Affective Computing | 2013

DISFA: A Spontaneous Facial Action Intensity Database

Seyed Mohammad Mavadati; Mohammad H. Mahoor; Kevin Bartlett; Philip Trinh; Jeffrey F. Cohn

Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver Intensity of Spontaneous Facial Action (DISFA) database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for the presence, absence, and intensity of facial action units according to the Facial Action Coding System (FACS). Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combination to comprise more molar facial expressions. To provide a baseline for future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.
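The 0-to-5 intensity labels described above lend themselves to simple per-frame processing. Below is a minimal Python sketch of loading such labels; the one-pair-per-line file layout and the filename are assumptions for illustration, not DISFA's actual distribution format.

```python
# Minimal sketch of loading per-frame AU intensity labels such as DISFA's.
# The file layout (one "frame,intensity" pair per line, one file per AU)
# is an assumption for illustration; consult the database documentation
# for the actual distribution format.
from pathlib import Path

def load_au_intensities(label_file):
    """Return {frame_number: intensity} for one action unit, intensity in 0..5."""
    intensities = {}
    for line in Path(label_file).read_text().splitlines():
        frame, level = line.split(",")
        intensities[int(frame)] = int(level)
    return intensities

# Example: frames where AU12 (lip corner puller) is present at intensity >= 2.
# au12 = load_au_intensities("SN001_au12.txt")   # hypothetical filename
# active = [f for f, v in au12.items() if v >= 2]
```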


Computer Vision and Pattern Recognition | 2009

A framework for automated measurement of the intensity of non-posed Facial Action Units

Mohammad H. Mahoor; Steven Cadavid; Daniel S. Messinger; Jeffrey F. Cohn

This paper presents a framework to automatically measure the intensity of naturally occurring facial actions. Naturalistic expressions are non-posed, spontaneous actions. The Facial Action Coding System (FACS) is the gold-standard technique for describing facial expressions, which are parsed into comprehensive, non-overlapping action units (AUs). AUs have intensities ranging from absent to maximal on a six-point metric (i.e., 0 to 5). Despite efforts to recognize the presence of non-posed action units, measuring their intensity has not been studied comprehensively. In this paper, we develop a framework to measure the intensity of AU12 (lip corner puller) and AU6 (cheek raising) in videos captured from live face-to-face infant-mother interactions. AU12 and AU6 are among the most challenging cases in infant expressions (e.g., because of the low facial texture of an infant's face). One of the problems in facial image analysis is the high dimensionality of the visual data. Our approach to this problem is to use the spectral regression technique to project high-dimensional facial images into a low-dimensional space. The facial images represented in the low-dimensional space are then used to train support vector machine classifiers that predict the intensity of the action units. Analysis of 18 minutes of captured video of non-posed facial expressions of several infants and mothers shows significant agreement between a human FACS coder and our approach, making it an efficient method for automated measurement of the intensity of non-posed facial action units.
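The pipeline described above, dimensionality reduction followed by SVM prediction of the 0-to-5 intensity, can be sketched as follows. Scikit-learn's PCA stands in for the paper's spectral regression projection, which is not available off the shelf, and the data are random stand-ins.

```python
# Sketch of the intensity-estimation pipeline: dimensionality reduction
# followed by an SVM over the six intensity levels (0..5). PCA stands in
# for the paper's spectral regression projection, which scikit-learn
# does not provide out of the box.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4096))      # stand-in for vectorized face images
y = rng.integers(0, 6, size=300)      # AU intensity labels on the 0..5 scale

model = make_pipeline(PCA(n_components=30), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]))           # predicted intensities for 5 frames
```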


Workshop on Applications of Computer Vision | 2016

Going deeper in facial expression recognition using deep neural networks

Ali Mollahosseini; David M. Chan; Mohammad H. Mahoor

Automated Facial Expression Recognition (FER) has remained a challenging and interesting problem in computer vision. Despite efforts to develop various methods for FER, existing approaches lack generalizability when applied to unseen images or to images captured in the wild (i.e., the results are not significant). Most existing approaches are based on engineered features (e.g., HOG, LBPH, and Gabor) whose classifier hyper-parameters are tuned to give the best recognition accuracy on a single database or a small collection of similar databases. This paper proposes a deep neural network architecture to address the FER problem across multiple well-known standard face datasets. Specifically, our network consists of two convolutional layers, each followed by max pooling, and then four Inception layers. The network is a single-component architecture that takes registered facial images as input and classifies them into one of the six basic expressions or the neutral expression. We conducted comprehensive experiments on seven publicly available facial expression databases, viz. MultiPIE, MMI, CK+, DISFA, FERA, SFEW, and FER2013. The results of our proposed architecture are comparable to or better than state-of-the-art methods and better than traditional convolutional neural networks in both accuracy and training time.
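A PyTorch sketch of the stated topology, two convolution/max-pooling stages followed by Inception-style blocks and a seven-way classifier, appears below. Filter counts, kernel sizes, and input resolution are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of the described topology: two convolution + max-pooling stages
# followed by Inception-style blocks and a 7-way classifier (six basic
# expressions plus neutral). All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Simplified GoogLeNet-style block: parallel 1x1, 3x3, 5x5, and pooled paths."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 4
        self.b1 = nn.Conv2d(in_ch, branch, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch, 1),
                                nn.Conv2d(branch, branch, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch, 1),
                                nn.Conv2d(branch, branch, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x),
                                   self.b5(x), self.bp(x)], dim=1))

class FERNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            InceptionBlock(128, 128), InceptionBlock(128, 256),
            InceptionBlock(256, 256), InceptionBlock(256, 512),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(512, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = FERNet()(torch.randn(2, 1, 96, 96))   # two grayscale faces -> (2, 7)
```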


Pattern Recognition | 2014

Human activity recognition using multi-features and multiple kernel learning

Salah Althloothi; Mohammad H. Mahoor; Xiao Zhang; Richard M. Voyles

This paper presents two sets of features, shape representation and kinematic structure, for human activity recognition from a sequence of RGB-D images. The shape features are extracted using the depth information in the frequency domain via a spherical harmonics representation. The other features capture the motion of the 3D joint positions (i.e., the end points of the distal limb segments) in the human body. Both sets of features are fused using the Multiple Kernel Learning (MKL) technique at the kernel level for human activity recognition. Our experiments on three publicly available datasets demonstrate that the proposed features are robust for human activity recognition, particularly when there are similarities among the actions.
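Kernel-level fusion of the two feature sets can be sketched as a weighted sum of per-feature kernels feeding a precomputed-kernel SVM. Note that a true MKL solver would learn the weight jointly with the classifier, whereas this sketch fixes it by hand, and all data here are random stand-ins.

```python
# Sketch of kernel-level fusion of two feature sets. A full MKL solver learns
# the kernel weights jointly with the classifier; here the weight is fixed by
# hand, capturing the fusion structure but not the learning of the weights.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_shape = rng.normal(size=(200, 120))   # stand-in: spherical-harmonics shape features
X_joints = rng.normal(size=(200, 60))   # stand-in: 3D joint-motion features
y = rng.integers(0, 8, size=200)        # activity labels

w = 0.6                                  # fusion weight (would be learned by MKL)
K = w * rbf_kernel(X_shape) + (1 - w) * rbf_kernel(X_joints)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K[:5]))                # predictions for the first 5 samples
```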


Infancy | 2009

Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study.

Daniel S. Messinger; Mohammad H. Mahoor; Sy-Miin Chow; Jeffrey F. Cohn

Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old infant-mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action.
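The nonstationary local association mentioned above is the kind of quantity a sliding-window correlation exposes. The sketch below computes one over synthetic infant and mother smile-intensity series, with the window length an arbitrary assumption.

```python
# Sketch of the kind of analysis described: a sliding-window correlation
# between infant and mother smile-intensity time series, exposing local
# (nonstationary) patterns of association. Window length is an assumption.
import numpy as np

def windowed_correlation(a, b, win=150):
    """Pearson r of a vs. b over sliding windows of `win` frames."""
    r = []
    for t in range(len(a) - win):
        r.append(np.corrcoef(a[t:t + win], b[t:t + win])[0, 1])
    return np.array(r)

rng = np.random.default_rng(0)
t = np.arange(3000)                                  # frames of interaction
infant = np.sin(t / 200) + 0.3 * rng.normal(size=t.size)
mother = np.sin(t / 200 + 0.5) + 0.3 * rng.normal(size=t.size)
r = windowed_correlation(infant, mother)
print(r.min(), r.max())   # association strength varies over the interaction
```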


Pattern Recognition | 2005

Classification and numbering of teeth in dental bitewing images

Mohammad H. Mahoor; Mohamed Abdel-Mottaleb

We present an algorithm to classify and assign numbers to teeth in bitewing dental images. The goal is to use the result of this algorithm in an automated dental identification system. We use Bayesian classification to classify the teeth in a bitewing image into molars and premolars and assign an absolute number to each tooth based on the common numbering system used in dentistry. Fourier descriptors (FDs) of the teeth contours are used as features in the Bayesian classification. After the Bayesian classification, the spatial relation between the two types of teeth is used to number each tooth and to correct the misclassification of some teeth in order to obtain high-precision results. A comparison between two kinds of FDs was performed to select the better method for teeth classification. Experiments with 50 bitewing images containing more than 400 teeth show that our method is capable of classifying the teeth and assigning absolute index numbers to them with high accuracy.
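The feature-plus-classifier pattern described above can be sketched as follows: Fourier descriptors computed from a closed contour via the FFT, fed to a naive Bayes classifier as a simple stand-in for the paper's Bayesian classification. The descriptor normalization and the toy data are illustrative assumptions.

```python
# Sketch of contour-based tooth classification: Fourier descriptors computed
# from a closed contour via the FFT, fed to a (naive) Bayesian classifier.
# The normalization and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fourier_descriptors(contour, n_coeffs=10):
    """contour: (N, 2) boundary points; returns scale-normalized magnitudes
    of the first Fourier coefficients."""
    z = contour[:, 0] + 1j * contour[:, 1]      # complex boundary signal
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / (np.abs(coeffs[1]) + 1e-12)   # scale invariance

rng = np.random.default_rng(0)
# Stand-in training data: descriptor vectors for molars (0) and premolars (1).
X = rng.normal(size=(100, 10))
y = rng.integers(0, 2, size=100)
clf = GaussianNB().fit(X, y)

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
tooth = np.c_[np.cos(theta), 1.5 * np.sin(theta)]    # toy elliptical contour
print(clf.predict([fourier_descriptors(tooth)]))     # 0 = molar, 1 = premolar
```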


Face and Gesture | 2011

Facial action unit recognition with sparse representation

Mohammad H. Mahoor; Mu Zhou; Kevin L. Veon; S. Mohammad Mavadati; Jeffrey F. Cohn

This paper presents a novel framework for recognizing combinations of facial action units (AUs) by viewing the classification as a sparse representation problem. Within this framework, a facial image exhibiting a combination of AUs is represented as a sparse linear combination of basis elements constituting an overcomplete dictionary. We build an overcomplete dictionary whose main elements are the mean Gabor features of the AU combinations under examination. The other elements of the dictionary are randomly sampled from a distribution (e.g., a Gaussian distribution) that guarantees sparse signal recovery. By solving an L1-norm minimization, a facial image is then represented as a sparse vector that is used to distinguish the various AU patterns. Given the sparse representation, classification reduces to finding the maximal entry: the index of the maximal value of the sparse vector is taken as the class label of the facial image under test. Extensive experiments on the Cohn-Kanade facial expression database demonstrate that this sparse learning framework is promising for recognizing AU combinations.
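The classification rule, sparse coding against an overcomplete dictionary followed by an argmax over the code, can be sketched as below. Scikit-learn's Lasso serves as a stand-in for the exact L1-norm minimization, and the dictionary contents are synthetic.

```python
# Sketch of sparse-representation classification: a test feature vector is
# coded against an overcomplete dictionary whose leading atoms are per-class
# mean features, padded with random Gaussian atoms. Lasso stands in for the
# paper's L1-norm minimization; the argmax of the code gives the label.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_classes, n_random = 256, 6, 64
class_means = rng.normal(size=(d, n_classes))    # stand-in mean Gabor features
random_atoms = rng.normal(size=(d, n_random))    # atoms aiding sparse recovery
D = np.hstack([class_means, random_atoms])
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms

x = class_means[:, 3] + 0.1 * rng.normal(size=d) # noisy sample of class 3
x /= np.linalg.norm(x)

code = Lasso(alpha=0.01, max_iter=10000).fit(D, x).coef_
print(int(np.argmax(np.abs(code[:n_classes]))))  # expect class 3
```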


Image and Vision Computing | 2014

Nonverbal Social Withdrawal in Depression: Evidence from manual and automatic analyses

Jeffrey M. Girard; Jeffrey F. Cohn; Mohammad H. Mahoor; S. Mohammad Mavadati; Zakia Hammal; Dean P. Rosenwald

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
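The head-motion measures mentioned above, amplitude and velocity, reduce to simple operations on a pose time series, as in this sketch. The frame rate, the use of a single pitch angle, and the synthetic signal are illustrative assumptions.

```python
# Sketch of the head-motion measures: amplitude and velocity derived from a
# head-pose angle time series. Frame rate and the use of pitch alone are
# illustrative assumptions.
import numpy as np

fps = 30.0
rng = np.random.default_rng(0)
pitch = np.cumsum(0.2 * rng.normal(size=900)) / 10   # degrees, ~30 s of video

amplitude = pitch.max() - pitch.min()                # overall range of motion
velocity = np.abs(np.diff(pitch)) * fps              # degrees per second
print(amplitude, velocity.mean())
```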


International Conference on Automatic Face and Gesture Recognition | 2006

Facial features extraction in color images using enhanced active shape model

Mohammad H. Mahoor; Mohamed Abdel-Mottaleb

In this paper, we present an improved active shape model (ASM) for facial feature extraction. The original ASM method developed by Cootes et al. relies heavily on the initialization and on the representation of the local structure of the facial features in the image. We use color information to improve the ASM approach for facial feature extraction. The color information is used to localize the centers of the mouth and the eyes to assist the initialization step. Moreover, we model the local structure of the feature points in the RGB color space. In addition, we use a 2D affine transformation to align facial features that are perturbed by head pose variations; in effect, the 2D affine transformation compensates for both head pose variations and the projection of 3D data to 2D. Experiments on a face database of 50 subjects show that our approach outperforms the standard ASM and is successful in facial feature extraction.
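The alignment step can be sketched as a least-squares fit of a 2D affine transform between corresponding point sets. The landmark data here are synthetic, and this is only the geometric core of the approach, not the full ASM.

```python
# Sketch of the alignment step: a 2D affine transform fitted by least squares
# between corresponding feature points (e.g., detected landmarks and a mean
# shape), which can absorb in-plane effects of head-pose variation.
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares A (2x3) with dst ~= [src, 1] @ A.T; src, dst are (N, 2)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A.T

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 2))                       # e.g., ASM landmark points
true_A = np.array([[0.9, -0.2, 3.0], [0.1, 1.1, -1.0]])
dst = np.hstack([src, np.ones((20, 1))]) @ true_A.T
print(np.allclose(fit_affine_2d(src, dst), true_A))  # True
```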


Neurosurgery Clinics of North America | 2014

Creating the Feedback Loop: Closed-Loop Neurostimulation

Adam O. Hebb; Jun Jason Zhang; Mohammad H. Mahoor; Christos Tsiokos; Charles Matlack; Howard Jay Chizeck; Nader Pouratian

Current deep brain stimulation (DBS) therapy delivers a train of electrical pulses at fixed stimulation parameters. This open-loop design is effective for movement disorders, but therapy may be further optimized by a closed-loop design. The technology to record biosignals has outpaced our understanding of their relationship to the clinical state of the whole person. Neuronal oscillations may represent or facilitate the cooperative functioning of brain ensembles, and may provide critical information for customizing neuromodulation therapy. This review addresses advances to date, not in the technology per se, but in strategies for applying neuronal signals to trigger or modulate stimulation systems.
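As a toy illustration of the closed-loop idea, the sketch below gates stimulation amplitude on the beta-band power of a simulated local field potential. The band limits, threshold, and control policy are illustrative assumptions, not a clinical algorithm.

```python
# Toy sketch of the closed-loop idea: a biosignal feature (beta-band power
# of a local field potential) gates the stimulation amplitude. Thresholds,
# band limits, and the control policy are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                          # sampling rate, Hz
rng = np.random.default_rng(0)
t = np.arange(int(fs)) / fs
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=t.size)  # 1 s window

f, pxx = welch(lfp, fs=fs, nperseg=256)
beta_power = pxx[(f >= 13) & (f <= 30)].mean()       # 13-30 Hz band power

threshold = 0.01                                     # assumed trigger level
stim_amplitude = 2.0 if beta_power > threshold else 0.5   # mA, illustrative
print(beta_power, stim_amplitude)
```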

Collaboration


Dive into Mohammad H. Mahoor's collaborations.

Top Co-Authors

Adam O. Hebb

University of Washington
