
Publications


Featured research published by Whitney I. Mattson.


Emotion | 2012

The eyes have it: making positive expressions more positive and negative expressions more negative.

Daniel S. Messinger; Whitney I. Mattson; Mohammad H. Mahoor; Jeffrey F. Cohn

Facial expressions frequently involve multiple individual facial actions. How do facial actions combine to create emotionally meaningful expressions? Infants produce positive and negative facial expressions at a range of intensities. It may be that a given facial action can index the intensity of both positive (smiles) and negative (cry-face) expressions. Objective, automated measurements of facial action intensity were paired with continuous ratings of emotional valence to investigate this possibility. Degree of eye constriction (the Duchenne marker) and mouth opening were each uniquely associated with smile intensity and, independently, with cry-face intensity. In addition, degree of eye constriction and mouth opening were each unique predictors of emotion valence ratings. Eye constriction and mouth opening index the intensity of both positive and negative infant facial expressions, suggesting parsimony in the early communication of emotion.
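
The core analysis described above, testing whether each facial action is a unique predictor of rated valence, amounts to a multiple regression with both action intensities entered simultaneously. A minimal sketch follows; the data are synthetic and the variable names are illustrative, not taken from the paper:

```python
# Sketch: are eye constriction and mouth opening each unique predictors of
# continuously rated emotional valence? Synthetic data; illustrative names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
eye_constriction = rng.uniform(0, 5, n)  # hypothetical AU intensity, 0-5 scale
mouth_opening = rng.uniform(0, 5, n)     # hypothetical AU intensity, 0-5 scale
valence = 0.4 * eye_constriction + 0.3 * mouth_opening + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([eye_constriction, mouth_opening]))
fit = sm.OLS(valence, X).fit()
# Each coefficient estimates the unique association of one facial action
# with rated valence, holding the other action constant.
print(fit.summary())
```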


International Conference on Development and Learning | 2012

Intensity measurement of spontaneous facial actions: Evaluation of different image representations

Nazanin Zaker; Mohammad H. Mahoor; Whitney I. Mattson; Daniel S. Messinger; Jeffrey F. Cohn

Intensity measurements of infant facial expressions are central to understanding emotion-mediated interactions and emotional development. We evaluate alternative image representations for automatic measurement of the intensity of spontaneous facial Action Units (AUs) related to infant emotion expression. Twelve infants were video-recorded during face-to-face interactions with their mothers. Facial features were tracked using active appearance models (AAMs) and registered to a canonical view. Three feature representations were compared: shape and grey scale texture, Histogram of Oriented Gradients (HOG), and Local Binary Pattern Histograms (LBPH). To reduce the high dimensionality of the appearance features (grey scale texture, HOG, and LBPH), a non-linear dimensionality reduction algorithm (Laplacian Eigenmaps) was used. For each representation, support vector machine classifiers were used to learn six gradations of AU intensity (0 to maximal). The target AUs were those central to positive and negative infant emotion. Shape plus grey scale texture performed best for AUs that involve non-rigid deformations of permanent facial features (e.g., AU 12 and AU 20). These findings suggest that AU intensity detection may be maximized by choosing the feature representation best suited to each specific AU.
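
One branch of the evaluated pipeline, HOG appearance features reduced with Laplacian Eigenmaps and classified with an SVM over six intensity levels, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes pre-registered grayscale face images, and all parameters are guesses:

```python
# Sketch: HOG features -> Laplacian Eigenmaps -> SVM over six AU-intensity
# levels. Placeholder data; parameter values are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.manifold import SpectralEmbedding  # scikit-learn's Laplacian Eigenmaps
from sklearn.svm import SVC

def hog_features(images):
    # images: registered grayscale face patches, shape (n, H, W)
    return np.array([hog(img, orientations=8, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for img in images])

rng = np.random.default_rng(0)
X_img = rng.random((120, 128, 128))   # placeholder for registered face images
y = rng.integers(0, 6, 120)           # placeholder AU intensity labels (0..5)

X_hog = hog_features(X_img)
X_low = SpectralEmbedding(n_components=10).fit_transform(X_hog)  # non-linear reduction
clf = SVC(kernel="rbf").fit(X_low, y)  # six-gradation intensity classifier
print(clf.score(X_low, y))
```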


Emotion | 2015

Thrill of victory or agony of defeat? Perceivers fail to utilize information in facial movements.

Hillel Aviezer; Daniel S. Messinger; Shiri Zangvil; Whitney I. Mattson; Devon N. Gangi; Alexander Todorov

Although the distinction between positive and negative facial expressions is assumed to be clear and robust, recent research with intense real-life faces has shown that viewers are unable to reliably differentiate the valence of such expressions (Aviezer, Trope, & Todorov, 2012). Yet the fact that viewers fail to distinguish these expressions does not in itself show that the faces are physically identical. In Experiment 1, the muscular activity of victorious and defeated faces was analyzed. Individually coded facial actions, particularly smiling and mouth opening, were more common among winners than losers, indicating an objective difference in facial activity. In Experiment 2, we asked whether supplying participants with valid or invalid information about objective facial activity and valence would alter their ratings. Notwithstanding these manipulations, valence ratings were virtually identical in all groups, and participants failed to differentiate between positive and negative faces. While objective differences between intense positive and negative faces are detectable, human viewers do not utilize these differences in determining valence. These results suggest a surprising dissociation between the information present in expressions and the information used by perceivers.


Affective Computing and Intelligent Interaction | 2013

Head Movement Dynamics during Normal and Perturbed Parent-Infant Interaction

Zakia Hammal; Jeffrey F. Cohn; Daniel S. Messinger; Whitney I. Mattson; Mohammad H. Mahoor

We investigated the dynamics of head motion in parents and infants during an age-appropriate, well-validated emotion induction, the Face-to-Face/Still-Face procedure. Participants were 12 ethnically diverse 6-month-old infants and their mother or father. During infant gaze toward the parent, infant angular amplitude and velocity of pitch and yaw decreased from the face-to-face (FF) to the still-face (SF) episode and remained lower in the following reunion (RE). During infant gaze away from the parent, angular velocity of pitch decreased from FF to SF and remained lower in the RE. Windowed cross-correlation suggested strong bidirectional effects with frequent shifts in the direction of influence. The number of significant positive and negative peaks was higher during FF than RE. Gaze toward and away from the parent was modestly predicted by head orientation. Together, these findings suggest that head motion is strongly related to age-appropriate emotion challenge, that perturbations of normal responsiveness carry over even after the parent resumes normal responsiveness in the reunion, and that there are frequent changes in the direction of influence in the postural domain.
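
Windowed cross-correlation, used here to track moment-to-moment changes in who leads whom, can be sketched as below. The signals are synthetic, and the window and lag settings are assumptions rather than the paper's parameters:

```python
# Sketch: windowed cross-correlation between two head-motion time series.
# The lag of the peak correlation in each window indicates the momentary
# direction of influence; shifts in peak lag mark changes in who leads.
import numpy as np

def windowed_xcorr(a, b, win=100, step=10, max_lag=25):
    """Pearson r between a window of `a` and lag-shifted windows of `b`.
    Returns an (n_windows, n_lags) matrix of correlations."""
    lags = range(-max_lag, max_lag + 1)
    out = []
    for start in range(max_lag, len(a) - win - max_lag, step):
        x = a[start:start + win]
        out.append([np.corrcoef(x, b[start + lag:start + lag + win])[0, 1]
                    for lag in lags])
    return np.asarray(out)

# Synthetic demo: one partner loosely follows the other with a short delay.
rng = np.random.default_rng(1)
parent = rng.normal(size=2000)
infant = np.roll(parent, 5) + rng.normal(scale=0.8, size=2000)
r = windowed_xcorr(parent, infant)
print(r.shape, r.argmax(axis=1)[:10])  # peak-lag index per window
```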


PLOS ONE | 2013

Darwin's Duchenne: eye constriction during infant joy and distress.

Whitney I. Mattson; Jeffrey F. Cohn; Mohammad H. Mahoor; Devon N. Gangi; Daniel S. Messinger

Darwin proposed that smiles with eye constriction (Duchenne smiles) index strong positive emotion in infants, while cry-faces with eye constriction index strong negative emotion. Research has supported Darwin's proposal with respect to smiling, but there has been little parallel research on cry-faces (open-mouth expressions with lateral lip stretching). To investigate the possibility that eye constriction indexes the affective intensity of positive and negative emotions, we first conducted the Face-to-Face/Still-Face (FFSF) procedure at 6 months. In the FFSF, three minutes of naturalistic infant-parent play interaction (which elicits more smiles than cry-faces) are followed by two minutes in which the parent holds an unresponsive still-face (which elicits more cry-faces than smiles). Consistent with Darwin's proposal, eye constriction was associated with stronger smiling and with stronger cry-faces. In addition, the proportion of smiles with eye constriction was higher during the positive-emotion-eliciting play episode than during the still-face. In parallel, the proportion of cry-faces with eye constriction was higher during the negative-emotion-eliciting still-face than during play. These results are consonant with the hypothesis that eye constriction indexes the affective intensity of both positive and negative facial configurations. A preponderance of eye constriction during cry-faces was observed in a second elicitor of intense negative emotion, vaccination injections, at both 6 and 12 months of age. The results support the existence of a Duchenne distress expression that parallels the better-known Duchenne smile. This suggests that eye constriction (the Duchenne marker) has a systematic association with early facial expressions of intense negative and positive emotion.


Social Cognitive and Affective Neuroscience | 2016

Clinical neuroprediction: Amygdala reactivity predicts depressive symptoms 2 years later

Whitney I. Mattson; Luke W. Hyde; Daniel S. Shaw; Erika E. Forbes; Christopher S. Monk

Depression is linked to increased amygdala activation to neutral and negatively valenced facial expressions. Amygdala activation may be predictive of changes in depressive symptoms over time. However, most studies in this area have focused on small, predominantly female, homogeneous clinical samples. Studies are needed to examine how amygdala reactivity relates to the course of depressive symptoms dimensionally, prospectively, and in populations diverse in gender, race, and socioeconomic status. A total of 156 men from predominantly low-income backgrounds completed an fMRI task in which they viewed emotional facial expressions. Left and right amygdala reactivity to neutral, but not angry or fearful, facial expressions relative to a non-face baseline at age 20 predicted greater depressive symptoms 2 years later, controlling for age-20 depressive symptoms. Heightened bilateral amygdala reactivity to neutral facial expressions thus predicted increases in depressive symptoms 2 years later in a large community sample. Neutral facial expressions are affectively ambiguous, and a tendency to interpret these stimuli negatively may reflect cognitive biases that lead to increases in depressive symptoms over time. Individual differences in amygdala reactivity to neutral facial expressions appear to identify those most at risk for a more problematic course of depressive symptoms across time.
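
Prediction of later symptoms "controlling for" baseline symptoms is a standard residualized-change regression. A minimal sketch, with synthetic data and hypothetical variable names, not the study's actual analysis code:

```python
# Sketch: does amygdala reactivity to neutral faces predict depressive
# symptoms 2 years later, beyond baseline symptoms? Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 156
amygdala_neutral = rng.normal(size=n)   # hypothetical contrast: neutral vs. non-face
symptoms_t1 = rng.normal(size=n)        # symptoms at age 20 (baseline)
symptoms_t2 = 0.5 * symptoms_t1 + 0.3 * amygdala_neutral + rng.normal(size=n)

X = sm.add_constant(np.column_stack([symptoms_t1, amygdala_neutral]))
fit = sm.OLS(symptoms_t2, X).fit()
# A reliable coefficient on amygdala_neutral indicates prediction of symptom
# change over and above baseline severity.
print(fit.params, fit.pvalues)
```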


Workshop on Applications of Computer Vision | 2011

Analysis of eye gaze pattern of infants at risk of autism spectrum disorder using Markov models

David Alie; Mohammad H. Mahoor; Whitney I. Mattson; Daniel R. Anderson; Daniel S. Messinger

This paper explores the possibility of using pattern recognition algorithms to analyze the gaze patterns of six-month-old infants at high risk for an autism spectrum disorder (ASD). ASDs, which are typically not diagnosed until 3 years of age, are characterized by communication and interaction impairments that frequently involve disturbances of visual attention and gaze patterning. We used video cameras to record the face-to-face interactions of 32 infant subjects with their parents. The video was manually coded to determine each infant's gaze pattern by marking where the infant was looking in each frame (either at the parent's face or away from it). To predict ASD diagnosis at three years, we analyzed infant eye gaze patterns at six months. Variable-order Markov models (VMMs) were used to create models for typically developing comparison children as well as children with an ASD. The models correctly classified infants who did and did not develop an ASD diagnosis with an accuracy of 93.75 percent. Employing an assessment tool at a very young age offers the hope of early intervention, potentially mitigating the effects of the disorder throughout the rest of the child's life.
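
The paper's classifier is a variable-order Markov model; as a simplified stand-in, a fixed first-order Markov classifier over binary gaze states can be sketched as follows. The sequences are synthetic, and a real VMM would additionally adapt its context length per state:

```python
# Toy stand-in for the VMM classifier: fit one first-order Markov model per
# group over binary gaze states (1 = at parent's face, 0 = away), then
# classify a new sequence by log-likelihood. Synthetic sequences.
import numpy as np

def fit_transitions(seqs, smoothing=1.0):
    """2x2 transition matrix with Laplace smoothing."""
    counts = np.full((2, 2), smoothing)
    for s in seqs:
        for prev, cur in zip(s[:-1], s[1:]):
            counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def loglik(seq, T):
    return sum(np.log(T[p, c]) for p, c in zip(seq[:-1], seq[1:]))

rng = np.random.default_rng(3)
td_seqs = [rng.choice(2, 300, p=[0.3, 0.7]).tolist() for _ in range(16)]
asd_seqs = [rng.choice(2, 300, p=[0.6, 0.4]).tolist() for _ in range(16)]

T_td, T_asd = fit_transitions(td_seqs), fit_transitions(asd_seqs)
new_seq = rng.choice(2, 300, p=[0.6, 0.4]).tolist()
label = "ASD" if loglik(new_seq, T_asd) > loglik(new_seq, T_td) else "TD"
print(label)
```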


Scientific Reports | 2017

Third-person self-talk facilitates emotion regulation without engaging cognitive control: Converging evidence from ERP and fMRI

Jason S. Moser; Adrienne Dougherty; Whitney I. Mattson; Benjamin Katz; Tim P. Moran; Darwin A. Guevarra; Holly Shablack; Ozlem Ayduk; John Jonides; Marc G. Berman; Ethan Kross

Does silently talking to yourself in the third person constitute a relatively effortless form of self-control? We hypothesized that it does, under the premise that third-person self-talk leads people to think about the self similarly to how they think about others, which provides them with the psychological distance needed to facilitate self-control. We tested this prediction by asking participants to reflect on feelings elicited by viewing aversive images (Study 1) and recalling negative autobiographical memories (Study 2) using either “I” or their own name while measuring neural activity via ERPs (Study 1) and fMRI (Study 2). Study 1 demonstrated that third-person self-talk reduced an ERP marker of self-referential emotional reactivity (i.e., the late positive potential) within the first second of viewing aversive images without enhancing an ERP marker of cognitive control (i.e., the stimulus-preceding negativity). Conceptually replicating these results, Study 2 demonstrated that third-person self-talk was linked with reduced activation in an a priori defined fMRI marker of self-referential processing (i.e., medial prefrontal cortex) when participants reflected on negative memories, without eliciting increased activity in a priori defined fMRI markers of cognitive control. Together, these results suggest that third-person self-talk may constitute a relatively effortless form of self-control.


Developmental Cognitive Neuroscience | 2017

The influence of 5-HTTLPR transporter genotype on amygdala-subgenual anterior cingulate cortex connectivity in autism spectrum disorder.

Francisco Velasquez; Jillian Lee Wiggins; Whitney I. Mattson; Donna M. Martin; Catherine Lord; Christopher S. Monk

Social deficits in autism spectrum disorder (ASD) are linked to amygdala functioning, and functional connectivity between the amygdala and the subgenual anterior cingulate cortex (sACC) is involved in modulating amygdala activity. Behavioral symptoms, amygdala activation, and amygdala connectivity with the sACC appear to vary by serotonin transporter-linked polymorphic region (5-HTTLPR) genotype in diverse populations. The current preliminary investigation examines whether amygdala-sACC connectivity differs by 5-HTTLPR genotype and relates to social functioning in ASD. A sample of 108 children and adolescents (44 with ASD) completed an fMRI face-processing task. Youth with ASD and low expressing 5-HTTLPR genotypes showed significantly greater connectivity than youth with ASD and higher expressing genotypes, as well as typically developing (TD) individuals with both low and higher expressing genotypes, in the comparisons of happy vs. baseline faces and happy vs. neutral faces. Moreover, individuals with ASD and higher expressing genotypes exhibited a negative relationship between amygdala-sACC connectivity and social dysfunction. Altered amygdala-sACC coupling based on 5-HTTLPR genotype may help explain some of the heterogeneity in neural and social function observed in ASD. This is the first ASD study to combine genetic polymorphism analyses and functional connectivity in the context of a social task.


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

A comparison of alternative classifiers for detecting occurrence and intensity in spontaneous facial expression of infants with their mothers

Nazanin Zaker; Mohammad H. Mahoor; Whitney I. Mattson; Daniel S. Messinger; Jeffrey F. Cohn

To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and to measure variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform well for intensity measurement. We compared two dimensionality reduction approaches, Principal Components Analysis with Large Margin Nearest Neighbor (PCA+LMNN) and Laplacian Eigenmap, and two classifiers, SVM and K-Nearest Neighbor. Twelve infants were video-recorded during face-to-face interactions with their mothers. AUs related to positive and negative affect were manually coded from the video by certified FACS coders. Facial features were tracked using Active Appearance Models (AAMs) and registered to a canonical view before extracting Histogram of Oriented Gradients (HOG) features. All possible combinations of dimensionality reduction approaches and classifiers were tested using leave-one-subject-out cross-validation. For consistency of intensity measurement (i.e., reliability as measured by intraclass correlation, ICC), PCA+LMNN with SVM classifiers gave the best results.
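
The combinatorial comparison under leave-one-subject-out cross-validation can be sketched with scikit-learn's LeaveOneGroupOut. This is an illustrative reconstruction, not the authors' code: PCA stands in for PCA+LMNN (the metric-learning step is omitted, as LMNN is not in scikit-learn), and the features and labels are synthetic placeholders:

```python
# Sketch: compare dimensionality-reduction + classifier combinations with
# leave-one-subject-out cross-validation across 12 infants.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(4)
n, d = 240, 500
X = rng.normal(size=(n, d))                   # placeholder HOG features
y = rng.integers(0, 6, n)                     # placeholder AU intensity labels
subjects = np.repeat(np.arange(12), n // 12)  # subject ID per sample (12 infants)

for name, clf in [("SVM", SVC()), ("kNN", KNeighborsClassifier())]:
    pipe = make_pipeline(PCA(n_components=20), clf)
    # Each fold holds out every sample from one infant, so performance
    # reflects generalization to unseen subjects.
    scores = cross_val_score(pipe, X, y, groups=subjects, cv=LeaveOneGroupOut())
    print(name, scores.mean())
```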
