Ruth B. Grossman
Emerson College
Publications
Featured research published by Ruth B. Grossman.
Journal of Autism and Developmental Disorders | 2012
Ruth B. Grossman; Helen Tager-Flusberg
Data on emotion processing by individuals with ASD suggest both intact abilities and significant deficits. Signal intensity may be a contributing factor to this discrepancy. We presented low- and high-intensity emotional stimuli in a face-voice matching task to 22 adolescents with ASD and 22 typically developing (TD) peers. Participants heard semantically neutral sentences with happy, surprised, angry, and sad prosody presented at two intensity levels (low, high) and matched them to emotional faces. The facial expression choice was either across- or within-valence. Both groups were less accurate for low-intensity emotions, but the ASD participants’ accuracy levels dropped off more sharply. ASD participants were significantly less accurate than their TD peers for trials involving low-intensity emotions and within-valence face contrasts.
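For a concrete picture of the condition-wise comparison described in this abstract, the short Python sketch below tallies accuracy by group, intensity, and valence contrast. The column names and toy rows are invented for illustration; they are not the study's data.

```python
# Toy tally of mean accuracy per group for each intensity x valence-contrast
# cell. All values below are illustrative assumptions, not study data.
import pandas as pd

trials = pd.DataFrame({
    "group":     ["ASD", "ASD", "TD", "TD", "ASD", "TD"],
    "intensity": ["low", "high", "low", "high", "low", "low"],
    "contrast":  ["within", "across", "within", "across", "across", "within"],
    "correct":   [0, 1, 1, 1, 1, 1],          # 1 = matched the face correctly
})

# Mean accuracy per group in each intensity x contrast cell, mirroring the
# comparison of low-intensity, within-valence trials described above.
accuracy = (trials
            .groupby(["group", "intensity", "contrast"])["correct"]
            .mean()
            .unstack("group"))
print(accuracy)
```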
Journal of Child Psychology and Psychiatry | 2009
Ruth B. Grossman; Matthew H. Schneps; Helen Tager-Flusberg
Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals with ASD. The aim of our study was to test whether adolescents with high-functioning autism are able to integrate AV information of meaningful, phrase-length language in a task of onset asynchrony detection. Methods: Participants were 25 adolescents with ASD and 25 typically developing (TD) controls. The stimuli were video clips of complete phrases using simple, commonly occurring words. The clips were digitally manipulated to have the video precede the corresponding audio by 0, 4, 6, 8, 10, 12, or 14 video frames, a range of 0–500 ms. Participants were shown the video clips in random order and asked to indicate whether each clip was in-synch or not. Results: There were no differences between adolescents with ASD and their TD peers in accuracy of onset asynchrony detection at any slip rate. Conclusion: These data indicate that adolescents with ASD are able to integrate auditory and visual components in a task of onset asynchrony detection using natural, phrase-length language stimuli. We propose that the meaningful nature of the language stimuli in combination with presentation in a non-distracting environment allowed adolescents with autism spectrum disorder to demonstrate preserved accuracy for bi-modal AV integration.
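The frame-slip manipulation maps directly onto temporal offsets. The sketch below converts the reported frame values to milliseconds, assuming an NTSC-style frame rate of roughly 29.97 fps; the exact frame rate is an assumption here, chosen so that 14 frames lands near the reported ~500 ms upper bound.

```python
# Frame-slip conditions from the study, converted to approximate temporal
# offsets. The 29.97 fps frame rate is an assumption, not a value taken
# from the paper.
FRAME_RATE_HZ = 29.97
frame_slips = [0, 4, 6, 8, 10, 12, 14]   # video leads audio by this many frames

for slip in frame_slips:
    offset_ms = slip / FRAME_RATE_HZ * 1000.0
    print(f"video leads audio by {slip:2d} frames ~ {offset_ms:3.0f} ms")
```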
Scientific Reports | 2017
Noah J. Sasson; Daniel J. Faso; Jack Nugent; Sarah Lovell; Daniel P. Kennedy; Ruth B. Grossman
Individuals with autism spectrum disorder (ASD), including those who otherwise require less support, face severe difficulties in everyday social interactions. Research in this area has primarily focused on identifying the cognitive and neurological differences that contribute to these social impairments, but social interaction by definition involves more than one person and social difficulties may arise not just from people with ASD themselves, but also from the perceptions, judgments, and social decisions made by those around them. Here, across three studies, we find that first impressions of individuals with ASD made from thin slices of real-world social behavior by typically-developing observers are not only far less favorable across a range of trait judgments compared to controls, but also are associated with reduced intentions to pursue social interaction. These patterns are remarkably robust, occur within seconds, do not change with increased exposure, and persist across both child and adult age groups. However, these biases disappear when impressions are based on conversational content lacking audio-visual cues, suggesting that style, not substance, drives negative impressions of ASD. Collectively, these findings advocate for a broader perspective of social difficulties in ASD that considers both the individual’s impairments and the biases of potential social partners.
Sign Language Studies | 2006
Ruth B. Grossman; Judy Anne Shepard-Kegl
American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial expression exist, and those rely mainly on descriptions and comparisons of individual sentences. The purpose of this article is to present a novel and multilevel approach for the coding and quantification of ASL facial expressions. The technique combines ASL coding software with novel postcoding analyses that allow for graphic depictions and group comparisons of the different facial expression types. This system enables us to clearly delineate differences in the production of otherwise similar facial expression types.
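As a rough illustration of what post-coding quantification can look like, the sketch below collapses hypothetical coded facial-expression intervals into per-type duration summaries, the kind of quantity that can feed group comparisons and plots. The field names, expression labels, and event times are invented for the example and are not drawn from the coding system described in the article.

```python
# Hypothetical post-coding summary: coded facial-expression intervals
# aggregated into per-type duration statistics. All values are invented.
from collections import defaultdict

# (signer_id, expression_type, onset_s, offset_s)
coded_events = [
    ("S1", "wh_question",     1.20, 1.85),
    ("S1", "affective",       3.10, 3.40),
    ("S2", "wh_question",     0.95, 1.30),
    ("S2", "yes_no_question", 2.50, 3.20),
]

durations = defaultdict(list)
for signer, expr_type, onset, offset in coded_events:
    durations[expr_type].append(offset - onset)

# Mean duration per expression type; the same aggregation can be split by
# signer or group for comparison plots.
for expr_type, vals in sorted(durations.items()):
    mean_dur = sum(vals) / len(vals)
    print(f"{expr_type:16s} n={len(vals)}  mean duration {mean_dur:.2f} s")
```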
Autism | 2015
Ruth B. Grossman
We form first impressions of many traits based on very short interactions. This study examines whether typical adults judge children with high-functioning autism to be more socially awkward than their typically developing peers based on very brief exposure to still images, audio-visual, video-only, or audio-only information. We used video and audio recordings of children with and without high-functioning autism captured during a story-retelling task. Typically developing adults were presented with 1 s and 3 s clips of these children, as well as still images, and asked to judge whether the person in the clip was socially awkward. Our findings show that participants who are naïve to diagnostic differences between the children in the clips judged children with high-functioning autism to be socially awkward at a significantly higher rate than their typically developing peers. These results remain consistent for exposures as short as 1 s to visual and/or auditory information, as well as for still images. These data suggest that typical adults use subtle nonverbal and non-linguistic cues produced by children with high-functioning autism to form rapid judgments of social awkwardness with the potential for significant repercussions in social interactions.
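A minimal sketch of how such thin-slice judgments might be tallied appears below; the group labels, exposure conditions, and toy responses are assumptions for illustration, not the study's data.

```python
# Toy tally of thin-slice "awkward" judgments by diagnostic group and
# exposure type. All rows below are illustrative assumptions.
import pandas as pd

judgments = pd.DataFrame({
    "target_group":   ["HFA", "HFA", "TD", "TD", "HFA", "TD"],
    "exposure":       ["still", "1s_video", "still", "3s_audio", "3s_video", "1s_video"],
    "judged_awkward": [1, 1, 0, 0, 1, 0],
})

# Proportion of presentations judged "awkward" per target group and exposure.
rates = (judgments
         .groupby(["target_group", "exposure"])["judged_awkward"]
         .mean())
print(rates)
```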
Autism Research | 2015
Ruth B. Grossman; Erin Steinhart; Teresa V. Mitchell; William J. McIlvane
Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory‐visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high‐functioning autism (HFA), aged 8–19) a split‐screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track, and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in‐synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in‐synch video significantly more with explicit instructions. However, participants with HFA looked at the in‐synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non‐face regions of the image. There were no between‐group differences for eye‐directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory‐visual speech integration, which is maladaptive gaze behavior for this type of task. Autism Res 2015, 8: 307–316.
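The gaze measures described above reduce to proportions of looking time per screen region. The sketch below computes such proportions from a toy list of fixations; the region labels and durations are invented for illustration only.

```python
# Fraction of looking time spent on each screen region (in-sync vs. out-of-sync
# speaker, mouth/eyes/non-face). Sample fixations are illustrative assumptions.
import numpy as np

# Each fixation: (region_label, duration_ms)
fixations = [
    ("in_sync_mouth", 220), ("in_sync_eyes", 180), ("non_face", 300),
    ("out_of_sync_mouth", 150), ("in_sync_mouth", 260), ("non_face", 120),
]

regions, durations = zip(*fixations)
durations = np.array(durations, dtype=float)
total = durations.sum()

# Share of total gaze time per region.
for region in sorted(set(regions)):
    mask = np.array([r == region for r in regions])
    share = durations[mask].sum() / total
    print(f"{region:18s} {share:.1%} of gaze time")
```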
international conference on acoustics, speech, and signal processing | 2015
Tanaya Guha; Zhaojun Yang; Anil Ramakrishna; Ruth B. Grossman; Darren Hedley; Sungbok Lee; Shrikanth Narayanan
Children with Autism Spectrum Disorder (ASD) are known to have difficulty in producing and perceiving emotional facial expressions. Their expressions are often perceived as atypical by adult observers. This paper focuses on data-driven ways to analyze and quantify atypicality in facial expressions of children with ASD. Our objective is to uncover those characteristics of facial gestures that induce the sense of perceived atypicality in observers. Using a carefully collected motion capture database, facial expressions of children with and without ASD are compared within six basic emotion categories employing methods from information theory, time-series modeling and statistical analysis. Our experiments show that children with ASD exhibit lower complexity in facial dynamics, with the eye regions contributing more than other facial regions towards the differences between children with and without ASD. Our study also notes that children with ASD exhibit lower left-right facial symmetry, and more uniform motion intensity across facial regions.
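The sketch below illustrates, on synthetic marker traces, two of the quantities named above: an entropy-style complexity measure of facial motion and a left-right symmetry score. It is a simplified stand-in for the paper's information-theoretic and statistical pipeline, not a reproduction of it.

```python
# Illustrative complexity and symmetry measures on synthetic motion-capture
# traces; not the paper's exact methods.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
left_marker = np.sin(3 * t) + 0.05 * rng.standard_normal(t.size)   # x-displacement
right_marker = np.sin(3 * t) + 0.20 * rng.standard_normal(t.size)

def motion_entropy(signal, bins=16):
    """Shannon entropy (bits) of the discretized frame-to-frame velocity."""
    velocity = np.diff(signal)
    hist, _ = np.histogram(velocity, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def lr_symmetry(left, right):
    """Pearson correlation between left and right marker trajectories."""
    return float(np.corrcoef(left, right)[0, 1])

print(f"complexity (entropy, bits): {motion_entropy(left_marker):.2f}")
print(f"left-right symmetry (r):    {lr_symmetry(left_marker, right_marker):.2f}")
```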
international conference on multimedia and expo | 2013
Angeliki Metallinou; Ruth B. Grossman; Shrikanth Narayanan
We focus on the analysis, quantification and visualization of atypicality in affective facial expressions of children with High Functioning Autism (HFA). We examine facial Motion Capture data from typically developing (TD) children and children with HFA, using various statistical methods, including Functional Data Analysis, in order to quantify atypical expression characteristics and uncover patterns of expression evolution in the two populations. Our results show that children with HFA display higher asynchrony of motion between facial regions, rougher facial and head motion, and a larger range of facial region motion. Overall, subjects with HFA consistently display wider variability in the expressive facial gestures that they employ. Our analysis demonstrates the utility of computational approaches for understanding behavioral data and brings new insights into the autism domain regarding the atypicality that is often associated with facial expressions of subjects with HFA.
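As a loose illustration of two of the measures mentioned above, the sketch below computes a roughness score (mean squared second difference) and an inter-region asynchrony lag on synthetic trajectories. It is not the paper's Functional Data Analysis pipeline; the trajectories and region names are assumptions for the example.

```python
# "Roughness" and inter-region asynchrony on synthetic facial-region traces.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)
brow = np.sin(t) + 0.02 * rng.standard_normal(t.size)
mouth = np.sin(t - 0.4) + 0.02 * rng.standard_normal(t.size)   # mouth lags the brow

def roughness(x):
    """Mean squared second difference: larger values mean jerkier motion."""
    return float(np.mean(np.diff(x, n=2) ** 2))

def asynchrony_lag(a, b, max_lag=50):
    """Frames by which b lags a at maximum correlation (positive = b later)."""
    lags = range(-max_lag, max_lag + 1)
    corrs = []
    for k in lags:
        if k >= 0:
            corrs.append(np.corrcoef(a[:len(a) - k], b[k:])[0, 1])
        else:
            corrs.append(np.corrcoef(a[-k:], b[:len(b) + k])[0, 1])
    return list(lags)[int(np.argmax(corrs))]

print(f"brow roughness:     {roughness(brow):.2e}")
print(f"mouth roughness:    {roughness(mouth):.2e}")
print(f"mouth lag vs. brow: {asynchrony_lag(brow, mouth)} frames")
```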
IEEE Transactions on Affective Computing | 2018
Tanaya Guha; Zhaojun Yang; Ruth B. Grossman; Shrikanth Narayanan
Several studies have established that facial expressions of children with autism are often perceived as atypical, awkward or less engaging by typical adult observers. Despite this clear deficit in the quality of facial expression production, very little is understood about its underlying mechanisms and characteristics. This paper takes a computational approach to studying details of facial expressions of children with high functioning autism (HFA). The objective is to uncover characteristics of these facial expressions that are notably distinct from those of typically developing children and that are otherwise difficult to detect by visual inspection. We use motion capture data obtained from subjects with HFA and typically developing subjects while they produced various facial expressions. These data are analyzed to investigate how the overall and local facial dynamics of children with HFA differ from those of their typically developing peers. Our major observations include reduced complexity in the dynamic facial behavior of the HFA group, arising primarily from the eye region.
conference of the international speech communication association | 2016
Anil Ramakrishna; Rahul Gupta; Ruth B. Grossman; Shrikanth Narayanan
Ratings from multiple human annotators are often pooled in applications where the ground truth is hidden. Examples include annotating perceived emotions and assessing quality metrics for speech and images. These ratings are not restricted to a single dimension and can be multidimensional. In this paper, we propose an Expectation-Maximization based algorithm to model such ratings. Our model assumes that there exists a latent multidimensional ground truth that can be determined from the observation features and that the ratings provided by the annotators are noisy versions of this ground truth. We test our model on a study conducted on children with autism to predict a four-dimensional rating of expressivity, naturalness, pronunciation goodness, and engagement. Our goal in this application is to reliably predict the individual annotator ratings, which can be used to address issues of cognitive load on the annotators as well as rating cost. We initially train a baseline that directly predicts annotator ratings from the features and compare it to our model under three settings, assuming: (i) each entry in the multidimensional rating is independent of the others, (ii) a joint distribution exists among the rating dimensions, or (iii) a partial set of ratings is available for predicting the remaining entries.
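A heavily simplified EM sketch in the spirit of this setup is given below: a latent multidimensional ground truth predicted from features, with each annotator's rating modeled as that truth plus annotator-specific Gaussian noise. The model, fixed prior variance, and update rules here are assumptions for illustration and do not reproduce the paper's algorithm.

```python
# Toy EM for a latent multidimensional ground truth with per-annotator
# Gaussian noise; an assumed simplified variant, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
N, D_feat, D_rate, J = 200, 5, 4, 3               # items, feature dim, rating dim, annotators

X = rng.standard_normal((N, D_feat))
W_true = rng.standard_normal((D_feat, D_rate))
Y_true = X @ W_true + rng.standard_normal((N, D_rate))   # hidden multidimensional ground truth
sigma_true = np.array([0.3, 0.6, 1.0])                   # per-annotator noise level (std)
R = np.stack([Y_true + s * rng.standard_normal((N, D_rate)) for s in sigma_true])

W = np.zeros((D_feat, D_rate))
sigma2 = np.ones(J)
tau2 = 1.0                                        # variance of truth around feature prediction (kept fixed)

for _ in range(50):
    # E-step: posterior mean and variance of each item's latent rating vector
    precisions = 1.0 / sigma2
    post_prec = 1.0 / tau2 + precisions.sum()
    post_var = 1.0 / post_prec
    Y_hat = ((X @ W) / tau2 + np.tensordot(precisions, R, axes=1)) * post_var

    # M-step: refit the feature-to-truth regression and the annotator noise levels
    W = np.linalg.lstsq(X, Y_hat, rcond=None)[0]
    sigma2 = np.array([np.mean((R[j] - Y_hat) ** 2) + post_var for j in range(J)])

print("estimated annotator noise std:", np.sqrt(sigma2).round(2))
print("true annotator noise std:     ", sigma_true)
```

The estimated per-annotator noise levels should land close to the values used to generate the synthetic ratings, which is the basic check that the latent-truth-plus-noise decomposition is being recovered.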