Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sharmistha Gray is active.

Publications


Featured research published by Sharmistha Gray.


Pediatrics | 2009

Teaching by Listening: The Importance of Adult-Child Conversations to Language Development

Frederick J. Zimmerman; Jill Gilkerson; Jeffrey A. Richards; Dimitri A. Christakis; Dongxin Xu; Sharmistha Gray; Umit H. Yapanel

OBJECTIVE: To test the independent association of adult language input, television viewing, and adult-child conversations with language acquisition among infants and toddlers. METHODS: Two hundred seventy-five families of children aged 2 to 48 months who were representative of the US census were enrolled in a cross-sectional study of the home language environment and child language development (phase 1). Of these, a representative sample of 71 families continued for a longitudinal assessment over 18 months (phase 2). In the cross-sectional sample, language development scores were regressed on adult word count, television viewing, and adult-child conversations, controlling for socioeconomic attributes. In the longitudinal sample, phase 2 language development scores were regressed on phase 1 language development, as well as phase 1 adult word count, television viewing, and adult-child conversations, controlling for socioeconomic attributes. RESULTS: In fully adjusted regressions, the effects of adult word count were significant when included alone but were partially mediated by adult-child conversations. Television viewing when included alone was significant and negative but was fully mediated by the inclusion of adult-child conversations. Adult-child conversations were significant when included alone and retained both significance and magnitude when adult word count and television exposure were included. CONCLUSIONS: Television exposure is not independently associated with child language development when adult-child conversations are controlled. Adult-child conversations are robustly associated with healthy language development. Parents should be encouraged not merely to provide language input to their children through reading or storytelling, but also to engage their children in two-sided conversations.
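
As a rough illustration of the mediation logic in this abstract, the sketch below regresses a language outcome on each exposure alone and then jointly, using statsmodels on synthetic data; all variable names (lang_score, adult_words, tv_hours, conv_turns, ses) are invented stand-ins, not the study's actual measures.

```python
# Hypothetical sketch of the mediation-style regressions described above.
# All names (lang_score, adult_words, tv_hours, conv_turns, ses) are invented
# placeholders; the data are synthetic, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 275
df = pd.DataFrame({
    "ses": rng.normal(size=n),
    "adult_words": rng.normal(size=n),
    "tv_hours": rng.normal(size=n),
})
df["conv_turns"] = 0.6 * df["adult_words"] - 0.3 * df["tv_hours"] + rng.normal(size=n)
df["lang_score"] = 0.5 * df["conv_turns"] + 0.2 * df["ses"] + rng.normal(size=n)

# TV alone, then with conversations in the model: if the tv_hours coefficient
# collapses once conv_turns enters, its effect is mediated by conversations.
alone = smf.ols("lang_score ~ tv_hours + ses", data=df).fit()
joint = smf.ols("lang_score ~ tv_hours + conv_turns + adult_words + ses", data=df).fit()
print(alone.params["tv_hours"], joint.params["tv_hours"])
```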


Proceedings of the National Academy of Sciences of the United States of America | 2010

Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development

D. K. Oller; P. Niyogi; Sharmistha Gray; Jeffrey A. Richards; Jill Gilkerson; Dongxin Xu; Umit H. Yapanel; Steven F. Warren

For generations the study of vocal development and its role in language has been conducted laboriously, with human transcribers and analysts coding and taking measurements from small recorded samples. Our research illustrates a method to obtain measures of early speech development through automated analysis of massive quantities of day-long audio recordings collected naturalistically in children's homes. A primary goal is to provide insights into the development of infant control over infrastructural characteristics of speech through large-scale statistical analysis of strategically selected acoustic parameters. In pursuit of this goal we have discovered that the first automated approach we implemented is not only able to track children's development on acoustic parameters known to play key roles in speech, but also is able to differentiate vocalizations from typically developing children and children with autism or language delay. The method is totally automated, with no human intervention, allowing efficient sampling and analysis at unprecedented scales. The work shows the potential to fundamentally enhance research in vocal development and to add a fully objective measure to the battery used to detect speech-related disorders in early childhood. Thus, automated analysis should soon be able to contribute to screening and diagnosis procedures for early disorders, and more generally, the findings suggest fundamental methods for the study of language in natural environments.
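
The paper's actual acoustic parameters and models are not reproduced here, but the general shape of the pipeline, per-recording acoustic features feeding a group classifier, can be sketched on synthetic data:

```python
# Illustrative sketch only: features here are random stand-ins for acoustic
# parameters (e.g., duration, pitch range, spectral tilt) aggregated over each
# child's day-long recording; group labels and effect sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
groups = rng.integers(0, 3, size=600)           # 0=typical, 1=autism, 2=delay
X = rng.normal(size=(600, 5)) + groups[:, None] * 0.4

# Cross-validated accuracy of a simple multiclass classifier on the features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, groups, cv=5).mean())
```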


JAMA Pediatrics | 2009

Audible Television and Decreased Adult Words, Infant Vocalizations, and Conversational Turns: A Population-Based Study

Dimitri A. Christakis; Jill Gilkerson; Jeffrey A. Richards; Frederick J. Zimmerman; Michelle M. Garrison; Dongxin Xu; Sharmistha Gray; Umit H. Yapanel

OBJECTIVE To test the hypothesis that audible television is associated with decreased parent and child interactions. DESIGN Prospective, population-based observational study. SETTING Community. PARTICIPANTS Three hundred twenty-nine 2- to 48-month-old children. MAIN EXPOSURES Audible television. Children wore a digital recorder on random days for up to 24 months. A software program incorporating automatic speech-identification technology processed the recorded file to analyze the sounds the children were exposed to and the sounds they made. Conditional linear regression was used to determine the association between audible television and the outcomes of interest. OUTCOME MEASURES Adult word counts, child vocalizations, and child conversational turns. RESULTS Each hour of audible television was associated with significant reductions in age-adjusted z scores for child vocalizations (linear regression coefficient, -0.26; 95% confidence interval [CI], -0.29 to -0.22), vocalization duration (linear regression coefficient, -0.24; 95% CI, -0.27 to -0.20), and conversational turns (linear regression coefficient, -0.22; 95% CI, -0.25 to -0.19). There were also significant reductions in adult female (linear regression coefficient, -636; 95% CI, -812 to -460) and adult male (linear regression coefficient, -134; 95% CI, -263 to -5) word count. CONCLUSIONS Audible television is associated with decreased exposure to discernible human adult speech and decreased child vocalizations. These results may explain the association between infant television exposure and delayed language development.
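
A hedged sketch of the reported analysis style: age-adjust an outcome into z scores, then regress on hours of audible television and read off the coefficient with its 95% confidence interval. Data and variable names below are synthetic placeholders, not the study's.

```python
# Toy version of "age-adjusted z scores regressed on audible television".
# All numbers are simulated; the binning scheme is an assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 329
df = pd.DataFrame({
    "age_months": rng.integers(2, 49, size=n),
    "tv_hours": rng.uniform(0, 6, size=n),
})
df["vocalizations"] = 200 - 20 * df["tv_hours"] + 3 * df["age_months"] + rng.normal(0, 30, n)

# Age-adjusted z score: standardize within age groups (here, 6-month bins).
bins = df["age_months"] // 6
df["voc_z"] = df.groupby(bins)["vocalizations"].transform(
    lambda s: (s - s.mean()) / s.std())

fit = smf.ols("voc_z ~ tv_hours", data=df).fit()
print(fit.params["tv_hours"], fit.conf_int().loc["tv_hours"].tolist())  # coef, 95% CI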


Journal of Autism and Developmental Disorders | 2010

What Automated Vocal Analysis Reveals about the Vocal Production and Language Learning Environment of Young Children with Autism.

Steven F. Warren; Jill Gilkerson; Jeffrey A. Richards; D. Kimbrough Oller; Dongxin Xu; Umit H. Yapanel; Sharmistha Gray

The study compared the vocal production and language learning environments of 26 young children with autism spectrum disorder (ASD) to 78 typically developing children using measures derived from automated vocal analysis. A digital language processor and audio-processing algorithms measured the amount of adult words to children and the amount of vocalizations they produced during 12-h recording periods in their natural environments. The results indicated significant differences between typically developing children and children with ASD in the characteristics of conversations, the number of conversational turns, and in child vocalizations that correlated with parent measures of various child characteristics. Automated measurement of the language learning environment of young children with ASD reveals important differences from the environments experienced by typically developing children.
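
At its core the comparison is between automated counts for the ASD and typically developing groups; a minimal stand-in, assuming something like a Welch t-test on per-child daily totals (all numbers synthetic), is:

```python
# Minimal sketch of a two-group comparison of automated daily counts.
# Group means, spreads, and the choice of Welch's t-test are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
asd = rng.normal(1800, 400, size=26)   # e.g., conversational turns per day
td = rng.normal(2200, 400, size=78)
t, p = stats.ttest_ind(td, asd, equal_var=False)   # Welch's t-test
print(f"t={t:.2f}, p={p:.4f}")
```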


Speech Communication | 2010

Automatic voice onset time detection for unvoiced stops (/p/,/t/,/k/) with application to accent classification

John H. L. Hansen; Sharmistha Gray; Wooil Kim

Articulation characteristics of particular phonemes can provide cues to distinguish accents in spoken English. For example, as shown in Arslan and Hansen (1996, 1997), Voice Onset Time (VOT) can be used to classify Mandarin, Turkish, German and American accented English. Our goal in this study is to develop an automatic system that classifies accents using VOT in unvoiced stops. VOT is an important temporal feature which is often overlooked in speech perception, speech recognition, as well as accent detection. Fixed-length frame-based speech processing inherently ignores VOT. In this paper, a more effective VOT detection scheme using the non-linear energy tracking algorithm Teager Energy Operator (TEO), across a sub-frequency band partition for unvoiced stops (/p/, /t/ and /k/), is introduced. The proposed VOT detection algorithm also incorporates spectral differences in the Voice Onset Region (VOR) and the succeeding vowel of a given stop-vowel sequence to classify speakers having accents due to different ethnic origin. The spectral cues are enhanced using one of four types of feature parameter extraction: Discrete Mellin Transform (DMT), Discrete Mellin Fourier Transform (DMFT), and Discrete Wavelet Transform using the lowest and the highest frequency resolutions (DWTlfr and DWThfr). A Hidden Markov Model (HMM) classifier is employed with these extracted parameters and applied to the problem of accent classification. Three different language groups (American English, Chinese, and Indian) are used from the CU-Accent database. The VOT is detected with less than 10% error when compared to the manually detected VOT, with success rates of 79.90%, 87.32% and 47.73% for English, Chinese and Indian speakers (including atypical cases for the Indian set), respectively. It is noted that the DMT and DWTlfr features are good for parameterizing speech samples which exhibit substitution of the succeeding vowel after the stop in accented speech. The successful accent classification rates of the DMT and DWTlfr features are 66.13% and 71.67%, for /p/ and /t/ respectively, for pairwise accent detection. Alternatively, the DMFT feature works on all accent-sensitive words considered, with a success rate of 70.63%. This study shows that effective VOT detection can be achieved using integrated TEO processing with spectral difference analysis in the VOR that can be employed for accent classification.
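
The centerpiece of the detection scheme is the Teager Energy Operator, psi[x](n) = x(n)^2 - x(n-1)x(n+1), tracked across a sub-band partition. The sketch below applies TEO in a high band (to catch the release burst) and a low band (to catch the onset of voicing) on a toy stop-vowel token; the band edges, thresholds, and synthetic signal are simplifications, not the paper's exact configuration.

```python
# Simplified TEO-based VOT estimation on a synthetic stop-vowel token.
# Band edges and thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def teager(x):
    """Teager Energy Operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def smooth(e, w):
    """Moving-average envelope of an energy track."""
    return np.convolve(e, np.ones(w) / w, mode="same")

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
rng = np.random.default_rng(4)

# Toy /t/-like token: 10 ms noise burst, silence, then voicing at 60 ms.
x = np.zeros_like(t)
x[: int(0.01 * fs)] = rng.normal(0, 0.5, int(0.01 * fs))
x[int(0.06 * fs):] = 0.8 * np.sin(2 * np.pi * 150 * t[int(0.06 * fs):])

# TEO in two sub-bands: high band for the burst, low band for voicing.
sos_hi = butter(4, [2000, 6000], btype="bandpass", fs=fs, output="sos")
sos_lo = butter(4, [100, 400], btype="bandpass", fs=fs, output="sos")
teo_hi = smooth(teager(sosfilt(sos_hi, x)), 160)   # 10 ms window
teo_lo = smooth(teager(sosfilt(sos_lo, x)), 160)

burst = np.argmax(teo_hi > 0.1 * teo_hi.max())     # first high-band onset
voicing = np.argmax(teo_lo > 0.1 * teo_lo.max())   # first low-band onset
print(f"approx. VOT: {(voicing - burst) / fs * 1000:.1f} ms")
```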


Autism Research | 2013

Stability and Validity of an Automated Measure of Vocal Development From Day-Long Samples in Children With and Without Autism Spectrum Disorder

Paul J. Yoder; D. K. Oller; Jeffrey A. Richards; Sharmistha Gray; Jill Gilkerson

Individual difference measures of vocal development may eventually aid our understanding of the variability in spoken language acquisition in children with autism spectrum disorder (ASD). Large samples of child vocalizations may be needed to maximize the stability of vocal development estimates. Day‐long vocal samples can now be automatically analyzed based on acoustic characteristics of speech likeness identified in theoretically driven and empirically cross‐validated quantitative models of typical vocal development. This report indicates that a single day‐long recording can produce a stable estimate for a measure of vocal development that is highly related to expressive spoken language in a group of young children with ASD and in a group that is typically developing. Autism Res 2013, 6: 103–107.
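
The stability claim amounts to test-retest reliability of a single day-long estimate; a toy simulation makes the idea concrete (latent scores and noise levels invented):

```python
# Sketch: does one day-long sample estimate a child's standing reliably?
# Simulated latent scores plus day-to-day noise; the Pearson r between two
# simulated days stands in for test-retest stability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true = rng.normal(100, 15, size=40)        # each child's latent vocal-dev level
day1 = true + rng.normal(0, 6, size=40)    # estimate from one recording day
day2 = true + rng.normal(0, 6, size=40)    # estimate from another day
r, p = stats.pearsonr(day1, day2)
print(f"test-retest r = {r:.2f}")          # high r -> a single day is stable
```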


Ear and Hearing | 2015

Automated Vocal Analysis of Children With Hearing Loss and Their Typical and Atypical Peers.

Mark VanDam; D. Kimbrough Oller; Sophie E. Ambrose; Sharmistha Gray; Jeffrey A. Richards; Dongxin Xu; Jill Gilkerson; Noah H. Silbert; Mary Pat Moeller

Objectives: This study investigated automatic assessment of vocal development in children with hearing loss compared with children who are typically developing, have language delays, and have autism spectrum disorder. Statistical models are examined for performance in a classification model and to predict age within the four groups of children. Design: The vocal analysis system analyzed 1913 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results: Samples from children who were hard of hearing patterned more similarly to those of typically developing children than to the language delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis. Conclusions: This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention.
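
As a stand-in for the two modeling tasks described, classification into the four groups and prediction of age, the sketch below fits generic scikit-learn models on synthetic features; the paper's actual models and features may differ.

```python
# Illustrative only: synthetic per-child vocal features carrying weak age and
# group signal, then cross-validated classification and age regression.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 273
age = rng.uniform(12, 60, size=n)              # months
group = rng.integers(0, 4, size=n)             # 0=TD, 1=HH, 2=LD, 3=ASD (toy)
X = rng.normal(size=(n, 8)) + 0.03 * age[:, None] + 0.3 * group[:, None]

print("classification accuracy:", cross_val_score(
    RandomForestClassifier(random_state=0), X, group, cv=5).mean())
print("age prediction R^2:", cross_val_score(
    RandomForestRegressor(random_state=0), X, age, cv=5).mean())
```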


Autism Research | 2017

The stability and validity of automated vocal analysis in preverbal preschoolers with autism spectrum disorder.

Tiffany Woynaroski; D. Kimbrough Oller; Bahar Keceli-Kaysili; Dongxin Xu; Jeffrey A. Richards; Jill Gilkerson; Sharmistha Gray; Paul J. Yoder

Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day‐long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. Autism Res 2017, 10: 508–519.
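
The key validity test, prediction of future vocabulary after controlling for concurrent vocabulary, can be sketched as a regression with the concurrent score as a covariate; the variable names (ava_index, vocab_now, vocab_later) and data are invented.

```python
# Sketch of "predicts future vocabulary beyond concurrent vocabulary" as a
# covariate-adjusted regression. Names and data are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 87
df = pd.DataFrame({"ava_index": rng.normal(size=n),
                   "vocab_now": rng.normal(size=n)})
df["vocab_later"] = 0.5 * df["ava_index"] + 0.4 * df["vocab_now"] + rng.normal(size=n)

fit = smf.ols("vocab_later ~ ava_index + vocab_now", data=df).fit()
# A significant ava_index coefficient here is the paper's "predicts beyond
# concurrent vocabulary" claim in miniature.
print(fit.params["ava_index"], fit.pvalues["ava_index"])
```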


Proceedings of the 2nd Workshop on Child, Computer and Interaction | 2009

Automatic childhood autism detection by vocalization decomposition with phone-like units

Dongxin Xu; Jeffrey A. Richards; Jill Gilkerson; Umit H. Yapanel; Sharmistha Gray; John H. L. Hansen

Autism is a major child development disorder with a prevalence of 1/150 in the US [22]. Although early identification is crucial to early intervention, there currently are few efficient screening tools in clinical use. This study reports a fully automatic mechanism for child autism detection/screening using the LENA™ (Language ENvironment Analysis) System, which utilizes speech signal processing technology to analyze and monitor a child's natural language environment and the vocalizations/speech of the child. We previously reported preliminary results in [19] using child vocalization composition information generated automatically by the LENA System employing an adult phone model. In this paper, some extensions have been made, including enlargement of the dataset, introduction of a new child vocalization decomposition with k-means clusters derived directly from the child vocalizations, and its combination with the previous decomposition. The experiments and comparisons consistently show that the child vocalization composition contains rich discriminant information for autism detection. They also show that the child vocalization composition features generated with the adult phone model and the child clusters perform similarly when used individually, and complement each other when combined. The combined feature set significantly reduces the error rate. The relative error reduction is 21.7% at the recording level and 16.8% at the child level, achieving detection accuracies of 87.4% for recordings and 90.6% for children at the equal-error-rate points.
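
A rough sketch of the vocalization-decomposition idea: cluster frame-level features with k-means, represent each recording as a histogram of cluster usage, and evaluate a detector at its equal-error-rate point. Everything below (features, labels, cluster count) is synthetic, and scoring is in-sample for brevity.

```python
# Toy "decomposition with phone-like units": k-means clusters over frame
# features, per-recording cluster histograms, and an equal-error-rate readout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
k, n_rec = 16, 200
frames = [rng.normal(loc=(i % 2) * 0.3, size=(500, 12)) for i in range(n_rec)]
labels = np.array([i % 2 for i in range(n_rec)])    # 0=non-ASD, 1=ASD (toy)

# Cluster all frames, then describe each recording by its cluster usage.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(frames))
X = np.array([np.bincount(km.predict(f), minlength=k) / len(f) for f in frames])

scores = LogisticRegression(max_iter=1000).fit(X, labels).predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(labels, scores)
eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]    # point where FPR ~ FNR
print(f"EER ~ {eer:.3f}")
```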


Journal of Speech Language and Hearing Research | 2017

Automated Assessment of Child Vocalization Development Using LENA.

Jeffrey A. Richards; Dongxin Xu; Jill Gilkerson; Umit H. Yapanel; Sharmistha Gray; Terrance Paul

Purpose To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and input to age-based multiple linear regression models to predict independently collected criterion expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and developmental age estimates. Results AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Conclusions Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
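
The AVA pipeline as described, phone/biphone frequencies reduced to principal components and fed to age-based regression models, might look roughly like the following; the data, component count, and standardization step are assumptions for illustration, not LENA's implementation.

```python
# Hypothetical sketch: phone-frequency features -> PCA -> age-based linear
# regression -> age-standardized score. All data and constants are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n, n_phones = 300, 40
age = rng.uniform(6, 48, size=n)                        # months
phone_freq = rng.dirichlet(np.ones(n_phones), size=n)   # per-child phone usage
criterion = 50 + 1.2 * age + 10 * phone_freq[:, 0] + rng.normal(0, 5, n)

pcs = PCA(n_components=10).fit_transform(phone_freq)
X = np.column_stack([pcs, age])                         # age-based model
pred = LinearRegression().fit(X, criterion).predict(X)

# Age-standardized score: residualize predictions against the age trend, then
# rescale to a familiar mean-100, SD-15 metric.
resid = pred - np.poly1d(np.polyfit(age, pred, 1))(age)
std_score = 100 + 15 * (resid - resid.mean()) / resid.std()
print(std_score[:5].round(1))
```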

Collaboration


Dive into Sharmistha Gray's collaborations.

Top Co-Authors

Dongxin Xu (University of Colorado Boulder)
Jill Gilkerson (University of Colorado Boulder)
Umit H. Yapanel (University of Colorado Boulder)
John H. L. Hansen (University of Texas at Dallas)
D. Kimbrough Oller (Konrad Lorenz Institute for Evolution and Cognition Research)
Mark VanDam (Washington State University)