Publications


Featured research published by Jill Gilkerson.


Pediatrics | 2009

Teaching by Listening: The Importance of Adult-Child Conversations to Language Development

Frederick J. Zimmerman; Jill Gilkerson; Jeffrey A. Richards; Dimitri A. Christakis; Dongxin Xu; Sharmistha Gray; Umit H. Yapanel

OBJECTIVE: To test the independent association of adult language input, television viewing, and adult-child conversations on language acquisition among infants and toddlers. METHODS: Two hundred seventy-five families of children aged 2 to 48 months who were representative of the US census were enrolled in a cross-sectional study of the home language environment and child language development (phase 1). Of these, a representative sample of 71 families continued for a longitudinal assessment over 18 months (phase 2). In the cross-sectional sample, language development scores were regressed on adult word count, television viewing, and adult-child conversations, controlling for socioeconomic attributes. In the longitudinal sample, phase 2 language development scores were regressed on phase 1 language development, as well as phase 1 adult word count, television viewing, and adult-child conversations, controlling for socioeconomic attributes. RESULTS: In fully adjusted regressions, the effects of adult word count were significant when included alone but were partially mediated by adult-child conversations. Television viewing when included alone was significant and negative but was fully mediated by the inclusion of adult-child conversations. Adult-child conversations were significant when included alone and retained both significance and magnitude when adult word count and television exposure were included. CONCLUSIONS: Television exposure is not independently associated with child language development when adult-child conversations are controlled. Adult-child conversations are robustly associated with healthy language development. Parents should be encouraged not merely to provide language input to their children through reading or storytelling, but also to engage their children in two-sided conversations.
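The mediation pattern this abstract reports can be illustrated with simulated data: a television effect on language scores that shrinks toward zero once adult-child conversations enter the regression. This is a minimal sketch, not the study's analysis; the variable names and simulated effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 275  # cross-sectional sample size reported in the abstract

# Hypothetical data-generating story: television displaces conversational
# turns, and turns (not television directly) drive language scores.
tv_hours = rng.exponential(1.0, n)
turns = 10.0 - 2.0 * tv_hours + rng.normal(0, 1, n)
language = 0.5 * turns + rng.normal(0, 1, n)

def ols(y, *predictors):
    """Least-squares slopes (with intercept) via numpy.linalg.lstsq."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

(tv_alone,) = ols(language, tv_hours)
tv_adjusted, turns_beta = ols(language, tv_hours, turns)

print(f"TV coefficient alone:      {tv_alone:+.2f}")
print(f"TV coefficient with turns: {tv_adjusted:+.2f}")
print(f"Turns coefficient:         {turns_beta:+.2f}")
```

With this setup, the television coefficient is strongly negative on its own but attenuates once conversational turns are included, mirroring the full-mediation finding described above.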


Proceedings of the National Academy of Sciences of the United States of America | 2010

Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development

D. K. Oller; P. Niyogi; Sharmistha Gray; Jeffrey A. Richards; Jill Gilkerson; Dongxin Xu; Umit H. Yapanel; Steven F. Warren

For generations the study of vocal development and its role in language has been conducted laboriously, with human transcribers and analysts coding and taking measurements from small recorded samples. Our research illustrates a method to obtain measures of early speech development through automated analysis of massive quantities of day-long audio recordings collected naturalistically in children's homes. A primary goal is to provide insights into the development of infant control over infrastructural characteristics of speech through large-scale statistical analysis of strategically selected acoustic parameters. In pursuit of this goal we have discovered that the first automated approach we implemented is not only able to track children's development on acoustic parameters known to play key roles in speech, but also is able to differentiate vocalizations from typically developing children and children with autism or language delay. The method is totally automated, with no human intervention, allowing efficient sampling and analysis at unprecedented scales. The work shows the potential to fundamentally enhance research in vocal development and to add a fully objective measure to the battery used to detect speech-related disorders in early childhood. Thus, automated analysis should soon be able to contribute to screening and diagnosis procedures for early disorders, and more generally, the findings suggest fundamental methods for the study of language in natural environments.


JAMA Pediatrics | 2009

Audible Television and Decreased Adult Words, Infant Vocalizations, and Conversational Turns: A Population-Based Study

Dimitri A. Christakis; Jill Gilkerson; Jeffrey A. Richards; Frederick J. Zimmerman; Michelle M. Garrison; Dongxin Xu; Sharmistha Gray; Umit H. Yapanel

OBJECTIVE To test the hypothesis that audible television is associated with decreased parent and child interactions. DESIGN Prospective, population-based observational study. SETTING Community. PARTICIPANTS Three hundred twenty-nine 2- to 48-month-old children. MAIN EXPOSURES Audible television. Children wore a digital recorder on random days for up to 24 months. A software program incorporating automatic speech-identification technology processed the recorded file to analyze the sounds the children were exposed to and the sounds they made. Conditional linear regression was used to determine the association between audible television and the outcomes of interest. OUTCOME MEASURES Adult word counts, child vocalizations, and child conversational turns. RESULTS Each hour of audible television was associated with significant reductions in age-adjusted z scores for child vocalizations (linear regression coefficient, -0.26; 95% confidence interval [CI], -0.29 to -0.22), vocalization duration (linear regression coefficient, -0.24; 95% CI, -0.27 to -0.20), and conversational turns (linear regression coefficient, -0.22; 95% CI, -0.25 to -0.19). There were also significant reductions in adult female (linear regression coefficient, -636; 95% CI, -812 to -460) and adult male (linear regression coefficient, -134; 95% CI, -263 to -5) word count. CONCLUSIONS Audible television is associated with decreased exposure to discernible human adult speech and decreased child vocalizations. These results may explain the association between infant television exposure and delayed language development.


Journal of Autism and Developmental Disorders | 2010

What Automated Vocal Analysis Reveals about the Vocal Production and Language Learning Environment of Young Children with Autism.

Steven F. Warren; Jill Gilkerson; Jeffrey A. Richards; D. Kimbrough Oller; Dongxin Xu; Umit H. Yapanel; Sharmistha Gray

The study compared the vocal production and language learning environments of 26 young children with autism spectrum disorder (ASD) to 78 typically developing children using measures derived from automated vocal analysis. A digital language processor and audio-processing algorithms measured the amount of adult words to children and the amount of vocalizations they produced during 12-h recording periods in their natural environments. The results indicated significant differences between typically developing children and children with ASD in the characteristics of conversations, the number of conversational turns, and in child vocalizations that correlated with parent measures of various child characteristics. Automated measurement of the language learning environment of young children with ASD reveals important differences from the environments experienced by typically developing children.


Psychological Science | 2014

A Social Feedback Loop for Speech Development and Its Reduction in Autism

Anne S. Warlaumont; Jeffrey A. Richards; Jill Gilkerson; D. Kimbrough Oller

We analyzed the microstructure of child-adult interaction during naturalistic, daylong, automatically labeled audio recordings (13,836 hr total) of children (8- to 48-month-olds) with and without autism. We found that an adult was more likely to respond when the child’s vocalization was speech related rather than not speech related. In turn, a child’s vocalization was more likely to be speech related if the child’s previous speech-related vocalization had received an immediate adult response rather than no response. Taken together, these results are consistent with the idea that there is a social feedback loop between child and caregiver that promotes speech development. Although this feedback loop applies in both typical development and autism, children with autism produced proportionally fewer speech-related vocalizations, and the responses they received were less contingent on whether their vocalizations were speech related. We argue that such differences will diminish the strength of the social feedback loop and have cascading effects on speech development over time. Differences related to socioeconomic status are also reported.
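The contingency measure at the heart of this feedback-loop analysis can be sketched as a simple difference in conditional probabilities: how much more likely an adult response is after a speech-related vocalization than after a non-speech-related one. The event encoding below is hypothetical; the published analysis operated on automatically labeled day-long audio, not this toy representation.

```python
def response_contingency(events):
    """events: list of (is_speech_related, got_adult_response) booleans.
    Returns P(response | speech-related) - P(response | not speech-related)."""
    def response_rate(selected):
        responses = [resp for speech, resp in events if speech == selected]
        return sum(responses) / len(responses) if responses else 0.0
    return response_rate(True) - response_rate(False)

# Toy sequences: adult responses track speech-relatedness tightly for the
# first child and loosely for the second.
child_a = [(True, True), (True, True), (False, False), (True, True), (False, False)]
child_b = [(True, True), (True, False), (False, True), (True, False), (False, True)]

print(response_contingency(child_a))  # strongly contingent
print(response_contingency(child_b))  # weakly (here negatively) contingent
```

A lower contingency, as the abstract argues for the autism group, weakens the feedback loop: speech-related vocalizations are less reliably rewarded with an immediate response.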


Communication Disorders Quarterly | 2011

Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

Charles R. Greenwood; Kathy Thiemann-Bourque; Dale Walker; Jay Buzhardt; Jill Gilkerson

The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make this approach to speech processing available to researchers and clinicians working daily with families and young children. Twelve-hour-long digital audio recordings were obtained repeatedly in the homes of middle to upper SES families for a sample of typically developing infants and toddlers (N = 30). These recordings were processed automatically using a measurement framework based on the work of Hart and Risley. Like Hart and Risley, the current findings indicated vast differences in individual children's home language environments (i.e., adult word count), children's vocalizations, and conversational turns. Automated processing compared favorably to the original Hart and Risley estimates that were based on transcription. Adding to Hart and Risley's findings were new descriptions of patterns of daily talk and relationships to widely used outcome measures, among others. Implications for research and practice are discussed.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Child vocalization composition as discriminant information for automatic autism detection

Dongxin Xu; Jill Gilkerson; Jeffrey A. Richards; Umit H. Yapanel; Sharmi Gray

Early identification is crucial for young children with autism to access early intervention. The existing screens require a parent-report questionnaire and/or direct observation by a trained practitioner. Although an automatic tool would benefit parents, clinicians and children, there is no automatic screening tool in clinical use. This study reports a fully automatic mechanism for autism detection/screening for young children. This is a direct extension of the LENA™ (Language ENvironment Analysis) system, which utilizes speech signal processing technology to analyze and monitor a child's natural language environment and the vocalizations/speech of the child. It is discovered that child vocalization composition contains rich discriminant information for autism detection. By applying pattern recognition and machine learning approaches to child vocalization composition data, accuracy rates of 85% to 90% in cross-validation tests for autism detection have been achieved at the equal-error-rate (EER) point on a data set with 34 children with autism, 30 language delayed children and 76 typically developing children. Due to its easy and automatic procedure, it is believed that this new tool can serve a significant role in childhood autism screening, especially with regard to population-based or universal screening.
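The classification setup described in this abstract can be sketched in miniature. The LENA feature set and classifier are not reproduced here, so the example below substitutes a hypothetical two-feature vocalization composition and a simple nearest-centroid rule with leave-one-out cross-validation; it illustrates the evaluation pattern, not the published system.

```python
import math

def nearest_centroid_loo(samples):
    """samples: list of (feature_vector, label). Returns leave-one-out
    cross-validation accuracy of a nearest-centroid classifier."""
    correct = 0
    for i, (x, label) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        # Centroid of each class, computed without the held-out sample.
        centroids = {}
        for lab in {l for _, l in rest}:
            pts = [f for f, l in rest if l == lab]
            centroids[lab] = [sum(col) / len(pts) for col in zip(*pts)]
        pred = min(centroids, key=lambda lab: math.dist(x, centroids[lab]))
        correct += (pred == label)
    return correct / len(samples)

# Hypothetical composition features: (canonical-syllable ratio, cry ratio).
data = [
    ((0.40, 0.10), "TD"), ((0.38, 0.12), "TD"), ((0.42, 0.09), "TD"),
    ((0.20, 0.25), "ASD"), ((0.18, 0.28), "ASD"), ((0.22, 0.24), "ASD"),
]
print(f"leave-one-out accuracy: {nearest_centroid_loo(data):.2f}")
```

Leave-one-out scoring, as here, keeps each child out of the model that classifies them, which is the safeguard behind the cross-validated accuracy rates the abstract reports.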


Journal of Speech Language and Hearing Research | 2014

Automated Analysis of Child Phonetic Production Using Naturalistic Recordings

Dongxin Xu; Jeffrey A. Richards; Jill Gilkerson

PURPOSE Conventional resource-intensive methods for child phonetic development studies are often impractical for sampling and analyzing child vocalizations in sufficient quantity. The purpose of this study was to provide new information on early language development by an automated analysis of child phonetic production using naturalistic recordings. The new approach was evaluated relative to conventional manual transcription methods. Its effectiveness was demonstrated by a case study with 106 children with typical development (TD) ages 8-48 months, 71 children with autism spectrum disorder (ASD) ages 16-48 months, and 49 children with language delay (LD) not related to ASD ages 10-44 months. METHOD A small digital recorder in the chest pocket of clothing captured full-day natural child vocalizations, which were automatically classified as consonant, vowel, nonspeech, or silence, producing the average count per utterance (ACPU) for consonant and vowel. RESULTS Clear child utterances were identified with above 72% accuracy. Correlations between machine-estimated and human-transcribed ACPUs were above 0.82. Children with TD produced significantly more consonants and vowels per utterance than did other children. Children with LD produced significantly more consonants, but not vowels, than did children with ASD. CONCLUSION The authors provide new information on typical and atypical language development in children with TD, ASD, and LD using an automated computational approach.
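The ACPU measure defined in this abstract reduces to a simple computation once segments are labeled: count consonants and vowels per utterance and average across utterances. This is a minimal sketch under assumed data structures; the single-letter labels are hypothetical, not the system's actual output format.

```python
def acpu(utterances):
    """utterances: list of per-segment label sequences drawn from
    {'C' (consonant), 'V' (vowel), 'N' (nonspeech), 'S' (silence)}.
    Returns (consonant ACPU, vowel ACPU)."""
    if not utterances:
        return 0.0, 0.0
    consonants = sum(seq.count('C') for seq in utterances)
    vowels = sum(seq.count('V') for seq in utterances)
    return consonants / len(utterances), vowels / len(utterances)

# Three toy utterances from one recording.
recording = [list("CVCVS"), list("VVNS"), list("CVVS")]
c_acpu, v_acpu = acpu(recording)
print(f"consonant ACPU: {c_acpu:.2f}, vowel ACPU: {v_acpu:.2f}")
```

Nonspeech and silence segments are counted toward neither measure, so ACPU reflects only the phonetic content of each utterance.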


American Journal of Speech-language Pathology | 2014

Vocal Interaction Between Children With Down Syndrome and Their Parents

Kathy Thiemann-Bourque; Steven F. Warren; Nancy C. Brady; Jill Gilkerson; Jeffrey A. Richards

PURPOSE The purpose of this study was to describe differences in parent input and child vocal behaviors of children with Down syndrome (DS) compared with typically developing (TD) children. The goals were to describe the language learning environments at distinctly different ages in early childhood. METHOD Nine children with DS and 9 age-matched TD children participated; 4 children in each group were ages 9-11 months, and 5 were between 25 and 54 months. Measures were derived from automated vocal analysis. A digital language processor measured the richness of the child's language environment, including number of adult words, conversational turns, and child vocalizations. RESULTS Analyses indicated no significant differences in words spoken by parents of younger versus older children with DS and significantly more words spoken by parents of TD children than parents of children with DS. Differences between the DS and TD groups were observed in rates of all vocal behaviors, with no differences noted between the younger versus older children with DS, and the younger TD children did not vocalize significantly more than the younger DS children. CONCLUSIONS Parents of children with DS continue to provide consistent levels of input across the early language learning years; however, child vocal behaviors remain low after the age of 24 months, suggesting the need for additional and alternative intervention approaches.


Autism Research | 2013

Stability and Validity of an Automated Measure of Vocal Development From Day-Long Samples in Children With and Without Autism Spectrum Disorder

Paul J. Yoder; D. K. Oller; Jeffrey A. Richards; Sharmistha Gray; Jill Gilkerson

Individual difference measures of vocal development may eventually aid our understanding of the variability in spoken language acquisition in children with autism spectrum disorder (ASD). Large samples of child vocalizations may be needed to maximize the stability of vocal development estimates. Day‐long vocal samples can now be automatically analyzed based on acoustic characteristics of speech likeness identified in theoretically driven and empirically cross‐validated quantitative models of typical vocal development. This report indicates that a single day‐long recording can produce a stable estimate for a measure of vocal development that is highly related to expressive spoken language in a group of young children with ASD and in a group that is typically developing. Autism Res 2013, 6: 103–107.

Collaboration


Dive into Jill Gilkerson's collaborations.

Top Co-Authors


Dongxin Xu

University of Colorado Boulder


Umit H. Yapanel

University of Colorado Boulder


D. Kimbrough Oller

Konrad Lorenz Institute for Evolution and Cognition Research


John H. L. Hansen

University of Texas at Dallas
