Julia R. Irwin
Southern Connecticut State University
Publications
Featured research published by Julia R. Irwin.
Journal of the American Academy of Child and Adolescent Psychiatry | 2003
Sarah M. Horwitz; Julia R. Irwin; Joan M. Bosson Heenan; Jennifer Mendoza; Alice S. Carter
OBJECTIVE To document the prevalence of expressive language delay in relation to age and gender in 12- to 39-month-old children, and to document the characteristics, particularly social competence and emotional/behavioral problems, related to deficits in expressive language. METHOD Parents of an age- and sex-stratified random sample of children born at Yale New Haven Hospital between July 1995 and September 1997 who lived in the New Haven-Meriden Standard Metropolitan Statistical Area were enrolled when their children were 12 to 39 months of age (79.8% participation; N = 1,189). The main outcome for these analyses is expressive language delay measured by the MacArthur Communicative Development Inventory, short forms. RESULTS Expressive language delays range from 13.5% in 18- to 23-month-olds to 17.5% in children 30 to 36 months of age. By 18 to 23 months, children are more likely to experience delays if they come from environments characterized by low education, low expressiveness, poverty, high levels of parenting stress, and parents who report worry about their children's language problems. When social competence is adjusted for in the multivariable model, behavior problems are no longer associated with language delay, suggesting that poor social competence rather than behavior problems may be the critical early correlate of low expressive language development. CONCLUSIONS Expressive language delays are prevalent problems that appear to be associated with poor social competence. Given that such problems may be risk factors for social and emotional problems, early identification is critical.
Journal of Autism and Developmental Disorders | 2008
Elizabeth A. Mongillo; Julia R. Irwin; D. H. Whalen; Cheryl Klaiman; Alice S. Carter; Robert T. Schultz
Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces and voices, but scored similarly to children without ASD on audiovisual tasks involving nonhuman stimuli (bouncing balls). Results suggest that children with ASD may use visual information for speech differently from children without ASD. Exploratory results support an inverse association between audiovisual speech processing capacities and social impairment in children with ASD.
Child Development | 2011
Julia R. Irwin; Lauren A. Tornatore; Lawrence Brancazio; D. H. Whalen
This study used eye-tracking methodology to assess audiovisual speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development. Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced influence of visual information on heard speech. Responses were compared on a set of auditory, visual, and audiovisual speech perception tasks. Even when fixated on the face of the speaker, children with ASD were less visually influenced than typically developing controls. This indicates fundamental differences in the processing of audiovisual speech in children with ASD, which may contribute to their language and communication impairments.
Attention Perception & Psychophysics | 2006
Julia R. Irwin; D. H. Whalen; Carol A. Fowler
Reports of sex differences in language processing are inconsistent and are thought to vary by task type and difficulty. In two experiments, we investigated a sex difference in visual influence on heard speech (the McGurk effect). First, incongruent consonant-vowel stimuli were presented where the visual portion of the signal was brief (100 msec) or full (temporally equivalent to the auditory). Second, to determine whether men and women differed in their ability to extract visual speech information from these brief stimuli, the same stimuli were presented to new participants with an additional visual-only (lipread) condition. In both experiments, women showed a significantly greater visual influence on heard speech than did men for the brief visual stimuli. No sex differences for the full stimuli or in the ability to lipread were found. These findings indicate that the more challenging brief visual stimuli elicit sex differences in the processing of audiovisual speech.
Frontiers in Psychology | 2014
Julia R. Irwin; Lawrence Brancazio
Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were observed for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.
Cognitive Neurodynamics | 2012
Julia R. Irwin; Damian G. Stephen
This study analyzed distributions of Euclidean displacements in gaze (i.e. “gaze steps”) to evaluate the degree of componential cognitive constraints on audio-visual speech perception tasks. Children performing these tasks exhibited distributions of gaze steps that were closest to power-law or lognormal distributions, suggesting a multiplicatively interactive, flexible, self-organizing cognitive system rather than a component-dominant stipulated cognitive structure. Younger children and children diagnosed with an autism spectrum disorder (ASD) exhibited distributions that were closer to power-law than lognormal, indicating a reduced degree of self-organized structure. The relative goodness of lognormal fit was also a significant predictor of ASD, suggesting that this type of analysis may point towards a promising diagnostic tool. These results lend further support to an interaction-dominant framework that casts cognitive processing and development in terms of self-organization instead of fixed components and show that these analytical methods are sensitive to important developmental and neuropsychological differences.
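The core analysis described above, fitting competing power-law and lognormal distributions to gaze-step magnitudes and comparing their goodness of fit, can be sketched with numpy-only maximum-likelihood estimates. This is an illustrative reconstruction, not the authors' actual pipeline: the gaze-step data here are synthetic, and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical gaze-step magnitudes (e.g., pixels); lognormal by construction.
steps = rng.lognormal(mean=1.0, sigma=0.5, size=2000)

# Lognormal: MLE is the mean and std of the log-transformed data.
logs = np.log(steps)
mu, sigma = logs.mean(), logs.std()
ll_lognormal = np.sum(
    -logs - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    - (logs - mu) ** 2 / (2 * sigma ** 2)
)

# Continuous power law (Pareto): pdf = (alpha - 1)/xmin * (x/xmin)^(-alpha),
# with the standard MLE alpha_hat = 1 + n / sum(log(x / xmin)).
xmin = steps.min()
alpha = 1 + len(steps) / np.sum(np.log(steps / xmin))
ll_powerlaw = np.sum(
    np.log(alpha - 1) - np.log(xmin) - alpha * np.log(steps / xmin)
)

# Higher total log-likelihood indicates the better-fitting candidate.
better = "lognormal" if ll_lognormal > ll_powerlaw else "power-law"
print(better, ll_lognormal - ll_powerlaw)
```

In the study's framework, data closer to power-law than lognormal were taken to indicate reduced self-organized structure; a log-likelihood (or log-likelihood-ratio) comparison like the one above is one common way to operationalize "closer to."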
Clinical Linguistics & Phonetics | 2015
Julia R. Irwin; Jonathan L. Preston; Lawrence Brancazio; Michael D'angelo; Jacqueline Turcios
Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, ages 8-10, are presented showing that the children improved their performance on an untrained auditory speech-in-noise task.
Developmental Neuropsychology | 2014
Jonathan L. Preston; Peter J. Molfese; Nina Gumkowski; Andrea Sorcinelli; Vanessa Harwood; Julia R. Irwin; Nicole Landi
Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.
Language and Linguistics Compass | 2017
Julia R. Irwin; Lori DiBlasi
This selected overview of audiovisual (AV) speech perception examines the influence of visible articulatory information on what is heard. AV speech perception is thought to be a cross-cultural phenomenon that emerges early in typical language development; variables that influence it include properties of the visual and the auditory signal, attentional demands, and individual differences. A brief review of the existing neurobiological evidence on how visual information influences heard speech indicates potential loci, timing, and facilitatory effects of AV over auditory-only speech. The current literature on AV speech in certain clinical populations (individuals with an autism spectrum disorder, developmental language disorder, or hearing loss) reveals differences in processing that may inform interventions. Finally, a new method of assessing AV speech that does not require obvious cross-category mismatch or auditory noise is presented as a novel approach for investigators.
Journal of the Acoustical Society of America | 2006
Julia R. Irwin
Children with autism spectrum disorder (ASD) appear to be less influenced by visual speech information than typically developing children, as measured by their responses to mismatching auditory and visual (McGurk) stimuli. This study examined whether this reduction in sensitivity to the McGurk effect is due to eye gaze aversion, a hallmark of ASD. Children with ASD and typically developing (TD) controls were presented with videotaped consonant-vowel (CV) stimuli. Stimuli were digitally edited to create either an audiovisual (AV) match (AV /ma/ or /na/) or mismatch (audio /ma/ and visual /ga/). Responses were considered visually influenced if participants reported hearing /na/ for the mismatched stimuli. Using eye-tracking methodology, only trials where the participant's gaze was fixated on the speaker's face during consonantal closure were included in analyses. Initial analyses reveal that, when fixated on the speaker's face, children with ASD show significantly less visual influence relative to typically developing controls.