Kali Woodruff Carr
Northwestern University
Publication
Featured research published by Kali Woodruff Carr.
The Journal of Neuroscience | 2013
Travis White-Schwoch; Kali Woodruff Carr; Samira Anderson; Dana L. Strait; Nina Kraus
Aging results in pervasive declines in nervous system function. In the auditory system, these declines include neural timing delays in response to fast-changing speech elements; this causes older adults to experience difficulty understanding speech, especially in challenging listening environments. These age-related declines are not inevitable, however: older adults with a lifetime of music training do not exhibit neural timing delays. Yet many people play an instrument for a few years without making a lifelong commitment. Here, we examined neural timing in a group of human older adults who had nominal amounts of music training early in life, but who had not played an instrument for decades. We found that a moderate amount (4–14 years) of music training early in life is associated with faster neural timing in response to speech later in life, long after training stopped (>40 years). We suggest that early music training sets the stage for subsequent interactions with sound. These experiences may interact over time to sustain sharpened neural processing in central auditory nuclei well into older age.
Proceedings of the National Academy of Sciences of the United States of America | 2014
Kali Woodruff Carr; Travis White-Schwoch; Adam Tierney; Dana L. Strait; Nina Kraus
Significance: Sensitivity to fine timing cues in speech is thought to play a key role in language learning, facilitating the development of phonological processing. In fact, a link between beat synchronization, which requires fine auditory–motor synchrony, and language skills has been found in school-aged children, as well as adults. Here, we show this relationship between beat entrainment and language metrics in preschoolers and use beat synchronization ability to predict the precision of neural encoding of speech syllables in these emergent readers. By establishing links between beat keeping, neural precision, and reading readiness, our results provide an integrated framework that offers insights into the preparative biology of reading.
Temporal cues are important for discerning word boundaries and syllable segments in speech; their perception facilitates language acquisition and development. Beat synchronization and neural encoding of speech reflect precision in processing temporal cues and have been linked to reading skills. In poor readers, diminished neural precision may contribute to rhythmic and phonological deficits. Here we establish links between beat synchronization and speech processing in children who have not yet begun to read: preschoolers who can entrain to an external beat have more faithful neural encoding of temporal modulations in speech and score higher on tests of early language skills. In summary, we propose precise neural encoding of temporal modulations as a key mechanism underlying reading acquisition. Because beat synchronization abilities emerge at an early age, these findings may inform strategies for early detection of and intervention for language-based learning disabilities.
PLOS Biology | 2015
Travis White-Schwoch; Kali Woodruff Carr; Elaine C. Thompson; Samira Anderson; Trent Nicol; Ann R. Bradlow; Steven G. Zecker; Nina Kraus
Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child’s future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3–14 y), we show brain–behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers’ performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
Developmental Cognitive Neuroscience | 2016
Kali Woodruff Carr; Adam Tierney; Travis White-Schwoch; Nina Kraus
Highlights
• The ability to synchronize to a beat is linked to language and literacy skills.
• We examine links between preschool synchronization and auditory neural encoding.
• We reveal a systematic relationship between synchronization and speech encoding.
• Synchronization provides an index of auditory neural precision in early childhood.
Journal of The American Academy of Audiology | 2015
Emily Spitzer; Travis White-Schwoch; Kali Woodruff Carr; Erika Skoe; Nina Kraus
BACKGROUND Click-evoked auditory brainstem responses (ABRs) are a valuable tool for probing auditory system function and development. Although it has long been thought that the human auditory brainstem is fully mature by age 2 yr, recent evidence indicates a prolonged developmental trajectory.
PURPOSE The purpose of this study was to determine the time course of ABR maturation in a preschool population and fill a gap in the knowledge of development.
RESEARCH DESIGN Using a cross-sectional design, we investigated the effect of age on absolute latencies, interwave latencies, and amplitudes (waves I, III, V) of the click-evoked ABR.
STUDY SAMPLE A total of 71 preschoolers (ages 3.12-4.99 yr) participated in the study. All had normal peripheral auditory function and IQ.
DATA COLLECTION AND ANALYSIS ABRs to a rarefaction click stimulus presented at 31/sec and 80 dB SPL (73 dB nHL) were recorded monaurally using clinically-standard recording and filtering procedures while the participant sat watching a movie. Absolute latencies, interwave latencies, and amplitudes were then correlated to age.
RESULTS Developmental changes were restricted to absolute latencies. Wave V latency decreased significantly with age, whereas wave I and III latencies remained stable, even in this restricted age range.
CONCLUSIONS The ABR does not remain static after age 2 yr, as seen by a systematic decrease in wave V latency between ages 3 and 5 yr. This finding suggests that the human brainstem has a continued developmental time course during the preschool years. Latency changes in the age 3-5 yr range should be considered when using ABRs as a metric of hearing health.
Scientific Reports | 2016
Elaine C. Thompson; Kali Woodruff Carr; Travis White-Schwoch; Adam Tierney; Trent Nicol; Nina Kraus
Speech signals contain information in hierarchical time scales, ranging from short-duration (e.g., phonemes) to long-duration cues (e.g., syllables, prosody). A theoretical framework to understand how the brain processes this hierarchy suggests that hemispheric lateralization enables specialized tracking of acoustic cues at different time scales, with the left and right hemispheres sampling at short (25 ms; 40 Hz) and long (200 ms; 5 Hz) periods, respectively. In adults, both speech-evoked and endogenous cortical rhythms are asymmetrical: low-frequency rhythms predominate in right auditory cortex, and high-frequency rhythms in left auditory cortex. It is unknown, however, whether endogenous resting state oscillations are similarly lateralized in children. We investigated cortical oscillations in children (3–5 years; N = 65) at rest and tested our hypotheses that this temporal asymmetry is evident early in life and facilitates recognition of speech in noise. We found a systematic pattern of increasing leftward asymmetry for higher frequency oscillations; this pattern was more pronounced in children who better perceived words in noise. The observed connection between left-biased cortical oscillations in phoneme-relevant frequencies and speech-in-noise perception suggests hemispheric specialization of endogenous oscillatory activity may support speech processing in challenging listening environments, and that this infrastructure is present during early childhood.
Brain and Language | 2017
Kali Woodruff Carr; Ahren B. Fitzroy; Adam Tierney; Travis White-Schwoch; Nina Kraus
Highlights
• Adolescents performed a sensorimotor synchronization task with online feedback.
• Synchronization during feedback predicted reading skills.
• Synchronization during feedback correlated with speech-evoked CAEP maturation.
• Synchronization during feedback correlated with resting gamma activity maturation.
ABSTRACT Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization during the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments.
Hearing Research | 2017
Elaine C. Thompson; Kali Woodruff Carr; Travis White-Schwoch; Sebastian Otto-Meyer; Nina Kraus
From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3–5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ~12 months), we followed a cohort of 59 preschoolers, ages 3.0–4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known to play a central role in speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with enhancements and diminishments of F0 encoding, respectively. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to strength of neural harmonic representation or short-term memory. These findings reinforce previously reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children.
Highlights
• Results reveal developmental changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0).
• Improvements in word-in-noise perception also pair with gains in attention.
• These findings reinforce the roles of F0 and attention in hearing speech, and extend them to preschool children.
Language, Cognition and Neuroscience | 2018
Jessica Slater; Nina Kraus; Kali Woodruff Carr; Adam Tierney; Andrea Azem; Richard Ashley
ABSTRACT Speech rhythms guide perception, especially in noise. We recently revealed that percussionists outperform non-musicians in speech-in-noise perception, with better speech-in-noise perception associated with better rhythm discrimination across a range of rhythmic expertise. Here, we consider rhythm production skills, specifically drumming to a beat (metronome or music) and to sequences (metrical or jittered patterns), as well as speech-in-noise perception in adult percussionists and non-musicians. Given the absence of a regular beat in speech, we hypothesise that processing of sequences is more important for speech-in-noise perception than the ability to entrain to a regular beat. Consistent with our hypothesis, we find that the sequence-based drumming measures predict speech-in-noise perception, above and beyond hearing thresholds and IQ, whereas the beat-based measures do not. Outcomes suggest temporal patterns may help disambiguate speech under degraded listening conditions, extending theoretical considerations about speech rhythm to the everyday challenge of listening in noise.
Journal of the Acoustical Society of America | 2018
Kali Woodruff Carr; Beverly A. Wright
For perceptual learning on fine-grained discrimination tasks, improvement can be enhanced or disrupted when two tasks are trained, depending on how the training on those tasks is distributed. To investigate if this phenomenon extends to speech learning, we trained native-English speakers to transcribe both Mandarin- and Turkish-accented English sentences using one of three different configurations of the same training stimuli. After training, all trained groups performed better than untrained controls, and similarly to each other, on a novel talker of a trained accent (Mandarin). However, for a novel accent (Slovakian), performance was better than untrained controls when training alternated between the two accents, but not when the two accents were trained consecutively. Performance for the novel accent decreased as the number of contiguous sentences per accent during training increased. One interpretation of these results is that accent information is integrated during a restricted time window. If two ac...