Rachel Reetzke
University of Texas at Austin
Publications
Featured research published by Rachel Reetzke.
Journal of Speech Language and Hearing Research | 2015
Rachel Reetzke; Xiaobing Zou; Li Sheng; Napoleon Katsos
PURPOSE We examined the association of bilingual exposure with structural and pragmatic language development in Chinese children with autism spectrum disorders (ASDs). METHOD The parents of 54 children with ASD exposed to 1 (n = 31) or 2 (n = 23) Chinese languages completed (a) a questionnaire to evaluate their child's competence in structural language and pragmatic ability in their dominant language (Children's Communication Checklist-Second Edition; Bishop, 2006), and (b) a questionnaire to assess their child's social functioning (Social Responsiveness Scale; Constantino & Gruber, 2005; Wang, Lee, Chen, & Hsu, 2012). In addition, parents completed thorough interviews regarding the linguistic environment of their children (Language Environment Interview; Hambly & Fombonne, 2011). RESULTS Multivariate analyses of variance revealed that bilingually exposed children with ASD did not demonstrate significantly different performance on any standard measure relative to their monolingual peers. CONCLUSIONS The findings suggest that bilingual language exposure is not associated with additional challenges for the development of the dominant language in children with ASD. The lack of negative associations in our sample is unlikely to be attributable to the comparatively early diagnosis and/or intervention available in other countries. We discuss implications for decisions regarding the linguistic environment of children with ASD.
Journal of Neurophysiology | 2017
Zilong Xie; Rachel Reetzke; Bharath Chandrasekaran
While lifelong language experience modulates subcortical encoding of pitch patterns, there is emerging evidence that short-term training introduced in adulthood also shapes subcortical pitch encoding. Here we use a cross-language design to examine the stability of language experience-dependent subcortical plasticity over multiple days. We then examine the extent to which behavioral relevance induced by sound-to-category training leads to plastic changes in subcortical pitch encoding in adulthood relative to adolescence, a period of ongoing maturation of subcortical and cortical auditory processing. Frequency-following responses (FFRs), which reflect phase-locked activity from subcortical neural ensembles, were elicited while participants passively listened to pitch patterns reflective of Mandarin tones. In experiment 1, FFRs were recorded across three consecutive days from native Chinese-speaking (n = 10) and English-speaking (n = 10) adults. In experiment 2, FFRs were recorded from native English-speaking adolescents (n = 20) and adults (n = 15) before, during, and immediately after a session of sound-to-category training, as well as a day after training ceased. Experiment 1 demonstrated the stability of language experience-dependent subcortical plasticity in pitch encoding across multiple days of passive exposure to linguistic pitch patterns. In contrast, experiment 2 revealed an enhancement in subcortical pitch encoding that emerged a day after the sound-to-category training, with some developmental differences observed. Taken together, these findings suggest that behavioral relevance is a critical component for the observation of plasticity in the subcortical encoding of pitch. NEW & NOTEWORTHY: We examine the timescale of experience-dependent auditory plasticity to linguistically relevant pitch patterns. We find extreme stability in lifelong experience-dependent plasticity. We further demonstrate that subcortical function in adolescents and adults is modulated by a single session of sound-to-category training. Our results suggest that behavioral relevance is a necessary ingredient for neural changes in pitch encoding to be observed throughout human development. These findings contribute to the neurophysiological understanding of long- and short-term experience-dependent modulation of pitch.
Brain and behavior | 2017
Han G. Yi; Zilong Xie; Rachel Reetzke; Alexandros G. Dimakis; Bharath Chandrasekaran
Scalp-recorded electrophysiological responses to complex, periodic auditory signals reflect phase-locked activity from neural ensembles within the auditory system. These responses, referred to as frequency-following responses (FFRs), have been widely utilized to index typical and atypical representation of speech signals in the auditory system. One of the major limitations of the FFR is the low signal-to-noise ratio at the level of single trials. For this reason, the analysis relies on averaging across thousands of trials. The ability to examine the quality of single-trial FFRs will allow investigation of trial-by-trial dynamics of the FFR, which has been impossible due to the averaging approach.
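The averaging rationale described above can be sketched with a toy simulation: averaging N trials of a phase-locked response buried in independent noise improves the signal-to-noise ratio of the mean waveform by roughly 10·log10(N) dB. All parameters here (sampling rate, signal frequency, noise level, trial count) are hypothetical illustrations, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 10_000                                  # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.1, 1 / fs)                # 100 ms epoch
signal = 1e-3 * np.sin(2 * np.pi * 200 * t)  # weak phase-locked 200 Hz "FFR" component

def simulate_trials(n_trials, noise_sd=5e-3):
    """Each trial is the same phase-locked signal plus independent noise."""
    return signal + rng.normal(0, noise_sd, size=(n_trials, t.size))

def snr_db(waveform):
    """SNR of a waveform relative to the known underlying signal."""
    residual = waveform - signal
    return 10 * np.log10(np.mean(signal**2) / np.mean(residual**2))

single = simulate_trials(1).mean(axis=0)
averaged = simulate_trials(2000).mean(axis=0)
print(f"single trial:       {snr_db(single):6.1f} dB")
print(f"2000-trial average: {snr_db(averaged):6.1f} dB")
```

In this sketch, averaging 2000 trials buys roughly 33 dB of SNR, which is why conventional FFR analysis averages thousands of trials and why single-trial quality metrics are nontrivial.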
PLOS ONE | 2016
Rachel Reetzke; Boji Pak-Wing Lam; Zilong Xie; Li Sheng; Bharath Chandrasekaran
Recognizing speech in adverse listening conditions is a significant cognitive, perceptual, and linguistic challenge, especially for children. Prior studies have yielded mixed results on the impact of bilingualism on speech perception in noise. Methodological variations across studies make it difficult to converge on a conclusion regarding the effect of bilingualism on speech-in-noise performance. Moreover, there is a dearth of speech-in-noise evidence for bilingual children who learn two languages simultaneously. The aim of the present study was to examine the extent to which various adverse listening conditions modulate differences in speech-in-noise performance between monolingual and simultaneous bilingual children. To that end, sentence recognition was assessed in twenty-four school-aged children (12 monolinguals; 12 simultaneous bilinguals, age of English acquisition ≤ 3 yrs.). We implemented a comprehensive speech-in-noise battery to examine recognition of English sentences across different modalities (audio-only, audiovisual), masker types (steady-state pink noise, two-talker babble), and a range of signal-to-noise ratios (SNRs; 0 to -16 dB). Results revealed no difference in performance between monolingual and simultaneous bilingual children across each combination of modality, masker, and SNR. Our findings suggest that when English age of acquisition and socioeconomic status are similar between groups, monolingual and bilingual children exhibit comparable speech-in-noise performance across a range of conditions analogous to everyday listening environments.
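The SNR manipulation in a battery like the one above (0 to -16 dB) is conventionally produced by scaling the masker against the target. A minimal sketch of that scaling, with random waveforms standing in for the study's speech and noise stimuli (all signals here are illustrative, not the actual materials):

```python
import numpy as np

rng = np.random.default_rng(1)

def mix_at_snr(speech, noise, snr_db):
    """Scale the masker so the mixture has the requested speech-to-noise SNR in dB."""
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    # Solve snr_db = 10*log10(p_speech / (g**2 * p_noise)) for the noise gain g.
    g = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + g * noise

speech = rng.normal(0, 1.0, 16_000)  # stand-in for a target sentence waveform
noise = rng.normal(0, 0.3, 16_000)   # stand-in for pink noise or babble

for snr in (0, -8, -16):
    mixed = mix_at_snr(speech, noise, snr)
    achieved = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech) ** 2))
    print(f"target {snr:>4} dB -> achieved {achieved:6.1f} dB")
```

Because the gain is solved analytically, the achieved SNR matches the target exactly; adaptive procedures would instead step the gain based on listener responses.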
Current Biology | 2018
Rachel Reetzke; Zilong Xie; Fernando Llanos; Bharath Chandrasekaran
Although challenging, adults can learn non-native phonetic contrasts with extensive training [1, 2], indicative of perceptual learning beyond an early sensitivity period [3, 4]. Training can alter low-level sensory encoding of newly acquired speech sound patterns [5]; however, the time-course, behavioral relevance, and long-term retention of such sensory plasticity are unclear. Some theories argue that sensory plasticity underlying signal enhancement is immediate and critical to perceptual learning [6, 7]. Others, like the reverse hierarchy theory (RHT), posit a slower time-course for sensory plasticity [8]. RHT proposes that higher-level categorical representations guide immediate, novice learning, while lower-level sensory changes do not emerge until expert stages of learning [9]. We trained 20 English-speaking adults to categorize a non-native phonetic contrast (Mandarin lexical tones) using a criterion-dependent sound-to-category training paradigm. Sensory and perceptual indices were assayed across operationally defined learning phases (novice, experienced, over-trained, and 8-week retention) by measuring the frequency-following response, a neurophonic potential that reflects fidelity of sensory encoding, and the perceptual identification of a tone continuum. Our results demonstrate that while robust changes in sensory encoding and perceptual identification of Mandarin tones emerged with training and were retained, such changes followed different timescales. Sensory changes were evidenced and related to behavioral performance only when participants were over-trained. In contrast, changes in perceptual identification reflecting improvement in categorical percept emerged relatively earlier. Individual differences in perceptual identification, and not sensory encoding, related to faster learning. Our findings support the RHT: sensory plasticity accompanies, rather than drives, expert levels of non-native speech learning.
Archive | 2017
Rachel Reetzke; Zilong Xie; Bharath Chandrasekaran
Literacy acquisition is complex and multifactorial. Successful literacy acquisition places extreme demands on sensory and cognitive processes. Individuals with reading disorders demonstrate a range of linguistic, sensory, and cognitive deficits. In this chapter, the relationship between reading ability and the frequency-following response (FFR) is examined. The utility of the FFR in assessment of successful literacy and reading disorders is reviewed along with the use of FFR as an index of remediation. Finally, the chapter concludes with a discussion of current issues and future directions regarding the utility of the FFR as an objective neural metric of deficits in literacy disorders. Throughout these sections the distinct cognitive, linguistic, and experiential influences on the FFR are highlighted to further demonstrate how the FFR to speech may serve as an auditory biomarker to predict literacy disorders.
Journal of the Acoustical Society of America | 2016
Bharath Chandrasekaran; Rachel Reetzke; Han-Gyol Yi; Jessica Roeder; Zilong Xie; W. Todd Maddox
We conducted a cross-linguistic study to evaluate the impact of language experience on midbrain encoding of acoustic dimensions. Midbrain electrophysiological responses were recorded to the four Mandarin tones in native Chinese (N = 10) and English (N = 10) listeners, through a counter-balanced block design. English participants were trained over multiple days to achieve tone categorization accuracy and reaction time equal to that of the Chinese participants. We assessed the extent to which the four Mandarin tones could be discerned from the electrophysiological responses, using a data-driven machine learning approach. The machine learning output was used to generate dissimilarity matrices that were subjected to a multidimensional scaling (MDS) model. A two-dimensional MDS solution emerged that corresponded to “pitch height” and “pitch direction” of the Mandarin tones. Findings derived from the individual differences scaling (INDSCAL) method revealed that, initially, pitch direction was weighted more by t...
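The second half of the pipeline described above (classifier output → dissimilarity matrix → two-dimensional MDS solution) can be sketched with scikit-learn. The 4×4 dissimilarity values below are illustrative placeholders for the four tones, not the paper's data, and plain metric MDS stands in for the full INDSCAL analysis.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for the four Mandarin tones,
# e.g. derived from pairwise classifier confusions; values are illustrative only.
D = np.array([
    [0.0, 0.6, 0.9, 0.4],
    [0.6, 0.0, 0.5, 0.8],
    [0.9, 0.5, 0.0, 0.7],
    [0.4, 0.8, 0.7, 0.0],
])

# Two-dimensional metric MDS on the precomputed dissimilarities; the two
# recovered axes play the role of the "pitch height"/"pitch direction" solution.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)  # one 2-D coordinate per tone: (4, 2)
```

Each row of `coords` places one tone in the recovered perceptual space; distances between rows approximate the input dissimilarities.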
Journal of the Acoustical Society of America | 2015
Rachel Reetzke; Todd Maddox; Bharath Chandrasekaran
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. The aim of this study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through early adulthood, and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing (20 children, age range 7–12; 20 adolescents, age range 13–19; 20 young adults, age range 20–23) learned to categorize novel spectrotemporally modulated sounds using trial-by-trial feedback. The experimental design included six blocks of 100 stimuli for a total of 600 trials. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal s...
Journal of the Acoustical Society of America | 2014
Rachel Reetzke; Boji Lam; Zilong Xie; Li Sheng; Bharath Chandrasekaran
Developmental and linguistic factors have been found to influence listeners’ ability to recognize speech in noise. However, there is a paucity of evidence exploring how these factors modulate speech perception in everyday listening situations, such as multisensory environments and backgrounds with informational maskers. This study assessed sentence recognition for 30 children (14 monolingual, 16 simultaneous bilingual; ages 6–10) and 31 adults (21 monolingual, ten simultaneous bilingual; ages 18–22). Our experimental design included three within-subject variables: (a) masker type: pink noise or two-talker babble, (b) modality: audio-only or audiovisual, and (c) signal-to-noise ratio (SNR): 0 to -16 dB. Results revealed that across both modalities and noise types, adults performed better than children, and simultaneous bilinguals performed similarly to monolinguals. The age effect was largest at the lowest SNRs of -12 and -16 dB in the audiovisual two-talker babble condition. These findings suggest that chi...
Journal of Experimental Child Psychology | 2016
Rachel Reetzke; W. Todd Maddox; Bharath Chandrasekaran