Tess K. Koerner
University of Minnesota
Publications
Featured research published by Tess K. Koerner.
Developmental Science | 2011
Yang Zhang; Tess K. Koerner; Sharon Miller; Zach Grice-Patil; Adam Svec; David Akbari; Liz Tusler; Edward Carney
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of formant exaggeration. ERP waveform analysis showed significantly enhanced N250 for formant exaggeration, which was more prominent in the right hemisphere than the left. Time-frequency analysis indicated increased neural synchronization for processing formant-exaggerated speech in the delta band at frontal-central-parietal electrode sites as well as in the theta band at frontal-central sites. Minimum norm estimates further revealed a bilateral temporal-parietal-frontal neural network in the infant brain sensitive to formant exaggeration. Collectively, these results provide the first evidence that formant expansion in infant-directed speech enhances neural activities for phonetic encoding and language learning.
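The N250 effect described above rests on conventional ERP averaging: single-trial epochs time-locked to each vowel are averaged per condition, and the response is then quantified in a post-stimulus window. Below is a minimal numpy sketch of that step; the epoch dimensions, sampling rate, and 200-300 ms N250 window are illustrative assumptions rather than values taken from the study.

```python
import numpy as np

def erp_average(epochs):
    """Average single-trial epochs (trials x channels x samples) into an ERP."""
    return epochs.mean(axis=0)

def mean_amplitude(erp, times, t_start, t_end):
    """Mean ERP amplitude in a latency window (seconds), per channel."""
    mask = (times >= t_start) & (times <= t_end)
    return erp[:, mask].mean(axis=1)

# Illustrative synthetic data: 100 trials, 32 channels, 1-s epochs at 250 Hz.
rng = np.random.default_rng(0)
fs = 250
times = np.arange(-0.1, 0.9, 1 / fs)          # -100 ms to 900 ms
epochs_exaggerated = rng.normal(size=(100, 32, times.size))
epochs_plain = rng.normal(size=(100, 32, times.size))

erp_exag = erp_average(epochs_exaggerated)
erp_plain = erp_average(epochs_plain)

# Read out amplitude in an assumed N250 window (200-300 ms post-stimulus).
n250_exag = mean_amplitude(erp_exag, times, 0.20, 0.30)
n250_plain = mean_amplitude(erp_plain, times, 0.20, 0.30)
print("N250 difference (exaggerated - plain), per channel:", n250_exag - n250_plain)
```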
Hearing Research | 2015
Tess K. Koerner; Yang Zhang
This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as phase locking value (PLV)) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal-hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/ syllable in quiet and noise listening conditions. ITPC data in delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures.
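Inter-trial phase coherence is computed by extracting the instantaneous phase of each trial in a frequency band and measuring how consistently those phases align across trials at each time point. Here is a small sketch of that calculation using scipy, assuming band-pass filtering plus the Hilbert transform as the phase-extraction step; the filter order, band edges, and synthetic data are illustrative choices, not the study's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(epochs, fs, band):
    """Inter-trial phase coherence for one channel.

    epochs : array (n_trials, n_samples) of single-trial EEG
    fs     : sampling rate in Hz
    band   : (low, high) pass-band in Hz, e.g. theta = (4, 8)
    Returns an array (n_samples,) with values between 0 (random phase)
    and 1 (perfect phase alignment across trials).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Illustrative use with synthetic trials (200 trials, 1-s epochs at 500 Hz).
rng = np.random.default_rng(1)
fs = 500
trials = rng.normal(size=(200, fs))
theta_itpc = itpc(trials, fs, (4, 8))
print("Peak theta ITPC:", theta_itpc.max())
```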
Brain Sciences | 2017
Tess K. Koerner; Yang Zhang
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Statistical techniques that can account for repeated measures and multivariate predictor variables are therefore essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for quantifying the relationships between behavioral speech-in-noise recognition and the multiple neurophysiological measures, because it treats the neural responses across listening conditions as independent observations. In contrast, the LME models provide a systematic way to incorporate both fixed-effect and random-effect terms that handle the categorical grouping factor of listening condition, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity for, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
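The contrast drawn above between a Pearson correlation and a linear mixed-effects model can be made concrete with a short sketch. The example below uses scipy and statsmodels on simulated long-format data; the column names (subject, condition, neural, behavior) and the simulated values are hypothetical stand-ins for the kinds of repeated-measures variables described, not the study's actual data or model specification.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x listening condition,
# with a neural predictor (e.g., an ITPC value) and a behavioral outcome
# (a speech-in-noise score). All values are simulated for illustration only.
rng = np.random.default_rng(2)
condition_means = {"quiet": 0.60, "low_noise": 0.45, "high_noise": 0.30}
rows = []
for subject in range(15):
    baseline = rng.normal(0, 1)                      # between-subject offset
    for condition, mu in condition_means.items():
        neural = rng.normal(mu, 0.08)
        behavior = 60 + 40 * neural + 5 * baseline + rng.normal(0, 2)
        rows.append({"subject": subject, "condition": condition,
                     "neural": neural, "behavior": behavior})
df = pd.DataFrame(rows)

# A naive Pearson correlation treats every row as an independent observation,
# ignoring that each subject contributes one value per listening condition.
r, p = pearsonr(df["neural"], df["behavior"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# A linear mixed-effects model keeps condition as a fixed effect and adds a
# random intercept per subject to absorb between-subject baseline differences.
lme = smf.mixedlm("behavior ~ neural + condition", data=df,
                  groups=df["subject"]).fit()
print(lme.summary())
```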
Hearing Research | 2016
Tess K. Koerner; Yang Zhang; Peggy B. Nelson; Boxiang Wang; Hui Zou
Successful speech communication requires the extraction of important acoustic cues from irrelevant background noise. In order to better understand this process, this study examined the effects of background noise on mismatch negativity (MMN) latency, amplitude, and spectral power measures as well as behavioral speech intelligibility tasks. Auditory event-related potentials (AERPs) were obtained from 15 normal-hearing participants to determine whether pre-attentive MMN measures recorded in response to a consonant (from /ba/ to /da/) and vowel change (from /ba/ to /bu/) in a double-oddball paradigm can predict sentence-level speech perception. The results showed that background noise increased MMN latencies and decreased MMN amplitudes with a reduction in the theta frequency band power. Differential noise-induced effects were observed for the pre-attentive processing of consonant and vowel changes due to different degrees of signal degradation by noise. Linear mixed-effects models further revealed significant correlations between the MMN measures and speech intelligibility scores across conditions and stimuli. These results confirm the utility of MMN as an objective neural marker for understanding noise-induced variations as well as individual differences in speech perception, which has important implications for potential clinical applications.
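The MMN itself is obtained as a difference wave: the averaged response to the standard stimulus is subtracted from the averaged response to the deviant, and the amplitude and latency of the resulting negativity are read out in a post-stimulus window. A minimal numpy sketch follows; the 100-300 ms search window, channel counts, and synthetic epochs are illustrative assumptions only.

```python
import numpy as np

def mmn_difference_wave(standard_epochs, deviant_epochs):
    """Mismatch negativity as the deviant-minus-standard ERP (channels x samples)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_peak(diff_wave, times, channel, t_start=0.10, t_end=0.30):
    """Most negative point of the difference wave in an assumed MMN window."""
    mask = (times >= t_start) & (times <= t_end)
    segment = diff_wave[channel, mask]
    idx = segment.argmin()
    return segment[idx], times[mask][idx]        # amplitude, latency

# Illustrative synthetic data: 300 standards, 60 deviants, 32 channels, 250 Hz.
rng = np.random.default_rng(3)
fs = 250
times = np.arange(-0.1, 0.5, 1 / fs)
standards = rng.normal(size=(300, 32, times.size))
deviants = rng.normal(size=(60, 32, times.size))

diff = mmn_difference_wave(standards, deviants)
amp, lat = mmn_peak(diff, times, channel=0)
print(f"MMN peak at channel 0: {amp:.2f} uV at {lat * 1000:.0f} ms")
```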
Journal of the Acoustical Society of America | 2012
Tess K. Koerner; Yang Zhang; Peggy B. Nelson
Research has shown that the amplitude and latency of neural responses to passive mismatch negativity (MMN) tasks are affected by noise (Billings et al., 2010). Further studies have revealed that informational masking noise results in decreased P3 amplitude and increased P3 latency, which correlate with decreased discrimination ability and increased reaction time (Bennett et al., 2012). This study aims to further investigate neural processing of speech in differing types of noise by attempting to correlate MMN neural responses to consonant and vowel stimuli with results from behavioral sentence recognition tasks. Preliminary behavioral data indicate that noise conditions significantly compromise the perception of consonant change in an oddball discrimination task. Noise appears to have less of an effect on the perception of vowel change. The MMN data are being collected for the detection of consonant change and vowel change in different noise conditions. The results will be examined to address how well the pre-at...
Hearing Research | 2017
Tess K. Koerner; Yang Zhang; Peggy B. Nelson; Boxiang Wang; Hui Zou
This study examined how speech-babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in delta, theta, and alpha frequency bands for the P3 response. Behavioral measures included percent correct phoneme detection and reaction time as well as percent correct IEEE sentence recognition in quiet and in noise. Linear mixed-effects models were applied to determine possible brain-behavior correlates. A significant noise-induced reduction in P3 amplitude was found, accompanied by significantly longer P3 latency and decreases in ITPC across all frequency bands of interest. There was a differential effect of noise on consonant discrimination and vowel discrimination in both ERP and behavioral measures, such that noise impacted the detection of the consonant change more than the vowel change. The P3 amplitude and some of the ITPC and ERSP measures were significant predictors of speech perception at the segmental and sentence levels across listening conditions and stimuli. These data demonstrate that the P3 response with its associated cortical oscillations represents a potential neurophysiological marker for speech perception in noise. Highlights: Noise led to a reduced and delayed P3 response for speech discrimination. Noise had differential effects on consonant and vowel processing. P3 amplitude and associated cortical oscillations predicted speech intelligibility.
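Event-related spectral perturbation complements ITPC by tracking baseline-normalized power changes over time at a given frequency. The sketch below shows one common way such a measure might be computed, via complex Morlet wavelet convolution and dB-scaling against a pre-stimulus baseline; the wavelet parameters, baseline window, and simulated single-channel data are assumptions for illustration, not the analysis settings used in the study.

```python
import numpy as np

def morlet_power(epochs, fs, freq, n_cycles=6):
    """Single-frequency power over time via complex Morlet wavelet convolution.

    epochs : (n_trials, n_samples) single-channel EEG
    Returns (n_trials, n_samples) power.
    """
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    analytic = np.array([np.convolve(trial, wavelet, mode="same") for trial in epochs])
    return np.abs(analytic) ** 2

def ersp_db(epochs, fs, freq, times, baseline=(-0.2, 0.0)):
    """Event-related spectral perturbation: trial-averaged power in dB
    relative to the mean pre-stimulus baseline power."""
    power = morlet_power(epochs, fs, freq).mean(axis=0)
    base = power[(times >= baseline[0]) & (times <= baseline[1])].mean()
    return 10 * np.log10(power / base)

# Illustrative synthetic data: 80 trials at 500 Hz, epochs from -0.5 to 1.0 s.
rng = np.random.default_rng(4)
fs = 500
times = np.arange(-0.5, 1.0, 1 / fs)
trials = rng.normal(size=(80, times.size))
theta_ersp = ersp_db(trials, fs, freq=6.0, times=times)
print("Theta ERSP range (dB):", theta_ersp.min(), theta_ersp.max())
```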
Brain Sciences | 2016
Yang Zhang; Bing Cheng; Tess K. Koerner; Robert S. Schlauch; Keita Tanaka; Masaki Kawakatsu; Iku Nemoto; Toshiaki Imada
This magnetoencephalography (MEG) study investigated evoked ON and OFF responses to ramped and damped sounds in normal-hearing human adults. Two pairs of stimuli that differed in spectral complexity were used in a passive listening task; each pair contained identical acoustical properties except for the intensity envelope. Behavioral duration judgment was conducted in separate sessions, which replicated the perceptual bias in favour of the ramped sounds and the effect of spectral complexity on perceived duration asymmetry. MEG results showed similar cortical sites for the ON and OFF responses. There was a dominant ON response with stronger phase-locking factor (PLF) in the alpha (8–14 Hz) and theta (4–8 Hz) bands for the damped sounds. In contrast, the OFF response for sounds with rising intensity was associated with stronger PLF in the gamma band (30–70 Hz). Exploratory correlation analysis showed that the OFF response in the left auditory cortex was a good predictor of the perceived temporal asymmetry for the spectrally simpler pair. The results indicate distinct asymmetry in ON and OFF responses and neural oscillation patterns associated with the dynamic intensity changes, which provides important preliminary data for future studies to examine how the auditory system develops such an asymmetry as a function of age and learning experience and whether the absence of asymmetry or abnormal ON and OFF responses can be taken as a biomarker for certain neurological conditions associated with auditory processing deficits.
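The ramped/damped manipulation reduces to two stimuli that share a carrier and duration but have mirror-image intensity envelopes. The numpy sketch below shows how such a pair might be synthesized; the 1 kHz carrier, 500 ms duration, and exponential envelope time constant are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def ramped_damped_pair(freq=1000.0, duration=0.5, fs=44100, tau=0.1):
    """Return (ramped, damped) tones: identical carrier and duration,
    mirror-image exponential intensity envelopes."""
    t = np.arange(0, duration, 1 / fs)
    carrier = np.sin(2 * np.pi * freq * t)
    damped_env = np.exp(-t / tau)              # abrupt onset, gradual decay
    ramped_env = damped_env[::-1]              # gradual onset, abrupt offset
    return carrier * ramped_env, carrier * damped_env

ramped, damped = ramped_damped_pair()
# The two members share carrier, duration, and near-identical overall energy;
# only the temporal envelope differs.
print("RMS ramped:", np.sqrt(np.mean(ramped**2)),
      "RMS damped:", np.sqrt(np.mean(damped**2)))
```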
American Journal of Audiology | 2016
Dania Rishiq; Aparna Rao; Tess K. Koerner; Harvey B. Abrams
Purpose: The goal of this study was to determine whether hearing aids in combination with computer-based auditory training improve audiovisual (AV) performance compared with the use of hearing aids alone. Method: Twenty-four participants were randomized into an experimental group (hearing aids plus ReadMyQuips [RMQ] training) and a control group (hearing aids only). The Multimodal Lexical Sentence Test for Adults (Kirk et al., 2012) was used to measure auditory-only (AO) and AV speech perception performance at three signal-to-noise ratios (SNRs). Participants were tested at the time of hearing aid fitting (pretest), after 4 weeks of hearing aid use (posttest I), and again after 4 weeks of RMQ training (posttest II). Results: Results did not reveal an effect of training. As expected, interactions were found between (a) modality (AO vs. AV) and SNR and (b) test (pretest vs. posttests) and SNR. Conclusion: Data do not show a significant effect of RMQ training on AO or AV performance as measured using the Multimodal Lexical Sentence Test for Adults.
Journal of Speech Language and Hearing Research | 2015
Robert S. Schlauch; Tess K. Koerner; Lynne Marshall
Purpose: Four functional hearing loss protocols were evaluated. Method: For each protocol, 30 participants feigned a hearing loss first on an audiogram and then for a screening test that began a threshold search from extreme levels (-10 or 90 dB HL). Two-tone and 3-tone protocols compared thresholds for ascending and descending tones for 2 (0.5 and 1.0 kHz) and 3 (0.5, 1.0, and 2.0 kHz) frequencies, respectively. A noise-band protocol compared an ascending noise-band threshold with that for 2 descending tones (0.5 and 1.0 kHz). A spondee protocol compared an ascending spondee threshold with that for 2 descending tones (0.5 and 1.0 kHz). These measures were repeated without the participants feigning losses. Results: With nonfeigning participants, ascending and descending threshold differences were minimal for all protocols. When the participants feigned a loss, the spondee protocol produced the largest average threshold difference (30.8 dB), whereas the other protocols produced smaller differences (19.6-22.2 dB). Conclusions: Using both the screening test and a comparison of the initial audiogram with the screening test, the spondee and 3-tone protocols resulted in 100% true positives and 0% false positives for functional hearing loss. Either of these protocols could be used clinically or in occupational hearing conservation programs.
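All four protocols rest on the same decision logic: obtain one threshold with an ascending level search and one with a descending search, then flag an implausibly large discrepancy between them. The sketch below illustrates that rule; the 10 dB flag criterion is an assumed value chosen for illustration, not a cut-off reported in the study, and the example thresholds merely echo the magnitude of the reported average differences.

```python
def flag_functional_loss(ascending_db_hl, descending_db_hl, criterion_db=10.0):
    """Flag a possible functional (non-organic) hearing loss when the ascending-
    and descending-search thresholds disagree by more than a criterion.

    ascending_db_hl  : threshold (dB HL) from a search starting at a low level
    descending_db_hl : threshold (dB HL) from a search starting at a high level
    criterion_db     : illustrative discrepancy criterion, not the study's value
    Returns (flagged, difference_in_dB).
    """
    difference = abs(descending_db_hl - ascending_db_hl)
    return difference > criterion_db, difference

# Feigned losses produced roughly 20-30 dB average discrepancies in the study,
# while honest responses produced minimal ones.
print(flag_functional_loss(ascending_db_hl=35, descending_db_hl=65))   # (True, 30)
print(flag_functional_loss(ascending_db_hl=40, descending_db_hl=42))   # (False, 2)
```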
Journal of the Acoustical Society of America | 2013
Yang Zhang; Bing Cheng; Tess K. Koerner; Christine Cao; Edward Carney; Yue Wang
The ability to detect auditory-visual correspondence in speech is an early hallmark of typical language development. Infants are able to detect audiovisual mismatches for spoken vowels such as /a/ and /i/ as early as 4 months of age. While adult event-related potential (ERP) data have shown an N300 associated with the detection of audiovisual incongruency in speech, it remains unclear whether similar responses can be elicited in infants. The present study collected ERP data in congruent and incongruent audiovisual presentation conditions for /a/ and /i/ from 21 typically developing infants (6~11 months of age) and 12 normal adults (18~45 years). The adult data replicated the N300 in the parietal electrode sites for detecting audiovisual incongruency in speech, and minimum norm estimation (MNE) showed the primary neural generator in the left superior temporal cortex for the N300. Unlike the adults, the infants showed a later N400 response in the centro-frontal electrode sites, and scalp topography as well a...