Jeremy L. Loebach
Indiana University
Publications
Featured research published by Jeremy L. Loebach.
Journal of the Acoustical Society of America | 2008
Jeremy L. Loebach; Tessa Bent; David B. Pisoni
A listener's ability to utilize indexical information in the speech signal can enhance performance on a variety of speech perception tasks. It is unclear, however, whether such information plays a similar role for spectrally reduced speech signals, such as those experienced by individuals with cochlear implants. The present study compared the effects of training on linguistic and indexical tasks when adapting to cochlear implant simulations. Listening to sentences processed with an eight-channel sinewave vocoder, three separate groups of subjects were trained on a transcription task (transcription), a talker identification task (talker ID), or a gender identification task (gender ID). Pre- to posttest comparisons demonstrated that training produced significant improvement for all groups. Moreover, subjects from the talker ID and transcription training groups performed similarly at posttest and generalization, and significantly better than subjects from the gender ID training group. These results suggest that training on an indexical task that requires high levels of controlled attention can provide benefits equivalent to training on a linguistic task. When listeners selectively focus their attention on the extralinguistic information in the speech signal, they still extract linguistic information; the degree to which they do so, however, appears to be task dependent.
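For readers unfamiliar with the processing used throughout these studies, the sketch below outlines a generic eight-channel sinewave vocoder in Python. The band edges (200 Hz to 7 kHz, log-spaced), filter orders, and 50 Hz envelope smoothing are illustrative assumptions; the exact parameters used in the studies are not given in these abstracts.

```python
# Minimal sketch of an eight-channel sinewave vocoder (parameters assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def sinewave_vocode(signal, fs, n_channels=8, lo=200.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)     # log-spaced band edges
    t = np.arange(len(signal)) / fs
    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)              # isolate one channel
        env = np.abs(hilbert(band))                  # amplitude envelope
        env = sosfiltfilt(butter(2, 50.0, fs=fs, output="sos"), env)  # smooth
        fc = np.sqrt(low * high)                     # geometric center frequency
        out += env * np.sin(2 * np.pi * fc * t)      # envelope-modulated sinusoid
    return out / np.max(np.abs(out))                 # normalize amplitude
```

The key design point is that each channel discards spectral fine structure and keeps only the slowly varying envelope, which is what a cochlear implant processor delivers to each electrode.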
Journal of Experimental Psychology: Human Perception and Performance | 2010
Jeremy L. Loebach; David B. Pisoni; Mario A. Svirsky
The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic representation. Transfer-appropriate processing theory suggests that such feedback was most successful because the original learning conditions were reinstated at testing: performance was facilitated when both training and testing contained degraded stimuli. In addition, the effect of semantic context on generalization was assessed by training listeners on meaningful or anomalous sentences. Training with anomalous sentences was as effective as that with meaningful sentences, suggesting that listeners were encouraged to use acoustic-phonetic information to identify speech rather than to make predictions from semantic context.
Journal of the Acoustical Society of America | 2018
Damian Almaraz; Elizabeth Een; Elaine Grafelman; Justin Pacholec; Yadi Quintanilla; Paula Rodriguez; Jeremy L. Loebach
Although cochlear implants have been demonstrated to be effective surgical treatments for deafness, new implant users must undergo an intense period of perceptual learning and adaptation to learn to hear with their prosthesis. Few adult cochlear implant users receive any formal training following implantation, and as a result, the perceptual skills that they develop are highly variable. This study assessed the efficacy of HiVOlT-CI, a high-variability online training program for adult cochlear implant users, in 31 normal-hearing individuals listening to cochlear implant simulations and in three adult cochlear implant users. The goal of the program is to provide empirically based, adaptive, and interactive perceptual training to new cochlear implant users to help them adjust to their prosthesis. We also aim to develop a common set of robust and adaptive cognitive auditory skills in cochlear implant users that will help them to hear in real world situa...
Journal of the Acoustical Society of America | 2014
Rachel E. Bash; Brandon J. Cash; Jeremy L. Loebach
The perception of environmental stimuli was compared across normal-hearing (NH) listeners exposed to an eight-channel sinewave vocoder and experienced bilateral, unilateral, and bimodal cochlear implant (CI) users. Three groups of NH listeners underwent no training (control), one day of training with environmental stimuli (exposure), or four days of training with a variety of speech and environmental stimuli (experimental). A significant effect of training was observed: the experimental group performed significantly better than the exposure and control groups, on par with bilateral CI users, but worse than bimodal users. Participants were divided into low-, medium-, and high-performing groups using a two-step cluster algorithm (see the sketch below). High-performing members were observed only in the CI and experimental conditions, and significantly more low-performing members were observed in the exposure and control conditions, demonstrating the effectiveness of training. A detailed item analysis revealed that the most accurately identified sounds were often temporal in nature or contained iconic repeating patterns (e.g., a horse galloping). Easily identified stimuli were common across all groups, with experimental subjects identifying more short or spectrally driven stimuli, and CI users identifying more animal vocalizations. These data demonstrate that explicit training in identifying environmental stimuli improves sound perception and could be beneficial for new CI users.
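The grouping step above used a two-step cluster algorithm (e.g., the SPSS TwoStep procedure). The sketch below substitutes scikit-learn's k-means as a stand-in to show the idea of partitioning participants by overall accuracy; the scores are hypothetical placeholders, not the study's data.

```python
# Illustrative stand-in for clustering participants into low/medium/high
# performers; k-means replaces the TwoStep procedure used in the study.
import numpy as np
from sklearn.cluster import KMeans

scores = np.array([0.32, 0.41, 0.38, 0.55, 0.61, 0.58, 0.79, 0.84, 0.88])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores.reshape(-1, 1))

# Relabel clusters by mean score so 0 = low, 1 = medium, 2 = high performers.
order = np.argsort(km.cluster_centers_.ravel())   # rank -> cluster index
groups = np.argsort(order)[km.labels_]            # cluster index -> rank
print(groups)  # e.g., [0 0 0 1 1 1 2 2 2]
```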
Journal of the Acoustical Society of America | 2014
Jeremy L. Loebach; Gina Scharenbroch; Katelyn Berg
Talker intelligibility was compared across clear and sinewave-vocoded speech. Ten talkers (5 female) from the Midwest and Western dialect regions recorded samples of 210 meaningful IEEE sentences, 206 semantically anomalous sentences, and 300 MRT words. Ninety-three normal-hearing participants provided open-set transcriptions of the materials presented in the clear over headphones. Forty-one different normal-hearing participants provided open-set transcriptions of the materials processed with an eight-channel sinewave vocoder. Transcription accuracy was higher for clear speech than for vocoded speech, and was highest for meaningful sentences, followed by anomalous sentences and words, in both conditions. Weak talker effects were observed for the meaningful sentences in the clear (97.7%–98.2%) but were more pronounced for vocoded versions (68.5%–85.5%). Weak talker effects were observed for semantically anomalous sentences in the clear (89.4%–93.3%), but more variability was observed across talkers in the vocoded condition (54.4%–73.7%). Finally, stronger talker effects were observed for clear and vocoded MRT words (83.8%–95.6% and 46.3%–59.0%, respectively). Talker rankings differed across stimulus conditions as well as across processing conditions, but significant positive correlations between processing conditions were observed for meaningful and anomalous sentences, though not for MRT words. Acoustic and dialect influences on intelligibility will be discussed.
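A minimal sketch of the cross-condition comparison reported above: a Pearson correlation between per-talker intelligibility in the clear and vocoded conditions. The ten scores below are hypothetical values within the reported ranges, not the study's data.

```python
# Correlating per-talker intelligibility across processing conditions
# (placeholder scores; a significant positive r indicates that talker
# rankings carry over from clear to vocoded speech).
import numpy as np
from scipy.stats import pearsonr

clear   = np.array([97.7, 97.9, 98.0, 98.2, 97.8, 98.1, 97.9, 98.0, 97.8, 98.1])
vocoded = np.array([68.5, 72.0, 80.3, 85.5, 70.1, 78.4, 74.9, 82.2, 69.8, 76.5])
r, p = pearsonr(clear, vocoded)
print(f"r = {r:.2f}, p = {p:.3f}")
```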
Journal of the Acoustical Society of America | 2009
Tessa Bent; Jeremy L. Loebach; Lawrence Phillips; David B. Pisoni
Listeners rapidly adapt to many forms of degraded speech. What level of information drives this adaptation (e.g., acoustic, phonetic, lexical, syntactic), however, remains an open question. In the present study, three groups of listeners were passively exposed to sinewave-vocoded speech in one of three languages (English, German, or Mandarin) to manipulate the level(s) of information shared between the training languages and the testing language (English). Two additional groups were also included to control for procedural learning effects. One control group received no training, while the other control group was trained with spectrally rotated English materials. After training, all listeners transcribed eight-channel sinewave-vocoded English sentences. The results demonstrated that listeners exposed to German materials performed equivalently to listeners exposed to English materials. However, listeners exposed to Mandarin materials showed an intermediate level of performance; their scores were not significant...
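Spectral rotation, used for the second control group, mirrors the speech spectrum about a fixed frequency so the signal keeps speech-like complexity while becoming unintelligible. Below is a minimal Python sketch; the 4 kHz rotation point and filter order are assumptions, as the abstract does not specify them.

```python
# Minimal sketch of spectral rotation: low-pass at the rotation frequency,
# ring-modulate with a sinusoid at that frequency (mapping f -> f_rot - f),
# then low-pass again to remove the upper (f_rot + f) images.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectrally_rotate(signal, fs, f_rot=4000.0):
    lp = butter(8, f_rot, fs=fs, output="sos")     # keep energy below f_rot
    band = sosfiltfilt(lp, signal)
    t = np.arange(len(signal)) / fs
    mixed = band * np.cos(2 * np.pi * f_rot * t)   # mirrors the spectrum
    return sosfiltfilt(lp, mixed)                  # discard upper images
```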
Journal of the Acoustical Society of America | 2009
Jeremy L. Loebach
Adaptation to the acoustic world after cochlear implantation requires significant adjustment to degraded auditory signals. The cognitive mechanisms underlying the mapping of such signals onto preexisting internal representations (postlingually deafened individuals), or the formation of novel internal representations (prelingually deafened individuals), are not well understood. Therefore, understanding the mechanisms of perceptual learning is critical to providing efficient training and (re)habilitation for new cochlear implant users. The advent of noise and sinewave vocoders to model the output of a cochlear implant speech processor has increased the tools available to investigate perceptual learning of speech in normal-hearing listeners. A fundamental question is whether training should focus exclusively on speech perception (synthetic approach), or whether training on extralinguistic or nonspeech auditory information (analytic approach) promotes more robust perceptual learning and generalization to nove...
Journal of the Acoustical Society of America | 2001
Jeremy L. Loebach; Robert E. Wickesberg
Sinewave speech is a synthetic variation of natural speech that replaces the natural formants with time-varying sinusoids. This synthesis preserves the general formant motion of the natural utterance while removing many spectral details; yet sinewave speech tokens can convey a linguistic message [Remez et al., Science 212, 947–950 (1981)]. This study examined the extent to which phonetic information in sinusoidal speech is preserved at the level of the auditory periphery. We compared the responses of individual auditory nerve fibers in ketamine-anesthetized chinchillas to the natural and sinusoidal tokens of the word /bal/. Comparisons over the first 100 milliseconds of the responses revealed high correlations during the first 20 milliseconds, representing the initial consonant (r = 0.82), despite low correlations between the stimuli themselves (r = 0.18). Correlation coefficients decreased in the vowel portion of the responses (r = 0.30). Spectral reduction in the sinusoidal token may produce the low...
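A minimal sketch of the windowed comparison described above: Pearson correlations between two peristimulus time histograms (PSTHs) over the consonant window (first 20 ms) and a later vowel window. The 1 ms bin width and the Poisson-random PSTHs are placeholders, not recorded auditory nerve data.

```python
# Windowed correlation of two PSTHs (placeholder data, assumed 1-ms bins).
import numpy as np
from scipy.stats import pearsonr

bin_ms = 1.0
rng = np.random.default_rng(0)
psth_natural = rng.poisson(5, 100).astype(float)   # 100 ms of response
psth_sinewave = rng.poisson(5, 100).astype(float)

onset = slice(0, int(20 / bin_ms))                 # consonant portion, 0-20 ms
vowel = slice(int(30 / bin_ms), int(100 / bin_ms)) # later vowel portion
r_onset, _ = pearsonr(psth_natural[onset], psth_sinewave[onset])
r_vowel, _ = pearsonr(psth_natural[vowel], psth_sinewave[vowel])
print(f"onset r = {r_onset:.2f}, vowel r = {r_vowel:.2f}")
```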
Archive | 2012
Jeremy L. Loebach; Nicholas Altieri; David B. Pisoni
Archive | 2012
Jeremy L. Loebach; Jane Burton; Brianna Sennott; Sarah Phillips; Carly Stork