Publication


Featured research published by Carla L. Youngdahl.


Journal of the Acoustical Society of America | 2013

Role and relative contribution of temporal envelope and fine structure cues in sentence recognition by normal-hearing listeners

Frédéric Apoux; Sarah E. Yoho; Carla L. Youngdahl; Eric W. Healy

The present study investigated the role and relative contribution of envelope and temporal fine structure (TFS) to sentence recognition in noise. Target and masker stimuli were added at five different signal-to-noise ratios (SNRs) and filtered into 30 contiguous frequency bands. The envelope and TFS were extracted from each band by Hilbert decomposition. The final stimuli consisted of the envelope of the target/masker sound mixture at x dB SNR and the TFS of the same sound mixture at y dB SNR. A first experiment showed a very limited contribution of TFS cues, indicating that sentence recognition in noise relies almost exclusively on temporal envelope cues. A second experiment showed that replacing the carrier of a sound mixture with noise (vocoder processing) cannot be considered equivalent to disrupting the TFS of the target signal by adding a background noise. Accordingly, a re-evaluation of the vocoder approach as a model to further understand the role of TFS cues in noisy situations may be necessary. Overall, these data are consistent with the view that speech information is primarily extracted from the envelope while TFS cues are primarily used to detect glimpses of the target.
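The envelope/TFS split described in this abstract can be sketched for a single band via the Hilbert analytic signal. This is a minimal illustration, not the study's actual processing chain (which used 30 contiguous bands and mixed envelope and TFS taken from target/masker mixtures at different SNRs); here an amplitude-modulated tone stands in for one filter band, and `scipy` is assumed:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tfs(x):
    """Split a band-limited signal into temporal envelope and temporal fine
    structure (TFS) using the Hilbert analytic signal."""
    analytic = hilbert(x)
    env = np.abs(analytic)            # slowly varying amplitude contour
    tfs = np.cos(np.angle(analytic))  # unit-amplitude fast carrier
    return env, tfs

fs = 16000
t = np.arange(fs) / fs
# A 1 kHz tone amplitude-modulated at 4 Hz, standing in for one filter band.
x = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

env, tfs = envelope_tfs(x)
# Multiplying envelope by TFS recovers the band: env * tfs is the real part
# of the analytic signal, i.e., the original waveform.
recombined = env * tfs
```

In chimera-style stimuli of the kind described above, the envelope of one band-filtered mixture would instead be paired with the TFS of a different mixture before the bands are summed.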


Journal of the Acoustical Society of America | 2013

Can envelope recovery account for speech recognition based on temporal fine structure?

Frédéric Apoux; Carla L. Youngdahl; Sarah E. Yoho; Eric W. Healy

Over the past decade, several studies have demonstrated that normal-hearing listeners can achieve high levels of speech recognition when presented with only the temporal fine structure (TFS) of speech stimuli. These findings were initially attributed to the auditory system’s ability to recover envelope information from the TFS (envelope recovery; ER). A number of studies have since shown decreasing ER with increasing numbers of analysis filters (the filters used to decompose the signal) while intelligibility from speech-TFS remains almost unaffected. Accordingly, it is now assumed that speech information is present in the TFS. A recent psychophysical study, however, showed that envelope information remains in the TFS after decomposition, suggesting a possible role of ER in speech-TFS understanding. The present study investigated this potential role. In contrast to previous work, a clear influence of analysis filter bandwidth on speech-TFS understanding was established....


Journal of the Acoustical Society of America | 2014

An algorithm to improve speech recognition in noise for hearing-impaired listeners: Consonant identification and articulatory feature transmission

Eric W. Healy; Sarah E. Yoho; Yuxuan Wang; Frédéric Apoux; Carla L. Youngdahl; DeLiang Wang

Previous work has shown that a supervised-learning algorithm estimating the ideal binary mask (IBM) can improve sentence intelligibility in noise for hearing-impaired (HI) listeners from scores below 30% to above 80% [Healy et al., J. Acoust. Soc. Am. 134 (2013)]. The algorithm generates a binary mask by using a deep neural network to classify speech-dominant and noise-dominant time-frequency units. In the current study, these results are extended to consonant recognition, in order to examine the specific speech cues responsible for the observed performance improvements. Consonant recognition in speech-shaped noise or babble was examined in normal-hearing and HI listeners in three conditions: unprocessed, noise removed via the IBM, and noise removed via the classification-based algorithm. The IBM demonstrated substantial performance improvements, averaging up to 45 percentage points. The algorithm also produced sizeable gains, averaging up to 34 percentage points. An information-transmission analysis of cues associated with ...
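The deep-network estimator used in this work is beyond a short sketch, but the oracle IBM it approximates is simple to state: keep each time-frequency unit whose local SNR exceeds a criterion, and zero the rest. A minimal sketch assuming `scipy`; the STFT frame size (`nperseg=256`) and the 0 dB local criterion here are illustrative choices, not the study's parameters:

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, lc_db=0.0):
    """Oracle IBM: keep T-F units whose local SNR exceeds lc_db, zero the rest,
    then resynthesize the masked speech+noise mixture."""
    _, _, S = stft(speech, fs, nperseg=256)
    _, _, N = stft(noise, fs, nperseg=256)
    # Local SNR per time-frequency unit (magnitudes, so 20*log10).
    snr_db = 20 * np.log10(np.abs(S) / (np.abs(N) + 1e-12) + 1e-12)
    mask = (snr_db > lc_db).astype(float)
    _, _, Y = stft(speech + noise, fs, nperseg=256)
    _, enhanced = istft(mask * Y, fs, nperseg=256)
    return mask, enhanced

fs = 16000
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)  # toy "speech" signal
noise = 0.3 * rng.standard_normal(fs)
mask, enhanced = ideal_binary_mask(speech, noise, fs)
```

The supervised algorithm in the paper replaces this oracle SNR comparison with a deep neural network that classifies each unit as speech-dominant or noise-dominant from the mixture alone.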


Journal of Speech Language and Hearing Research | 2018

The Effect of Remote Masking on the Reception of Speech by Young School-Age Children.

Carla L. Youngdahl; Eric W. Healy; Sarah E. Yoho; Frédéric Apoux; Rachael Frush Holt

Purpose: Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.

Method: Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.

Results: Speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.

Conclusions: In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.


Journal of the Acoustical Society of America | 2016

Effects of interleaved noise on speech recognition in children

Carla L. Youngdahl; Sarah E. Yoho; Rachael Frush Holt; Frédéric Apoux; Eric W. Healy

Normal-hearing adults can isolate frequency regions containing clean speech from surrounding regions containing noise. However, children have been shown to integrate information over a large number of auditory filters, and so they may not be able to isolate frequency regions as well. To assess children’s level of auditory filter independence, words were filtered into 30 contiguous 1-ERB-width bands. Speech was presented in every other band, for a total of 15 speech bands. Speech-shaped noise was then added to: all 30 contiguous bands, the 15 bands not containing speech (OFF), or the 15 bands containing speech (ON). Three age groups were tested: 10 adults, 9 older children (6–7 yr old), and 9 younger children (5 yr old). Consistent with previous findings involving consonant recognition, adults displayed large performance differences between off- and on-frequency noise (OFF vs. ON). The 6- to 7-yr-old group performed similarly to adults. In contrast, the 5-yr-old group displayed equivalent performance in th...
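The 30 contiguous 1-ERB-width bands described above can be laid out on the Glasberg and Moore ERB-number scale, with speech assigned to every other band. A minimal sketch of the band layout only (the 80 Hz lower edge is an assumption for illustration; the abstract fixes only the band count and width):

```python
import numpy as np

def erb_number(f_hz):
    """Glasberg & Moore (1990) frequency -> ERB-number (Cam) scale."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_to_hz(erbs):
    """Inverse mapping: ERB-number -> frequency in Hz."""
    return (10.0 ** (erbs / 21.4) - 1.0) * 1000.0 / 4.37

# 31 edges spaced 1 ERB apart -> 30 contiguous 1-ERB-wide bands.
low_edge_hz = 80.0  # assumed lower edge, for illustration only
edges_hz = erb_to_hz(erb_number(low_edge_hz) + np.arange(31))
bands = list(zip(edges_hz[:-1], edges_hz[1:]))

speech_bands = bands[::2]   # every other band carries speech (15 bands)
other_bands = bands[1::2]   # remaining 15 bands (noise-only in the OFF condition)
```

In the ON condition, speech-shaped noise would be added within `speech_bands`; in the OFF condition, within `other_bands`; in the all-bands condition, across both.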


Journal of the Acoustical Society of America | 2015

Effect of spectrally-remote maskers on sentence recognition by adults and children

Carla L. Youngdahl; Sarah E. Yoho; Rachael Frush Holt; Frédéric Apoux; Eric W. Healy

Adults display improved detection of a signal in noise when the spectral frequency of that signal is known, relative to when it is unknown. In contrast, infants do not display this improvement, suggesting that they monitor all frequencies equally, even when it is not advantageous to do so. To assess the impact of this “spectral attention” development during speech recognition, sentences in noise were lowpass filtered at 1500 Hz and presented along with spectrally remote low-noise noise maskers that produced no spectral overlap of peripheral excitation. As anticipated, sentence recognition by adults was not affected by the presence of remote maskers, irrespective of their bandwidth and spectral location. This result was also observed in a group of 7-year-old children. However, the youngest children tested (5-year-olds) displayed poorer sentence recognition in the presence of the remote maskers, suggesting that they were unable to focus attention on the spectral region of speech. The current results suggest...


Journal of the Acoustical Society of America | 2014

Dual-carrier vocoder: Evidence of a primary role of temporal fine structure in streaming

Frédéric Apoux; Carla L. Youngdahl; Sarah E. Yoho; Eric W. Healy

Thus far, two possible roles of temporal fine structure (TFS) have been suggested for speech recognition. A first role is to provide acoustic speech information. A second role is to assist in identifying which auditory channels are dominated by the target signal so that the output of these channels can be combined at a later stage to reconstruct the internal representation of that target. Our most recent work has been largely in contradiction with the speech-information hypothesis, as we generally observe that normal-hearing (NH) listeners do not rely on the TFS of the target speech signal to obtain speech information. However, direct evidence that NH listeners do rely on the TFS to extract the target speech signal from the background is still lacking. The present study was designed to provide such evidence. A dual-carrier vocoder was implemented to assess the role of TFS cues in streaming. To our knowledge, this is the only strategy allowing TFS cues to be provided without transmitting speech information...


Journal of the Acoustical Society of America | 2014

Evidence for independent time-unit processing of speech using noise promoting or suppressing masking release

Eric W. Healy; Carla L. Youngdahl; Frédéric Apoux

The relative independence of time-unit processing during speech reception was examined. It was found that temporally interpolated noise, even at very high levels, had little effect on sentence recognition using masking-release conditions similar to those of Kwon et al. [(2012). J. Acoust. Soc. Am. 131, 3111-3119]. The current data confirm the earlier conclusions of Kwon et al. involving masking release based on the relative timing of speech and noise. These data also indicate substantial levels of independence in the time domain, which has implications for current theories of speech perception in noise.


Journal of the Acoustical Society of America | 2013

Talker effects in speech band importance functions

Eric W. Healy; Sarah E. Yoho; Carla L. Youngdahl; Frédéric Apoux

The literature is somewhat mixed with regard to the influence of (a) the particular speech material (e.g., sentences or words) versus (b) the particular talker used to create the recordings, on band-importance function (BIF) shape. One possibility is that previous techniques for creating BIFs are not sensitive enough to reveal these influences. In the current investigation, the role of talkers was examined using the compound technique for creating BIFs. This technique was developed to account for the multitude of synergistic and redundant interactions that take place among various speech frequencies. The resulting functions display a complex microstructure, in which the importance of adjacent bands can differ substantially. It was found that the microstructure could be traced to acoustic aspects of the particular talkers employed. Further, BIFs for IEEE sentences based on ten-talker recordings displayed less microstructure and were therefore smoother than BIFs based on one such talker. These results toget...


Journal of the Acoustical Society of America | 2015

Dual-carrier processing to convey temporal fine structure cues: Implications for cochlear implants

Frédéric Apoux; Carla L. Youngdahl; Sarah E. Yoho; Eric W. Healy
