Publication


Featured research published by Olaf Strelcyk.


Journal of the Acoustical Society of America | 2009

Relations between frequency selectivity, temporal fine-structure processing, and speech reception in impaired hearing.

Olaf Strelcyk; Torsten Dau

Frequency selectivity, temporal fine-structure (TFS) processing, and speech reception were assessed for six normal-hearing (NH) listeners, ten sensorineurally hearing-impaired (HI) listeners with similar high-frequency losses, and two listeners with an obscure dysfunction (OD). TFS processing was investigated at low frequencies in regions of normal hearing, through measurements of binaural masked detection, tone lateralization, and monaural frequency modulation (FM) detection. Lateralization and FM detection thresholds were measured in quiet and in background noise. Speech reception thresholds were obtained for full-spectrum and lowpass-filtered sentences with different interferers. Both the HI listeners and the OD listeners showed poorer performance than the NH listeners in terms of frequency selectivity, TFS processing, and speech reception. While a correlation was observed between the monaural and binaural TFS-processing deficits in the HI listeners, no relation was found between TFS processing and frequency selectivity. The effect of noise on TFS processing was not larger for the HI listeners than for the NH listeners. Finally, TFS-processing performance was correlated with speech reception in a two-talker background and lateralized noise, but not in amplitude-modulated noise. The results provide constraints for future models of impaired auditory signal processing.


Journal of the Acoustical Society of America | 2009

Relation between derived-band auditory brainstem response latencies and behavioral frequency selectivity

Olaf Strelcyk; Dimitrios Christoforidis; Torsten Dau

Derived-band click-evoked auditory brainstem responses (ABRs) were obtained for normal-hearing (NH) and sensorineurally hearing-impaired (HI) listeners. The latencies extracted from these responses, as a function of derived-band center frequency and click level, served as objective estimates of cochlear response times. For the same listeners, auditory-filter bandwidths at 2 kHz were estimated using a behavioral notched-noise masking paradigm. Generally, shorter derived-band latencies were observed for the HI than for the NH listeners. Only at low click sensation levels were prolonged latencies obtained for some of the HI listeners. The behavioral auditory-filter bandwidths accounted for the across-listener variability in the ABR latencies: Cochlear response time decreased with increasing filter bandwidth, consistent with linear-system theory. The results link cochlear response time and frequency selectivity in human listeners and offer a window to better understand how hearing impairment affects the spatiotemporal cochlear response pattern.
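The linear-system relation invoked above (response time decreasing with increasing filter bandwidth) can be illustrated with a simple sketch: a narrower bandpass filter rings longer, so its envelope peaks later. The snippet below computes the envelope peak latency of a fourth-order gammatone-like impulse response for two hypothetical bandwidths; the bandwidth values and sampling rate are purely illustrative, not taken from the study.

```python
import numpy as np

def gammatone_peak_latency(bw_hz, fs=16000, order=4):
    """Peak latency (s) of a gammatone-like filter envelope.

    The envelope t**(n-1) * exp(-2*pi*b*t) peaks at t = (n-1) / (2*pi*b),
    so the latency scales inversely with the bandwidth parameter b.
    """
    t = np.arange(int(0.1 * fs)) / fs
    env = t ** (order - 1) * np.exp(-2 * np.pi * bw_hz * t)
    return t[np.argmax(env)]

# Narrower filter -> longer response time; doubling the bandwidth
# halves the envelope peak latency (inverse relation).
narrow = gammatone_peak_latency(bw_hz=120)
wide = gammatone_peak_latency(bw_hz=240)
```

Halving the bandwidth doubles the peak latency, consistent with the inverse relationship reported for the ABR latencies and filter bandwidths.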


Hearing Research | 2011

Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise.

Alexandra Papakonstantinou; Olaf Strelcyk; Torsten Dau

This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity as reflected in the audiogram nor with the AM detection thresholds which represent an envelope-based measure of temporal resolution; (2) SRTs were correlated with frequency discrimination and binaural masked detection which are associated with temporal fine-structure coding; (3) The wave-V thresholds for the chirp-evoked ABRs indicated a relation to SRTs and the ability to process temporal fine structure. Overall, the results demonstrate the importance of low-frequency temporal processing for speech reception which can be affected even if pure-tone sensitivity is close to normal.


Journal of the Acoustical Society of America | 2009

Estimation of cochlear response times using lateralization of frequency-mismatched tones

Olaf Strelcyk; Torsten Dau

Behavioral and objective estimates of cochlear response times (CRTs) and traveling-wave (TW) velocity were compared for three normal-hearing listeners. Differences between frequency-specific CRTs were estimated via lateralization of pulsed tones that were interaurally mismatched in frequency, similar to a paradigm proposed by Zerlin [(1969). J. Acoust. Soc. Am. 46, 1011-1015]. In addition, derived-band auditory brainstem responses were obtained as a function of derived-band center frequency. The latencies extracted from these responses served as objective estimates of CRTs. Estimates of TW velocity were calculated from the obtained CRTs. The correspondence between behavioral and objective estimates of CRT and TW velocity was examined. For frequencies up to 1.5 kHz, the behavioral method yielded reproducible results, which were consistent with the objective estimates. For higher frequencies, CRT differences could not be estimated with the behavioral method due to limitations of the lateralization paradigm. The method might be useful for studying the spatiotemporal cochlear response pattern in human listeners.


Journal of the Acoustical Society of America | 2012

Restoration of loudness summation and differential loudness growth in hearing-impaired listeners

Olaf Strelcyk; Nazanin Nooraei; Sridhar Kalluri; Brent Edwards

When normal-hearing (NH) listeners compare the loudness of narrowband and wideband sounds presented at identical sound pressure levels, the wideband sound will most often be perceived as louder than the narrowband sound, a phenomenon referred to as loudness summation. Hearing-impaired (HI) listeners typically show less-than-normal loudness summation, due to reduced cochlear compressive gain and degraded frequency selectivity. In the present study, loudness summation at 1 and 3 kHz was estimated monaurally for five NH and eight HI listeners by matching the loudness of narrowband and wideband noise stimuli. The loudness summation was measured as a function both of noise bandwidth and level. The HI listeners were tested unaided and aided using three different compression systems to investigate the possibility of restoring loudness summation in these listeners. A compression system employing level-dependent compression channels yielded the most promising outcome. The present results inform the development of future loudness models and advanced compensation strategies for the hearing impaired.


Journal of the Acoustical Society of America | 2013

Multichannel compression hearing aids: Effect of channel bandwidth on consonant and vowel identification by hearing-impaired listeners

Olaf Strelcyk; Ning Li; Joyce Rodríguez; Sridhar Kalluri; Brent Edwards

Aided consonant and vowel identification was measured in 13 listeners with high-frequency sloping hearing losses. To investigate the influence of compression-channel analysis bandwidth on identification performance independent of the number of channels, performance was compared for three 17-channel compression systems that differed only in terms of their channel bandwidths. One compressor had narrow channels, one had widely overlapping channels, and the third had level-dependent channels. Measurements were done in quiet, in speech-shaped noise, and in a three-talker background. The results showed no effect of channel bandwidth on either consonant or vowel identification scores. This suggests that channel bandwidth per se has little influence on speech intelligibility when individually prescribed, frequency-varying compressive gain is provided.


Journal of the Acoustical Society of America | 2014

Effects of interferer facing orientation on speech perception by normal-hearing and hearing-impaired listeners

Olaf Strelcyk; Shareka Pentony; Sridhar Kalluri; Brent Edwards

There exist perceptible differences between sound emanating from a talker who faces and a talker who does not face a listener: Sound from a non-facing talker is attenuated and acquires a spectral tilt. The present study assessed the role that these facing-orientation cues play for speech perception. Digit identification for a frontal target talker in the presence of two spatially separated interfering talkers was measured for 10 normal-hearing (NH) and 11 hearing-impaired (HI) listeners. Overall-level differences and spectral tilts were reproduced by means of digital filtering and playback via loudspeakers. Both NH and HI listeners performed significantly better when the interfering talkers were simulated not to be facing them. Spectral tilts and level differences across talkers reduced target-interferer confusions. They enabled the NH listeners to sequentially stream the digits. This was not the case for the HI listeners, who showed smaller benefits, irrespective of whether they were aided by their own hearing aids or not. While hearing-aid amplification increased audibility, it may not have aided target-interferer segregation or target selection. The present results suggest that facing orientation cannot be neglected in the exploration of speech perception in multitalker situations.


Trends in hearing | 2018

Effects of slow- and fast-acting compression on hearing impaired listeners’ consonant-vowel identification in interrupted noise

Borys Kowalewski; Johannes Zaar; Michal Fereczkowski; Ewen N. MacDonald; Olaf Strelcyk; Tobias May; Torsten Dau

There is conflicting evidence about the relative benefit of slow- and fast-acting compression for speech intelligibility. It has been hypothesized that fast-acting compression improves audibility at low signal-to-noise ratios (SNRs) but may distort the speech envelope at higher SNRs. The present study investigated the effects of compression with a nearly instantaneous attack time but either fast (10 ms) or slow (500 ms) release times on consonant identification in hearing-impaired listeners. Consonant–vowel speech tokens were presented at a range of presentation levels in two conditions: in the presence of interrupted noise and in quiet (with the compressor “shadow-controlled” by the corresponding mixture of speech and noise). These conditions were chosen to disentangle the effects of consonant audibility and noise-induced forward masking on speech intelligibility. A small but systematic intelligibility benefit of fast-acting compression was found in both the quiet and the noisy conditions for the lower speech levels. No detrimental effects of fast-acting compression were observed when the speech level exceeded the level of the noise. These findings suggest that fast-acting compression provides an audibility benefit in fluctuating interferers when compared with slow-acting compression while not substantially affecting the perception of consonants at higher SNRs.
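The fast/slow release distinction above can be sketched with a toy dynamic-range compressor: a near-instantaneous attack with a one-pole release smoother. This is an illustrative simplification, not the study's signal processing; the threshold and ratio values are hypothetical. With a 10 ms release the gain recovers quickly in the gaps of an interrupted masker, whereas a 500 ms release holds the gain reduction across the gaps.

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, release_ms=10.0):
    """Toy compressor: near-instantaneous attack, one-pole release.

    Gain above threshold is reduced by the factor (1 - 1/ratio);
    all parameter values are illustrative only.
    """
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        # instantaneous attack: envelope jumps up immediately,
        # otherwise it decays with the release time constant
        env = mag if mag > env else rel * env + (1 - rel) * mag
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        y[i] = s * 10 ** (-over * (1 - 1 / ratio) / 20)
    return y

# Loud burst followed by a quiet segment: the fast release restores
# full gain for the quiet part, the slow release keeps it attenuated.
fs = 16000
x = np.concatenate([np.ones(800), 0.01 * np.ones(3200)])
y_fast = compress(x, fs, release_ms=10.0)
y_slow = compress(x, fs, release_ms=500.0)
```

In this sketch the quiet tail after the burst comes out louder under the fast release, mirroring the audibility benefit of fast-acting compression at low levels described above.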


Journal of the Acoustical Society of America | 2017

Sequential streaming of speech sounds under normal and impaired hearing

Marion David; Olaf Strelcyk; Andrew J. Oxenham

Segregating and understanding speech within complex auditory scenes remains a major challenge for hearing-impaired (HI) listeners. This study compared the ability of normal-hearing (NH) listeners and HI listeners with mild-to-moderate loss to segregate sequences of speech tokens, consisting of an unvoiced fricative consonant and a vowel (CV), based on a difference in fundamental frequency (F0) and/or vocal tract length (VTL). The CVs were amplified and spectrally shaped to ensure audibility for the HI listeners. In the streaming task, the CV tokens were concatenated into sequences that alternated in F0, VTL or both. The resulting interleaved sequences were preceded by a “word” consisting of two random syllables. The listeners were asked to indicate whether the word (which varied from trial to trial) was present in the interleaved sequence. The word, if present, occurred either within one sequence or across the alternating sequences. Preliminary results showed no difference in performance between the two groups, suggesting that the listeners with mild-to-moderate sensorineural hearing loss are able to use differences in F0 and VTL to segregate speech sounds in situations where there is no temporal overlap between the competing sounds.


Archive | 2010

Objective and Behavioral Estimates of Cochlear Response Times in Normal-Hearing and Hearing-Impaired Human Listeners

Olaf Strelcyk; Torsten Dau

Derived-band auditory brainstem responses (ABRs) were obtained in 5 normal-hearing and 12 sensorineurally hearing-impaired listeners. The latencies extracted from these responses as a function of the derived-band center frequency served as objective estimates of cochlear response times. In addition, two behavioral measurements were carried out. In the first experiment, differences between frequency-specific cochlear response times were estimated using the lateralization of pulsed tones, interaurally mismatched in frequency. In the second experiment, auditory-filter bandwidths were estimated using a notched-noise masking paradigm. The correspondence between objective and behavioral estimates of cochlear response times was examined. An inverse relationship between the ABR latencies and the filter bandwidths could be demonstrated, owing to the larger across-listener variability among the hearing-impaired listeners compared to the normal-hearing listeners. The results might be useful for a better understanding of how hearing impairment affects the spatiotemporal cochlear response pattern in human listeners.
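The notched-noise masking paradigm mentioned in several of the abstracts above estimates auditory-filter bandwidth from the detection of a tone in noise whose spectrum has a notch of varying width around the tone frequency. A minimal sketch of generating such a masker is shown below; the FFT-bin zeroing is an illustrative shortcut, not the stimulus generation used in the studies, and the notch edges and sampling rate are hypothetical values.

```python
import numpy as np

def notched_noise(fs=16000, dur=0.5, notch=(1800.0, 2200.0), seed=0):
    """Gaussian noise with a spectral notch around a probe frequency.

    The notch is carved by zeroing FFT bins - an illustrative
    shortcut, not the studies' actual stimulus generation.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[(freqs >= notch[0]) & (freqs <= notch[1])] = 0.0
    return np.fft.irfft(spec, n)

masker = notched_noise()  # a 2-kHz probe tone would sit inside the notch
```

Widening the notch releases the probe tone from masking; the rate of that release with notch width is what constrains the estimated filter bandwidth.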

Collaboration


Olaf Strelcyk's top co-authors and their affiliations.

Top Co-Authors

Torsten Dau (Technical University of Denmark)
Brent Edwards (University of California)
Borys Kowalewski (Technical University of Denmark)
Ewen N. MacDonald (Technical University of Denmark)
Michal Fereczkowski (Technical University of Denmark)
Johannes Zaar (Technical University of Denmark)
Tobias May (University of Copenhagen)
Marion David (University of Minnesota)