
Publications


Featured research published by Bernhard Laback.


Ear and Hearing | 2004

Sensitivity to interaural level and envelope time differences of two bilateral cochlear implant listeners using clinical sound processors.

Bernhard Laback; Stefan-Marcel Pok; Wolf-Dieter Baumgartner; Werner A. Deutsch; Karin Schmid

Objectives: To assess the sensitivity of two bilateral cochlear implant users to interaural level and time differences (ILDs and ITDs) for various signals presented through the auxiliary inputs of clinical sound processors that discard fine timing information and only preserve envelope cues. Design: In a lateralization discrimination experiment, the just noticeable difference (JND) for ILDs and envelope ITDs was measured by means of an adaptive 2-AFC method. Different stimuli were used, including click trains at varying repetition rates, a speech fragment, and noise bursts. For one cochlear implant listener and one stimulus, the sensitivity to envelope ITDs was also determined with the method of constant stimuli. The dependency of ILD-JNDs on the interaural place difference was studied with stimulation at single electrode pairs by using sinusoidal input signals in combination with appropriate single-channel processor fittings. In a lateralization position experiment, subjects were required to use a visual pointer on a computer screen to indicate in-the-head positions for blocks of stimuli containing either ILD or ITD cues. All stimuli were loudness balanced (before applying ILD) and fed directly into the auxiliary inputs of the BTE processors (TEMPO+, Med-El Corp.). The automatic gain control and the processors’ microphones were deactivated. Results: Both cochlear implant listeners were highly sensitive to ILDs in all broadband stimuli used; JNDs approached those of normal-hearing listeners. Pitch-matched single electrode pairs showed significantly lower ILD-JNDs than pitch-mismatched electrode pairs. Envelope ITD-JNDs of cochlear implant listeners obtained with the adaptive method were substantially higher and showed a higher test-retest variability than waveform ITD-JNDs of normal-hearing control listeners and envelope ITD-JNDs of normal-hearing listeners reported in the literature for comparable signals. 
The envelope ITD-JNDs for the click trains were significantly lower than for the speech token or the noise bursts. The best envelope ITD-JND measured was ca. 250 μs for the click train at 100 cycles per sec. Direct measurement of the psychometric function for envelope ITD by the method of constant stimuli showed discrimination above chance level down to 150 μs. The lateralization position experiment showed that both ILDs and envelope ITDs can lead to monotonic changes in lateral percept. Conclusions: The two cochlear implant users tested showed strong effects of ILDs in various broadband stimuli with respect to JNDs as well as lateralization position. The high dependency of ILD-JNDs on the interaural pitch difference suggests the potential importance of pitch-matched assignment of electrodes in the two ears by the speech processors. Envelope ITDs appear to be more ambiguous cues than ILDs, as reflected by the higher and more variable JNDs compared with normal-hearing listeners. The envelope ITD-JNDs of cochlear implant listeners depended on the stimulus.
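The adaptive 2-AFC tracking used to estimate the JNDs can be sketched as a transformed up-down staircase. The 2-down/1-up rule, the step-size schedule, and the reversal averaging below are illustrative assumptions; the abstract does not specify the exact tracking parameters.

```python
def adaptive_jnd(respond, start_us=1000.0, factor=2.0,
                 min_factor=2 ** 0.5, n_reversals=8):
    """2-down/1-up adaptive staircase; converges near 70.7% correct.

    respond(itd_us) -> True if the listener chose the correct interval
    in a 2-AFC trial with the given ITD (microseconds).
    Returns the geometric mean of the last six reversal points.
    """
    itd = start_us
    streak = 0
    direction = -1                      # -1: making the task harder
    reversals = []
    while len(reversals) < n_reversals:
        if respond(itd):
            streak += 1
            if streak == 2:             # two correct in a row -> harder
                streak = 0
                if direction == +1:     # direction change = reversal
                    reversals.append(itd)
                    factor = max(min_factor, factor ** 0.5)
                direction = -1
                itd /= factor
        else:                           # one wrong answer -> easier
            streak = 0
            if direction == -1:
                reversals.append(itd)
                factor = max(min_factor, factor ** 0.5)
            direction = +1
            itd *= factor
    tail = reversals[-6:]
    prod = 1.0
    for r in tail:
        prod *= r
    return prod ** (1.0 / len(tail))
```

With a deterministic listener whose threshold is 300 μs, the track settles into oscillation around that value and the reversal average lands close to it.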


Journal of the Acoustical Society of America | 2006

Effects of interaural time differences in fine structure and envelope on lateral discrimination in electric hearing.

Piotr Majdak; Bernhard Laback; Wolf-Dieter Baumgartner

Bilateral cochlear implant (CI) listeners currently use stimulation strategies which encode interaural time differences (ITD) in the temporal envelope but which do not transmit ITD in the fine structure, due to the constant phase in the electric pulse train. To determine the utility of encoding ITD in the fine structure, ITD-based lateralization was investigated with four CI listeners and four normal hearing (NH) subjects listening to a simulation of electric stimulation. Lateralization discrimination was tested at different pulse rates for various combinations of independently controlled fine structure ITD and envelope ITD. Results for electric hearing show that the fine structure ITD had the strongest impact on lateralization at lower pulse rates, with significant effects for pulse rates up to 800 pulses per second. At higher pulse rates, lateralization discrimination depended solely on the envelope ITD. The data suggest that bilateral CI listeners benefit from transmitting fine structure ITD at lower pulse rates. However, there were strong interindividual differences: the better performing CI listeners performed comparably to the NH listeners.
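The key manipulation, independent control of fine structure ITD and envelope ITD, can be illustrated with a minimal pulse-train generator: the fine-structure ITD shifts the pulse times, while the envelope ITD shifts the amplitude envelope. The Hann envelope and all parameter values are hypothetical illustrations, not the stimuli actually used.

```python
import numpy as np

def binaural_train(fs, rate_pps, dur_s, fine_itd_s, env_itd_s):
    """Left/right pulse trains with independently set fine-structure
    and envelope ITD (sketch). Returns (left, right) sample arrays."""
    def channel(fine_delay, env_delay):
        n = int(dur_s * fs)
        sig = np.zeros(n)
        period = fs / rate_pps
        for k in range(int(dur_s * rate_pps)):
            idx = int(round(k * period + fine_delay * fs))  # shifted pulse time
            if 0 <= idx < n:
                sig[idx] = 1.0
        # amplitude envelope (Hann), shifted independently of the pulses
        env = np.interp(np.arange(n) - env_delay * fs,
                        np.arange(n), np.hanning(n))
        return sig * env
    return channel(0.0, 0.0), channel(fine_itd_s, env_itd_s)
```

Setting `env_itd_s = 0` while varying `fine_itd_s` (or vice versa) reproduces the kind of cue isolation the experiment relies on.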


Journal of the Acoustical Society of America | 2007

Lateralization discrimination of interaural time delays in four-pulse sequences in electric and acoustic hearing.

Bernhard Laback; Piotr Majdak; Wolf-Dieter Baumgartner

This study examined the sensitivity of four cochlear implant (CI) listeners to interaural time difference (ITD) in different portions of four-pulse sequences in lateralization discrimination. ITD was present either in all the pulses (referred to as condition Wave), the two middle pulses (Ongoing), the first pulse (Onset), the last pulse (Offset), or both the first and last pulse (Gating). All ITD conditions were tested at different pulse rates (100, 200, 400, and 800 pulses per second, pps). Also, five normal hearing (NH) subjects were tested, listening to an acoustic simulation of CI stimulation. All CI and NH listeners were sensitive in condition Gating at all pulse rates for which they showed sensitivity in condition Wave. The sensitivity in condition Onset increased with the pulse rate for three CI listeners as well as for all NH listeners. The performance in condition Ongoing varied over the subjects. One CI listener showed sensitivity up to 800 pps, two up to 400 pps, and one at 100 pps only. The group of NH listeners showed sensitivity up to 200 pps. The result that CI listeners detect ITD from the middle pulses of short trains indicates the relevance of fine timing of stimulation pulses in lateralization and therefore in CI stimulation strategies.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Binaural jitter improves interaural time-difference sensitivity of cochlear implantees at high pulse rates.

Bernhard Laback; Piotr Majdak

Interaural time difference (ITD) arises whenever a sound outside of the median plane arrives at the two ears. There is evidence that ITD in the rapidly varying fine structure of a sound is most important for sound localization and for understanding speech in noise. Cochlear implants (CIs), neural prosthetic devices that restore hearing in the profoundly deaf, are increasingly implanted to both ears to provide implantees with the advantages of binaural hearing. CI listeners have been shown to be sensitive to fine structure ITD at low pulse rates, but their sensitivity declines at higher pulse rates that are required for speech coding. We hypothesize that this limitation in electric stimulation is at least partially due to binaural adaptation associated with periodic stimulation. Here, we show that introducing binaurally synchronized jitter in the stimulation timing causes large improvements in ITD sensitivity at higher pulse rates. Our experimental results demonstrate that a purely temporal trigger can cause recovery from binaural adaptation. Thus, binaurally jittered stimulation may improve several aspects of binaural hearing in bilateral recipients of neural auditory prostheses.
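The manipulation can be sketched as follows: nominally periodic pulse times receive a uniform timing jitter whose width is a fraction k of the pulse period, with identical offsets at the two ears so the jitter stays binaurally synchronized. The parameter names and the uniform distribution are illustrative assumptions.

```python
import random

def jittered_pulse_times(rate_pps, duration_s, k, itd_s=0.0, seed=0):
    """Binaurally synchronized jittered pulse timing (sketch).

    Each pulse is displaced from its nominal periodic position by a
    uniform offset of up to +/- k/2 of the pulse period; the SAME
    offsets are applied at both ears (synchronized jitter), and the
    second ear's train is then delayed by the ITD.
    Returns (left_times, right_times) in seconds.
    """
    rng = random.Random(seed)
    period = 1.0 / rate_pps
    left = []
    for i in range(int(duration_s * rate_pps)):
        left.append(i * period + rng.uniform(-0.5, 0.5) * k * period)
    right = [t + itd_s for t in left]
    return left, right
```

Because the two ears share the same random offsets, the interaural timing relation is untouched; only the monaural periodicity is disrupted.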


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Time–Frequency Sparsity by Removing Perceptually Irrelevant Components Using a Simple Model of Simultaneous Masking

Peter Balazs; Bernhard Laback; Gerhard Eckel; Werner A. Deutsch

We present an algorithm for removing time-frequency components, found by a standard Gabor transform, of a “real-world” sound while causing no audible difference to the original sound after resynthesis. Thus, this representation is made sparser. The selection of removable components is based on a simple model of simultaneous masking in the auditory system. Important goals were the applicability to any real-world music and speech sound, integrating mutual masking effects between time-frequency components, coping with the time-frequency spread of such an operation, and computational efficiency. The proposed algorithm first estimates the masked threshold within an analysis window. The masked threshold function is then shifted in level by an amount determined experimentally, and all components falling below this function (the irrelevance threshold) are removed. This shift gives a conservative way to deal with uncertainty effects resulting from removing time-frequency components and with inaccuracies in the masking model. The removal of components is described as an adaptive Gabor multiplier. Thirty-six normal hearing subjects participated in an experiment to determine the maximum shift value for which they could not discriminate the irrelevance filtered signal from the original signal. On average across the test stimuli, 32 percent of the time-frequency components fell below the irrelevance threshold.
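A heavily simplified sketch of this pipeline: analyze the signal with a windowed FFT (a Gabor transform), approximate a masked threshold within each analysis window, shift it down by a fixed amount, and zero every coefficient below the shifted threshold, i.e., apply a binary adaptive Gabor multiplier. The moving-average "spreading" below is a crude stand-in for the paper's masking model, and all parameter values are illustrative.

```python
import numpy as np

def irrelevance_filter(x, win_len=512, hop=256, shift_db=-20.0):
    """Sparsify a signal by zeroing perceptually weak STFT components.

    Sketch: the masked threshold is approximated by smoothing the
    magnitude spectrum across frequency, then shifted down by shift_db;
    coefficients below the shifted threshold (the irrelevance threshold)
    are removed via a binary Gabor multiplier.
    Returns (masked frames, fraction of components removed).
    """
    win = np.hanning(win_len)
    frames = []
    removed = kept = 0
    for start in range(0, len(x) - win_len + 1, hop):
        X = np.fft.rfft(x[start:start + win_len] * win)
        mag = np.abs(X)
        # toy spreading function: moving average over neighboring bins
        kernel = np.ones(9) / 9.0
        spread = np.convolve(mag, kernel, mode="same")
        thresh = spread * 10.0 ** (shift_db / 20.0)
        mask = mag >= thresh            # the adaptive Gabor multiplier (0/1)
        removed += int(np.sum(~mask))
        kept += int(np.sum(mask))
        frames.append(X * mask)
    return frames, removed / max(removed + kept, 1)
```

Resynthesis (overlap-add of the masked frames) is omitted here; the point is the per-frame thresholding, which makes the coefficient array sparser while strong components survive.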


Attention Perception & Psychophysics | 2010

3-D Localization of Virtual Sound Sources: Effects of Visual Environment, Pointing Method, and Training

Piotr Majdak; Matthew J. Goupell; Bernhard Laback

The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.


Ear and Hearing | 2011

Two-dimensional localization of virtual sound sources in cochlear-implant listeners.

Piotr Majdak; Matthew J. Goupell; Bernhard Laback

Objective: To test localization of sound sources in horizontal and vertical dimensions in cochlear-implant (CI) listeners using clinical bilateral CI systems. Design: Five bilateral CI subjects listened via their clinical speech processors to noises filtered with subject-specific, behind-the-ear microphones and head-related transfer functions. Subjects were immersed in a visual virtual environment presented via a head-mounted display. Subjects used a manual pointer to respond to the perceived sound location and received visual response feedback via the head-mounted display during the tests. The target positions were randomly distributed in two-dimensional space over an azimuth range of 0° to 360° and over an elevation range of −30° to +80°. In experiment 1, the signal level was roved in the range of ±2.5 dB from trial to trial. In experiment 2, the signal level was roved in the range of ±5 dB. Results: CI subjects were generally worse at sound localization than normal-hearing listeners tested in a previous study, in both the horizontal and vertical dimensions. In the horizontal plane, subjects could determine the correct side and locate the target within the side at better than chance performance. In the vertical plane, with a smaller level-roving range, subjects could determine the correct hemifield at better than chance performance but could not locate the target within the correct hemifield. The target angle and response angle were correlated as expected. The response angle and signal level range were also correlated, raising concerns that subjects were using only level cues for the task. With a larger level-roving range, the number of front-back confusions increased. The correlation between the target and response angles decreased, whereas the correlation between the level and response angle did not change, which is an indication that the subjects were relying heavily on level cues. 
Conclusions: For the horizontal plane, the results are in agreement with previous CI studies performed in the horizontal plane with a comparable range of targets. For the vertical plane, CI listeners could discriminate front from back at better than chance performance; however, there are strong indications that the broadband level, not the spectral profile, was used as the primary localization cue. This study indicates the necessity of new CI processing strategies that encode spectral localization cues.


Archive | 2013

Assessment of Sagittal-Plane Sound Localization Performance in Spatial-Audio Applications

Robert Baumgartner; Piotr Majdak; Bernhard Laback

Sound localization in sagittal planes, SPs, including front-back discrimination, relies on spectral cues resulting from the filtering of incoming sounds by the torso, head and pinna. While acoustic spectral features are well-described by head-related transfer functions, HRTFs, models for SP localization performance have received little attention. In this article, a model predicting SP localization performance of human listeners is described. Listener-specific calibrations are provided for 17 listeners as a basis to predict localization performance in various applications. In order to demonstrate the potential of this listener-specific model approach, predictions for three applications are provided, namely, the evaluation of non-individualized HRTFs for binaural recordings, the assessment of the quality of spatial cues for the design of hearing-assist devices and the estimation and improvement of the perceived direction of phantom sources in surround-sound systems.


Journal of the Acoustical Society of America | 2013

Effect of long-term training on sound localization performance with spectrally warped and band-limited head-related transfer functions

Piotr Majdak; Thomas E. Walder; Bernhard Laback

Sound localization in the sagittal planes, including the ability to distinguish front from back, relies on spectral features caused by the filtering effects of the head, pinna, and torso. It is assumed that important spatial cues are encoded in the frequency range between 4 and 16 kHz. In this study, in a double-blind design and using audio-visual training covering the full 3-D space, normal-hearing listeners were trained 2 h per day over three weeks to localize sounds which were either band limited up to 8.5 kHz or spectrally warped from the range between 2.8 and 16 kHz to the range between 2.8 and 8.5 kHz. The training effect for the warped condition exceeded that for procedural task learning, suggesting a stable auditory recalibration due to the training. After the training, performance with band-limited sounds was better than that with warped ones. The results show that training can improve sound localization in cases where spectral cues have been reduced by band-limiting or remapped by warping. This suggests that hearing-impaired listeners, who have limited access to high frequencies, might also improve their localization ability when provided with spectrally warped or band-limited sounds and adequately trained on sound localization.
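The warping manipulation can be sketched as a remapping of spectral magnitudes: content between 2.8 and 16 kHz is squeezed into 2.8 to 8.5 kHz, leaving lower frequencies untouched. The log-frequency mapping and the magnitude-interpolation approach below are assumptions for illustration; the abstract does not specify the warping function.

```python
import numpy as np

def warp_spectrum(x, fs, f0=2800.0, f_src_hi=16000.0, f_dst_hi=8500.0):
    """Compress spectral content from f0..f_src_hi into f0..f_dst_hi.

    Sketch: below f0 the mapping is identity; above f0 a log-frequency
    compression is assumed. Magnitudes are resampled onto the warped
    axis; original phases are reused per output bin.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    mag, phase = np.abs(X), np.angle(X)

    # inverse map: output frequency g -> source frequency it draws from
    def inv(g):
        return f0 * (f_src_hi / f0) ** (np.log(g / f0) / np.log(f_dst_hi / f0))

    out = np.zeros_like(mag)
    out[freqs <= f0] = mag[freqs <= f0]
    mid = (freqs > f0) & (freqs <= f_dst_hi)
    out[mid] = np.interp(inv(freqs[mid]), freqs, mag)   # resample magnitudes
    return np.fft.irfft(out * np.exp(1j * phase), len(x))
```

Under this mapping a 12-kHz tone lands near 7.1 kHz, and nothing remains above 8.5 kHz, mimicking the loss of direct access to the highest frequencies.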


Journal of the Acoustical Society of America | 2010

Median-plane sound localization as a function of the number of spectral channels using a channel vocoder

Matthew J. Goupell; Piotr Majdak; Bernhard Laback

Using a vocoder, median-plane sound localization performance was measured in eight normal-hearing listeners as a function of the number of spectral channels. The channels were contiguous and logarithmically spaced in the range from 0.3 to 16 kHz. Acute testing with vocoded stimuli showed significantly worse localization than for noises and 100-pulse-per-second click trains, both of which were tested after feedback training. However, localization for the vocoded stimuli was better than chance. A second experiment was performed using two different 12-channel spacings for the vocoded stimuli, now including feedback training. One spacing was from experiment 1. The second spacing (called the speech-localization spacing) assigned more channels to the frequency range associated with speech. There was no significant difference in localization between the two spacings. However, even with training, localizing 12-channel vocoded stimuli remained worse than localizing virtual wideband noises by 4.8 degrees in local root-mean-square error and 5.2% in quadrant error rate. Speech understanding for the speech-localization spacing was not significantly different from that for a typical spacing used by cochlear-implant users. These experiments suggest that current cochlear implants have a sufficient number of spectral channels for some vertical-plane sound localization capabilities, albeit worse than normal-hearing listeners, without loss of speech understanding.
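A noise-excited channel vocoder of the kind used for such simulations can be sketched as follows: split the input into contiguous, log-spaced bands, extract each band's temporal envelope, and reimpose it on band-limited noise carriers, so that only envelope cues survive. The FFT-based band filtering and the 4-ms envelope smoother are simplifications chosen for brevity, not the exact processing of the study.

```python
import numpy as np

def channel_vocoder(x, fs, n_channels=12, f_lo=300.0, f_hi=16000.0, seed=0):
    """Noise-excited channel vocoder (sketch).

    Splits x into n_channels contiguous, log-spaced bands, extracts each
    band's temporal envelope (rectification + moving-average smoothing),
    and reimposes it on band-limited noise carriers, discarding the
    within-band fine structure as in CI simulations.
    """
    rng = np.random.RandomState(seed)
    edges = np.geomspace(f_lo, min(f_hi, fs / 2), n_channels + 1)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X = np.fft.rfft(x)
    m = int(fs * 0.004)                       # ~4 ms envelope smoother
    smooth = np.ones(m) / m
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        # band-pass the input via spectral masking
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0), len(x))
        env = np.convolve(np.abs(band), smooth, mode="same")
        # band-limited noise carrier with the same passband
        N = np.fft.rfft(rng.randn(len(x)))
        carrier = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), N, 0), len(x))
        out += env * carrier
    return out
```

Varying `n_channels` reproduces the independent variable of the first experiment; changing `edges` to allocate more channels to the speech range mimics the speech-localization spacing of the second.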

Collaboration


Bernhard Laback's top co-authors:

Piotr Majdak (Austrian Academy of Sciences)
Thibaud Necciari (Austrian Academy of Sciences)
Peter Balazs (Austrian Academy of Sciences)
Robert Baumgartner (Austrian Academy of Sciences)
Sophie Savel (Aix-Marseille University)
Sølvi Ystad (Aix-Marseille University)
Sabine Meunier (Aix-Marseille University)
Werner A. Deutsch (Austrian Academy of Sciences)