Torsten Dau
University of Copenhagen
Publications
Featured research published by Torsten Dau.
Workshop on Applications of Signal Processing to Audio and Acoustics | 2013
Tobias May; Torsten Dau
We present a monaural approach to speech segregation that estimates the ideal binary mask (IBM) by combining amplitude modulation spectrogram (AMS) features, pitch-based features and speech presence probability (SPP) features derived from noise statistics. To maintain a high mask estimation accuracy in the presence of various background noises, the system employs environment-specific segregation models and automatically selects the appropriate model for a given input signal. Furthermore, instead of classifying each time-frequency (T-F) unit independently, the a posteriori probabilities of speech and noise presence are evaluated by considering adjacent T-F units. The proposed system achieves high classification accuracy.
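As an illustration of the mask-estimation step described in this abstract, the following minimal Python sketch thresholds per-unit speech posteriors after pooling them over adjacent time-frequency units. It is not the authors' implementation: the classifier that would produce the posteriors is not shown, and the neighborhood size and threshold are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_ibm(speech_posterior, neighborhood=(3, 3), threshold=0.5):
    """Estimate a binary mask from per-unit speech posteriors.

    speech_posterior : 2-D array (frequency x time) of P(speech | features),
                       assumed to come from some upstream classifier.
    neighborhood     : local T-F window used to pool evidence from adjacent units.
    threshold        : posterior value above which a unit is labeled as speech.
    """
    # Pool the a posteriori speech probability over adjacent T-F units
    # instead of deciding on each unit in isolation.
    smoothed = uniform_filter(speech_posterior, size=neighborhood, mode="nearest")
    return (smoothed > threshold).astype(np.uint8)

# Toy usage with random posteriors standing in for real classifier output.
rng = np.random.default_rng(0)
posteriors = rng.uniform(size=(64, 200))   # 64 channels x 200 frames
mask = estimate_ibm(posteriors)
print(mask.shape, mask.mean())
```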
Journal of the Acoustical Society of America | 2018
Torsten Dau
Speech intelligibility depends on factors related to the auditory processes involved in sound perception as well as on the acoustic properties of the sound entering the ear. A clear understanding of speech perception in complex acoustic conditions remains a challenge. Here, a computational modeling framework is presented that attempts to predict the speech intelligibility obtained by normal-hearing and hearing-impaired listeners in various adverse conditions. The model combines the concept of envelope frequency selectivity in the auditory processing of the sound with a decision metric that is based either on the signal-to-noise envelope power ratio or a correlation measure. The proposed model is able to account for the effects of stationary background noise, reverberation, nonlinear distortions and noise reduction processing on speech intelligibility. However, due to its simplified auditory preprocessing stages, the model fails to account for the consequences of individual hearing loss on intelligibility. To address this, physiologically inspired extensions of the auditory preprocessing in the model are combined with the modulation-frequency selective processing and the back-end processing that have been successful in the conditions tested with normal-hearing listeners. The goal is to disentangle the consequences of different types of hearing deficits on speech intelligibility in a given acoustic scenario.
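The signal-to-noise envelope power ratio mentioned as a decision metric can be illustrated with a small Python sketch. This is a simplified stand-in, not the published model: it omits the peripheral auditory filterbank, and the modulation bands and regularization constants are arbitrary assumptions. In the published framework this kind of metric is converted to predicted intelligibility by a separate back-end stage, which is not shown here.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope(x):
    """Temporal envelope via the magnitude of the analytic signal."""
    return np.abs(hilbert(x))

def band_envelope_power(env, fs, f_lo, f_hi):
    """AC envelope power in one modulation band, normalized by the DC power."""
    sos = butter(2, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    band = sosfiltfilt(sos, env - env.mean())
    return np.mean(band ** 2) / (env.mean() ** 2 + 1e-12)

def snr_env(mixture, noise, fs, mod_bands=((1, 2), (2, 4), (4, 8), (8, 16))):
    """Signal-to-noise envelope power ratio summed over modulation bands."""
    env_mix, env_noise = envelope(mixture), envelope(noise)
    snr = 0.0
    for f_lo, f_hi in mod_bands:
        p_mix = band_envelope_power(env_mix, fs, f_lo, f_hi)
        p_noise = band_envelope_power(env_noise, fs, f_lo, f_hi)
        # Envelope power attributed to speech, floored at a small positive value.
        p_speech = max(p_mix - p_noise, 1e-6)
        snr += p_speech / (p_noise + 1e-12)
    return snr
```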
Journal of the Acoustical Society of America | 2018
Torsten Dau; Jonatan Maercher Roersted; Søren Fuglsang; Jens Hjortkjær
Single-trial EEG measures of selective auditory attention have recently suggested the possibility of decoding whom a listener is focusing on in multi-talker scenarios. Here, we report results from work within the COCOHA (Cognitive Control of a Hearing Aid) project investigating the possibility of integrating EEG into neuro-steered hearing instruments. Our EEG decoding strategy relies on measuring cortical activity entrained to envelope fluctuations in the attended speech signal. A major challenge, however, has been to obtain robust EEG measures of selective attention in older hearing-impaired (HI) listeners. We report our recent COCOHA attempts to decode selective attention from the EEG of HI listeners. Aided HI listeners and age-matched normal-hearing controls were presented with competing talkers at 0 dB target-to-masker ratio and instructed to attend to one talker. We show that single-trial decoding accuracies similar to those reported for younger listeners can be obtained with both ...
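A minimal Python sketch of the general stimulus-reconstruction approach to attention decoding is given below. It is not the COCOHA pipeline: the lag range, ridge parameter, and helper names are illustrative assumptions, and preprocessing steps such as filtering and artifact rejection are omitted.

```python
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    n, c = eeg.shape
    lagged = np.zeros((n, c * max_lag))
    for lag in range(max_lag):
        lagged[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return lagged

def train_decoder(eeg, attended_env, max_lag=32, ridge=1e3):
    """Fit a linear backward model reconstructing the attended envelope from EEG."""
    X = lag_matrix(eeg, max_lag)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, weights, max_lag=32):
    """Return 0 if the reconstruction correlates more with talker A, else 1."""
    rec = lag_matrix(eeg, max_lag) @ weights
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return 0 if r_a > r_b else 1
```

In this kind of scheme, the decoder is trained on trials where the attended talker is known and then applied to held-out trials; single-trial accuracy is the fraction of trials for which the correlation with the attended talker's envelope is the larger of the two.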
DAGA 2014: 40th Annual German Congress on Acoustics | 2014
Johannes Käsbach; Tobias May; Nicolas Le Goff; Torsten Dau
7th Forum Acusticum | 2014
Jens Cubick; Sébastien Santurette; Torsten Dau
7th Forum Acusticum | 2014
Federica Bianchi; Sébastien Santurette; Dorothea Wendt; Torsten Dau
Archive | 2008
Gilles Pigasse; Torsten Dau; James M. Harte
Journal of the Association for Research in Otolaryngology (JARO) | 2018
Gerard Encina-Llamas; James M. Harte; Torsten Dau; Barbara G. Shinn-Cunningham; Bastian Epp
ARO Midwinter Meeting (abstract) | 2018
Daniel E. Wong; Jens Hjortkjær; Enea Ceolini; Søren Vørnle Nielsen; Sergi Rotger Griful; Søren A. Fuglsang; Maria Chait; Thomas Lunner; Torsten Dau; Shih-Chii Liu; Alain de Cheveigné
2018 Joint Conference - Acoustics | 2018
Borys Kowalewski; Tobias May; Michal Fereczkowski; Johannes Zaar; Olaf Strelcyk; Ewen N. MacDonald; Torsten Dau