Sara Miay Kim Madsen
Technical University of Denmark
Publications
Featured research published by Sara Miay Kim Madsen.
Trends in Hearing | 2014
Sara Miay Kim Madsen; Brian C. J. Moore
The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems.
Scientific Reports | 2017
Sara Miay Kim Madsen; Kelly L. Whiteford; Andrew J. Oxenham
Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers. Here we studied a relatively large (N = 60) cohort of young adults, equally divided between non-musicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone speech. However, no significant intelligibility advantage was observed for musicians over non-musicians in any condition. Although F0 discrimination was significantly better for musicians than for non-musicians, it was not correlated with speech scores. Overall, the results do not support the hypothesis that musical training leads to improved speech intelligibility in complex speech or noise backgrounds.
Trends in Hearing | 2018
Andreu Paredes-Gallardo; Sara Miay Kim Madsen; Torsten Dau; Jeremy Marozeau
The role of temporal cues in sequential stream segregation was investigated in cochlear implant (CI) listeners using a delay detection task composed of a sequence of bursts of pulses (B) on a single electrode interleaved with a second sequence (A) presented on the same electrode with a different pulse rate. In half of the trials, a delay was added to the last burst of the otherwise regular B sequence and the listeners were asked to detect this delay. As a jitter was added to the period between consecutive A bursts, time judgments between the A and B sequences provided an unreliable cue to perform the task. Thus, the segregation of the A and B sequences should improve performance. The pulse rate difference and the duration of the sequences were varied between trials. The performance in the detection task improved by increasing both the pulse rate differences and the sequence duration. This suggests that CI listeners can use pulse rate differences to segregate sequential sounds and that a segregated percept builds up over time. In addition, the contribution of place versus temporal cues for voluntary stream segregation was assessed by combining the results from this study with those from our previous study, where the same paradigm was used to determine the role of place cues on stream segregation. Pitch height differences between the A and the B sounds accounted for the results from both studies, suggesting that stream segregation is related to the salience of the perceptual difference between the sounds.
Trends in Hearing | 2018
Andreu Paredes-Gallardo; Sara Miay Kim Madsen; Torsten Dau; Jeremy Marozeau
Sequential stream segregation by cochlear implant (CI) listeners was investigated using a temporal delay detection task composed of a sequence of regularly presented bursts of pulses on a single electrode (B) interleaved with an irregular sequence (A) presented on a different electrode. In half of the trials, a delay was added to the last burst of the regular B sequence, and the listeners were asked to detect this delay. As a jitter was added to the period between consecutive A bursts, time judgments between the A and B sequences provided an unreliable cue to perform the task. Thus, the segregation of the A and B sequences should improve performance. In Experiment 1, the electrode separation and the sequence duration were varied to clarify whether place cues help CI listeners to voluntarily segregate sounds and whether a two-stream percept needs time to build up. Results suggested that place cues can facilitate the segregation of sequential sounds if enough time is provided to build up a two-stream percept. In Experiment 2, the duration of the sequence was fixed, and only the electrode separation was varied to estimate the fission boundary. Most listeners were able to segregate the sounds for separations of three or more electrodes, and some listeners could segregate sounds coming from adjacent electrodes.
Journal of the Acoustical Society of America | 2014
Sara Miay Kim Madsen; Brian C. J. Moore
The weaker of two temporally overlapping complex tones can be easier to hear when the tones are asynchronous than when they are synchronous. This study explored how the use of fast and slow five-channel amplitude compression, as might be used in hearing aids, affected the ability to use onset and offset asynchronies to detect one (signal) complex tone when another (masking) complex tone was presented almost simultaneously. A 2:1 compression ratio was used with normal-hearing subjects, and individual compression ratios and gains recommended by the CAM2 hearing aid fitting method were used for hearing-impaired subjects. When the signal started before the masker, there was a benefit of compression for both normal-hearing and hearing-impaired subjects. When the signal finished after the masker, there was a benefit of fast compression for the normal-hearing subjects but no benefit for most of the hearing-impaired subjects, except when the offset asynchrony was relatively large (100 ms). The benefit of compression probably occurred because the compression improved the effective signal-to-masker ratio, hence reducing backward and forward masking. This apparently outweighed potential deleterious effects of distortions in envelope shape and the introduction of partially correlated envelopes of the signal and masker.
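The abstract above turns on the distinction between fast and slow multi-channel amplitude compression with a 2:1 compression ratio. As a minimal single-channel sketch of that idea (not the paper's five-channel CAM2-fitted implementation; the threshold and time constants below are illustrative assumptions), a feed-forward compressor tracks the signal envelope with separate attack and release time constants, then reduces gain above a threshold according to the compression ratio:

```python
import numpy as np

def compress(signal, fs, ratio=2.0, threshold_db=-40.0,
             attack_s=0.005, release_s=0.5):
    """Single-channel feed-forward amplitude compressor.

    ratio=2.0 corresponds to the 2:1 compression used for the
    normal-hearing subjects; attack_s/release_s control how
    "fast" or "slow" the gain changes are (values here are
    illustrative, not the paper's settings).
    """
    # Envelope follower: one-pole smoothing with separate
    # attack and release coefficients.
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Static compression curve: above threshold, output level
    # grows at 1/ratio dB per dB of input level.
    env_db = 20.0 * np.log10(np.maximum(env, 1e-12))
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return signal * 10.0 ** (gain_db / 20.0)
```

With a 2:1 ratio, a tone 40 dB above threshold receives roughly 20 dB of gain reduction once the envelope has settled, while signals below threshold pass through at unity gain; shortening the release time makes the gain recover quickly after a masker offset, which is the mechanism by which fast compression can improve the effective signal-to-masker ratio.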
International Journal of Audiology | 2018
Sara Miay Kim Madsen; James M. Harte; Claus Elberling; Torsten Dau
Objective: The aims were to (1) establish which of four algorithms for estimating the residual noise level and signal-to-noise ratio (SNR) in auditory brainstem responses (ABRs) performs best in terms of post-average wave-V peak latency and amplitude errors, and (2) determine whether SNR or noise floor is the better stop criterion when the outcome measure is peak latency or amplitude.
Design: The performance of the algorithms was evaluated by numerical simulations using an ABR template combined with electroencephalographic (EEG) recordings obtained without sound stimulation. The suitability of a fixed-SNR versus a fixed-noise-floor stop criterion was assessed when variations in wave-V waveform shape, reflecting inter-subject variation, were introduced.
Study sample: Over 100 hours of raw EEG noise was recorded from 17 adult subjects under different conditions (e.g., sleep or movement).
Results: ABR feature accuracy was similar for the four algorithms. However, a fixed noise floor led to higher ABR wave-V amplitude accuracy, whereas a fixed SNR yielded higher wave-V latency accuracy.
Conclusion: The similar performance favors the less computationally complex algorithms. Different stop criteria are recommended depending on whether ABR peak latency or amplitude is the outcome measure of interest.
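The residual-noise estimators compared in the study all reduce to the same idea: the noise power remaining in an across-sweep average can be estimated from the variability of the raw sweeps. A minimal sketch of one such estimator (a single-point variance approach, used here for illustration; the paper compares four specific algorithms, none of which this code claims to reproduce exactly) is:

```python
import numpy as np

def abr_snr(sweeps):
    """Estimate residual noise and SNR of an averaged ABR.

    sweeps: (n_sweeps, n_samples) array of raw EEG epochs.
    The variance of one time sample across sweeps, divided by
    the number of sweeps, approximates the noise power left in
    the average (averaging N sweeps reduces noise power by N).
    """
    n_sweeps, n_samples = sweeps.shape
    average = sweeps.mean(axis=0)
    # Across-sweep variance at a single (here: middle) sample;
    # the evoked response is constant across sweeps, so it does
    # not contribute to this variance.
    sp = n_samples // 2
    noise_power = sweeps[:, sp].var(ddof=1) / n_sweeps
    signal_power = average.var()
    snr_db = 10.0 * np.log10(signal_power / noise_power)
    return average, noise_power, snr_db
```

A fixed-SNR stop criterion keeps averaging until `snr_db` exceeds a target, whereas a fixed-noise-floor criterion keeps averaging until `noise_power` falls below a target, independent of response size.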
Hearing Research | 2018
Sara Miay Kim Madsen; Torsten Dau; Brian C. J. Moore
The ability to segregate sounds from different sound sources is thought to depend on the perceptual salience of differences between the sounds, such as differences in frequency or fundamental frequency (F0). F0 discrimination of complex tones is better for tones with low harmonics than for tones that only contain high harmonics, suggesting greater pitch salience for the former. This leads to the expectation that the sequential stream segregation (streaming) of complex tones should be better for tones with low harmonics than for tones with only high harmonics. However, the results of previous studies are conflicting about whether this is the case. The goals of this study were to determine the effect of harmonic rank on streaming and to establish whether streaming is related to F0 discrimination. Thirteen young normal-hearing participants were tested. Streaming was assessed for pure tones and complex tones containing harmonics with various ranks using sequences of ABA triplets, where A and B differed in frequency or in F0. The participants were asked to try to hear two streams and to indicate when they heard one and when they heard two streams. F0 discrimination was measured for the same tones that were used as A tones in the streaming experiment. Both streaming and F0 discrimination worsened significantly with increasing harmonic rank. There was a significant relationship between streaming and F0 discrimination, indicating that good F0 discrimination is associated with good streaming. This supports the idea that the extent of stream segregation depends on the salience of the perceptual difference between successive sounds.
Highlights
- Sequential stream segregation and F0DLs were measured for pure tones and complex tones with varying harmonic rank.
- Segregation increased with decreasing harmonic rank.
- The effect of harmonic rank was progressive, and even small differences in harmonic rank led to differences in segregation.
- Correlations were found between stream segregation and F0DLs.
- This supports the idea that the extent of segregation depends on the salience of the perceptual difference between sounds.
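The ABA-triplet paradigm described above can be sketched as a signal generator: A and B are complex tones differing in F0, each built from a run of consecutive harmonics whose lowest number is the "harmonic rank," and triplets repeat with a silent gap in place of the fourth tone. All durations, F0s, and the number of components below are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

def complex_tone(f0, fs, dur, lowest_harmonic, n_harmonics=10):
    """Complex tone whose lowest component is harmonic number
    `lowest_harmonic` (the 'harmonic rank' varied in the study)."""
    t = np.arange(int(dur * fs)) / fs
    tone = sum(np.sin(2 * np.pi * h * f0 * t)
               for h in range(lowest_harmonic,
                              lowest_harmonic + n_harmonics))
    return tone / n_harmonics  # keep peak amplitude bounded

def aba_sequence(f0_a, f0_b, fs=16000, tone_dur=0.1,
                 gap_dur=0.1, n_triplets=5, lowest_harmonic=1):
    """ABA- triplet sequence: A and B tones differ in F0; '-' is
    a silent gap, giving the galloping rhythm heard when the
    tones fuse into one stream."""
    gap = np.zeros(int(gap_dur * fs))
    a = complex_tone(f0_a, fs, tone_dur, lowest_harmonic)
    b = complex_tone(f0_b, fs, tone_dur, lowest_harmonic)
    triplet = np.concatenate([a, b, a, gap])
    return np.tile(triplet, n_triplets)
```

Raising `lowest_harmonic` while keeping F0 fixed removes the low, resolved harmonics, which is the manipulation that weakened both F0 discrimination and streaming in the study.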
Frontiers in Neuroscience | 2018
Andreu Paredes-Gallardo; Hamish Innes-Brown; Sara Miay Kim Madsen; Torsten Dau; Jeremy Marozeau
The role of the spatial separation between the stimulating electrodes (electrode separation) in sequential stream segregation was explored in cochlear implant (CI) listeners using a deviant detection task. Twelve CI listeners were instructed to attend to a series of target sounds in the presence of interleaved distractor sounds. A deviant was randomly introduced in the target stream either at the beginning, middle or end of each trial. The listeners were asked to detect sequences that contained a deviant and to report its location within the trial. The perceptual segregation of the streams should, therefore, improve deviant detection performance. The electrode range for the distractor sounds was varied, resulting in different amounts of overlap between the target and the distractor streams. For the largest electrode separation condition, event-related potentials (ERPs) were recorded under active and passive listening conditions. The listeners were asked to perform the behavioral task for the active listening condition and encouraged to watch a muted movie for the passive listening condition. Deviant detection performance improved with increasing electrode separation between the streams, suggesting that larger electrode differences facilitate the segregation of the streams. Deviant detection performance was best for deviants happening late in the sequence, indicating that a segregated percept builds up over time. The analysis of the ERP waveforms revealed that auditory selective attention modulates the ERP responses in CI listeners. Specifically, the responses to the target stream were, overall, larger in the active relative to the passive listening condition. Conversely, the ERP responses to the distractor stream were not affected by selective attention. However, no significant correlation was observed between the behavioral performance and the amount of attentional modulation. 
Overall, the findings from the present study suggest that CI listeners can use electrode separation to perceptually group sequential sounds. Moreover, selective attention can be deployed on the resulting auditory objects, as reflected by the attentional modulation of the ERPs at the group level.
Journal of the Acoustical Society of America | 2017
Sara Miay Kim Madsen; Andrew J. Oxenham
Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. This study tested the hypothesis that better fundamental-frequency (F0) discrimination enables musicians to make better use of F0 differences between competing talkers. Sentence intelligibility was measured in a background of noise or two competing talkers, where the target and background speech were natural or monotonized. Participants were also tested with the Vocabulary and Matrix Reasoning subtests of the Wechsler Abbreviated Scale of Intelligence. As expected, speech intelligibility improved with increasing F0 difference between the target and the two maskers for both natural and monotone speech. However, no significant intelligibility advantage in speech was observed for musicians over non-musicians in any condition. F0 discrimination was significantly better for musicians than for non-musicians. Scores in the IQ test did not differ significantly between groups. Overall, the results do not support the hypothesis that musical training leads to improved speech intelligibility in complex speech or noise backgrounds.
Journal of the Acoustical Society of America | 2015
Sara Miay Kim Madsen; Michael A. Stone; Martin F. McKinney; Kelly Fitz; Brian C. J. Moore