Publication


Featured research published by Tim Jürgens.


Journal of the Acoustical Society of America | 2012

A frequency-selective feedback model of auditory efferent suppression and its implications for the recognition of speech in noise

Nicholas R. Clark; Guy J. Brown; Tim Jürgens; Ray Meddis

The potential contribution of the peripheral auditory efferent system to our understanding of speech in a background of competing noise was studied using a computer model of the auditory periphery and assessed using an automatic speech recognition system. A previous study had shown that a fixed efferent attenuation applied to all channels of a multi-channel model could improve the recognition of connected digit triplets in noise [G. J. Brown, R. T. Ferry, and R. Meddis, J. Acoust. Soc. Am. 127, 943-954 (2010)]. In the current study an anatomically justified feedback loop was used to automatically regulate separate attenuation values for each auditory channel. This arrangement resulted in a further enhancement of speech recognition over fixed-attenuation conditions. Comparisons between multi-talker babble and pink noise interference conditions suggest that the benefit originates from the model's ability to modify the amount of suppression in each channel separately according to the spectral shape of the interfering sounds.
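
The central mechanism described above, a feedback loop that sets a separate attenuation for each auditory channel based on that channel's own recent output, can be sketched compactly. The Python toy below is illustrative only: the function, parameter names, and the one-pole level smoothing are assumptions made for this sketch, not the published auditory-periphery model.

```python
import numpy as np

def efferent_feedback(channel_env, fs, strength=0.3, tau=0.05):
    """Toy per-channel efferent feedback loop (illustrative, not the published model):
    each channel's attenuation is driven by a leaky integration of its own recent
    output level, so channels with persistently high output are attenuated more
    strongly than channels with transient activity."""
    alpha = np.exp(-1.0 / (tau * fs))                        # one-pole smoothing coefficient
    n_ch, n_samp = channel_env.shape
    out = np.zeros_like(channel_env)
    level = np.zeros(n_ch)                                   # smoothed output level per channel
    for t in range(n_samp):
        atten_db = strength * 20 * np.log10(1.0 + level)     # level-dependent attenuation in dB
        out[:, t] = channel_env[:, t] * 10 ** (-atten_db / 20)
        level = alpha * level + (1 - alpha) * out[:, t]
    return out
```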


PLOS ONE | 2012

No effect of a single session of transcranial direct current stimulation on experimentally induced pain in patients with chronic low back pain – an exploratory study.

Kerstin Luedtke; Arne May; Tim Jürgens

Transcranial direct current stimulation (tDCS) has been shown to modulate cortical excitability. A small number of studies suggested that tDCS modulates the response to experimental pain paradigms. No trials have been conducted to evaluate the response of patients already suffering from pain to an additional experimental pain before and after tDCS. The present study investigated the effect of a single session of anodal, cathodal, and sham stimulation (15 min/1 mA) over the primary motor cortex on the perceived intensity of repeated noxious thermal and electrical stimuli and on elements of quantitative sensory testing (thermal pain and perception thresholds) applied to the right hand in 15 patients with chronic low back pain. The study was conducted in a double-blind, sham-controlled, cross-over design. No significant alterations of pain ratings were found. Modalities of quantitative sensory testing remained equally unchanged. It is therefore hypothesized that a single 15-min session of tDCS at 1 mA may not be sufficient to alter the perception of experimental pain in patients with chronic pain. Further studies applying repetitive tDCS to patients with chronic pain are required to fully answer the question of whether experimental pain perception may be influenced by tDCS over the motor cortex.


Journal of the Acoustical Society of America | 2010

Human phoneme recognition depending on speech-intrinsic variability.

Bernd T. Meyer; Tim Jürgens; Thorsten Wesker; Thomas Brand; Birger Kollmeier

The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
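
The "transmitted information" analysis mentioned above treats each articulatory feature (e.g., voicing) as a channel and computes the mutual information between presented and perceived feature categories from a confusion matrix. A minimal sketch of that computation follows; the example matrix is made up for illustration and is not data from the study.

```python
import numpy as np

def transmitted_information(confusions):
    """Transmitted (mutual) information in bits computed from a stimulus-response
    confusion matrix; rows are presented categories, columns are responses.
    Illustrative sketch, not the paper's analysis code."""
    p = confusions / confusions.sum()               # joint probabilities p(x, y)
    px = p.sum(axis=1, keepdims=True)               # stimulus marginals p(x)
    py = p.sum(axis=0, keepdims=True)               # response marginals p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return float(np.sum(terms))

# Hypothetical voicing confusion matrix pooled over consonants (not study data):
voicing = np.array([[90.0, 10.0],
                    [15.0, 85.0]])
print(transmitted_information(voicing))             # mutual information in bits
```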


International Journal of Audiology | 2015

Influence of noise type on speech reception thresholds across four languages measured with matrix sentence tests

Sabine Hochmuth; Birger Kollmeier; Thomas Brand; Tim Jürgens

Objective: To compare speech reception thresholds (SRTs) in noise using matrix sentence tests in four languages: German, Spanish, Russian, and Polish. Design: The four tests were composed of equivalent five-word sentences and were all designed and optimized using the same principles. Six stationary speech-shaped noises and three non-stationary noises were used as maskers. Study sample: Forty native listeners with normal hearing: 10 for each language. Results: SRTs were about 3 dB higher for the German and Spanish tests than for the Russian and Polish tests when stationary noise was used that matched the long-term frequency spectrum of the respective speech test materials. This general SRT difference was also observed for the other stationary noises. The within-test variability across noise conditions differed between languages. About 56% of the observed variance was predicted by the speech intelligibility index. The observed SRT benefit in fluctuating noise was similar for all tests, with a slightly smaller benefit for the Spanish test. Conclusions: Of the stationary noises employed, noise with the same spectrum as the speech yielded the best masking. SRT differences across languages and noises could be attributed in part to spectral differences. These findings demonstrate the feasibility and the limits of comparing audiological results across languages.
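
The speech intelligibility index referred to above is, at its core, a band-importance-weighted average of the audible signal-to-noise ratio per frequency band. The sketch below shows only this core idea and is deliberately simplified; the standardized SII (ANSI S3.5) adds further corrections (e.g., level distortion and spread of masking) that are omitted here.

```python
import numpy as np

def simplified_sii(speech_spectrum_db, noise_spectrum_db, band_importance):
    """Very simplified SII-style index: per-band SNRs are clipped to [-15, +15] dB,
    mapped linearly to an audibility value in [0, 1], and combined using
    band-importance weights that sum to 1. Illustrative only."""
    snr = np.clip(np.asarray(speech_spectrum_db) - np.asarray(noise_spectrum_db), -15.0, 15.0)
    audibility = (snr + 15.0) / 30.0          # 0 = inaudible band, 1 = fully audible band
    return float(np.sum(np.asarray(band_importance) * audibility))
```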


PLOS ONE | 2014

An improved model of heat-induced hyperalgesia – repetitive phasic heat pain causing primary hyperalgesia to heat and secondary hyperalgesia to pinprick and light touch.

Tim Jürgens; Alexander Sawatzki; Florian Henrich; Walter Magerl; Arne May

This study tested a modified experimental model of heat-induced hyperalgesia, which improves the efficacy of inducing primary and secondary hyperalgesia and the efficacy-to-safety ratio, reducing the risk of tissue damage seen in other heat pain models. Quantitative sensory testing was done in eighteen healthy volunteers before and after repetitive heat pain stimuli (60 stimuli of 48°C for 6 s) to assess the impact of repetitive heat on somatosensory function in conditioned skin (primary hyperalgesia area) and in adjacent skin (secondary hyperalgesia area) as compared to an unconditioned mirror-image control site. Additionally, the areas of flare and secondary hyperalgesia were mapped, and the time course of hyperalgesia was determined. After repetitive heat pain conditioning, we found significant primary hyperalgesia to heat, and primary and secondary hyperalgesia to pinprick and to light touch (dynamic mechanical allodynia). Acetaminophen (800 mg) reduced pain to heat or pinpricks only marginally by 11% and 8%, respectively (n.s.), and had no effect on heat hyperalgesia. In contrast, the areas of flare (−31%) and in particular of secondary hyperalgesia (−59%) as well as the magnitude of hyperalgesia (−59%) were significantly reduced (all p<0.001). Thus, repetitive heat pain induces significant peripheral sensitization (primary hyperalgesia to heat) and central sensitization (punctate hyperalgesia and dynamic mechanical allodynia). These findings are relevant to further studies using this model of experimental heat pain, as it combines pronounced peripheral and central sensitization, which makes it a convenient model for combined pharmacological testing of analgesia and anti-hyperalgesia mechanisms related to thermal and mechanical input.


International Journal of Audiology | 2016

Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing

Tim Jürgens; Nicholas R. Clark; Wendy Lecluyse; Ray Meddis

Abstract Objective: To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. Design: A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli with a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. Study sample: A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. Results: The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. Conclusion: The hearing-aid algorithm shifted the model's impaired hearing profile toward a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.


International Journal of Audiology | 2014

Hearing dummies: individualized computer models of hearing impairment.

Manasa R. Panda; Wendy Lecluyse; Christine M. Tan; Tim Jürgens; Ray Meddis

Abstract Objective: Our aim was to explore the use of individualized computer models to simulate hearing loss based on detailed psychophysical assessment and to offer hypothetical diagnoses of the underlying pathology. Design: Individualized computer models of normal and impaired hearing were constructed and evaluated using the psychophysical data obtained from human listeners. Computer models of impaired hearing were generated to reflect the hypothesized underlying pathology (e.g. dead regions, outer hair cell dysfunction, or reductions in endocochlear potential). These models were evaluated in terms of their ability to replicate the original patient data. Study sample: Auditory profiles were measured for two normal and five hearing-impaired listeners using a battery of three psychophysical tests (absolute thresholds, frequency selectivity, and compression). Results: The individualized computer models were found to match the listeners' psychophysical data. Useful fits to the impaired profiles could be obtained by changing only a single parameter in the model of normal hearing. Sometimes, however, it was necessary to include an additional dead region. Conclusion: Individualized computer models of hearing loss can be used to simulate auditory profiles of impaired listeners and to suggest hypotheses concerning the underlying peripheral pathology.


Trends in Hearing | 2015

Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing.

Ben Williges; Mathias Dietz; Volker Hohmann; Tim Jürgens

For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.
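
Spatial release from masking (SRM) as used above is conventionally reported as the difference between the speech reception threshold measured with co-located target and masker and the threshold measured with spatially separated sources. A minimal sketch follows; the example numbers are hypothetical and are not values from the study.

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Spatial release from masking in dB: the improvement in speech reception
    threshold obtained by spatially separating target speech and masker.
    Positive values indicate a benefit from the separation."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs (dB SNR), not data from the study:
print(spatial_release_from_masking(-6.0, -11.0))   # -> 5.0 dB of spatial release
```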


International Journal of Audiology | 2015

Talker- and language-specific effects on speech intelligibility in noise assessed with bilingual talkers: Which language is more robust against noise and reverberation?

Sabine Hochmuth; Tim Jürgens; Thomas Brand; Birger Kollmeier

Objective: To investigate talker- and language-specific aspects of speech intelligibility in noise and reverberation using highly comparable matrix sentence tests across languages. Design: Matrix sentences spoken by German/Russian and German/Spanish bilingual talkers were recorded. These sentences were used to measure speech reception thresholds (SRTs) with native listeners in the respective languages in different listening conditions (stationary and fluctuating noise, multi-talker babble, and a reverberated speech-in-noise condition). Study sample: Four German/Russian and four German/Spanish bilingual talkers; 20 native German-speaking, 10 native Russian-speaking, and 10 native Spanish-speaking listeners. Results: Across-talker SRT differences of up to 6 dB were found for both groups of bilinguals. SRTs of German/Russian bilingual talkers were the same in both languages. SRTs of German/Spanish bilingual talkers were higher when they talked in Spanish than when they talked in German. The benefit from listening in the gaps was similar across all languages. The detrimental effect of reverberation was larger for Spanish than for German and Russian. Conclusions: Within the limitations set by the number and slight accentedness of talkers and other possible confounding factors, talker- and test-condition-dependent differences were isolated from the language effect: Russian and German exhibited similar intelligibility in noise and reverberation, whereas Spanish was more impaired in these situations.


Trends in Hearing | 2016

Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm

Florian Langner; Tim Jürgens

Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, yielding significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies, particularly for situations in which high frequency selectivity is desired, such as music perception.
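
The sharpness of a psychophysical tuning curve is often summarized by a Q10 value: the probe frequency divided by the curve's bandwidth 10 dB above its tip. The sketch below estimates this metric by linear interpolation of the measured points; it is a rough illustration, and the input values are hypothetical, not the analysis or data of the paper.

```python
import numpy as np

def q10_from_ptc(masker_freqs_hz, masker_levels_db, probe_freq_hz):
    """Estimate PTC sharpness as Q10 = probe frequency / bandwidth of the tuning
    curve 10 dB above its tip. masker_freqs_hz must be sorted in ascending order.
    Rough illustrative metric only."""
    freqs = np.asarray(masker_freqs_hz, dtype=float)
    levels = np.asarray(masker_levels_db, dtype=float)
    target = levels.min() + 10.0                      # 10 dB above the tip
    fine_f = np.linspace(freqs.min(), freqs.max(), 10000)
    fine_l = np.interp(fine_f, freqs, levels)
    within = fine_f[fine_l <= target]                 # frequencies within 10 dB of the tip
    return probe_freq_hz / (within.max() - within.min())

# Hypothetical PTC around a 1-kHz probe (not measured data):
f = [500, 700, 850, 950, 1000, 1050, 1150, 1300]
l = [85, 70, 55, 42, 40, 45, 60, 78]
print(q10_from_ptc(f, l, 1000.0))                     # larger Q10 = sharper tuning
```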

Collaboration


An overview of Tim Jürgens's collaborations.

Top Co-Authors


Thomas Brand

University of Oldenburg

Guy J. Brown

University of Sheffield
