Publication


Featured research published by Karen A. Doherty.


Ear and Hearing | 2006

Determination of the potential benefit of time-frequency gain manipulation

Michael C. Anzalone; Lauren Calandruccio; Karen A. Doherty; Laurel H. Carney

Objective: The purpose of this study was to determine the maximum benefit provided by a time-frequency gain-manipulation algorithm for noise-reduction (NR) based on an ideal detector of speech energy. The amount of detected energy necessary to show benefit using this type of NR algorithm was examined, as well as the necessary speed and frequency resolution of the gain manipulation. Design: NR was performed using time-frequency gain manipulation, wherein the gains of individual frequency bands depended on the absence or presence of speech energy within each band. Three different experiments were performed: (1) NR using ideal detectors, (2) NR with nonideal detectors, and (3) NR with ideal detectors and different processing speeds and frequency resolutions. All experiments were performed using the Hearing-in-Noise test (HINT). A total of 6 listeners with normal hearing and 14 listeners with hearing loss were tested. Results: HINT thresholds improved for all listeners with NR based on the ideal detectors used in Experiment I. The nonideal detectors of Experiment II required detection of at least 90% of the speech energy before an improvement was seen in HINT thresholds. The results of Experiment III demonstrated that relatively high temporal resolution (<100 msec) was required by the NR algorithm to improve HINT thresholds. Conclusions: The results indicated that a single-microphone NR system based on time-frequency gain manipulation improved the HINT thresholds of listeners. However, to obtain benefit in speech intelligibility, the detectors used in such a strategy were required to detect an unrealistically high percentage of the speech energy and to perform the gain manipulations on a fast temporal basis.
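
The abstract does not describe the implementation, but the core idea of time-frequency gain manipulation with an ideal (oracle) speech-energy detector can be sketched as follows: split the mixture into time-frequency bins and attenuate bins in which the clean speech, known to the oracle, contributes little energy. The frame length, attenuation depth, and threshold below are illustrative assumptions, not the study's parameters.

    import numpy as np
    from scipy.signal import stft, istft

    def ideal_tf_gain_nr(mixture, clean_speech, fs, atten_db=-20.0, thresh_db=0.0):
        """Illustrative time-frequency gain manipulation driven by an 'ideal'
        detector of speech energy: bins where the clean speech is weak relative
        to the residual noise are attenuated. Parameters are assumptions."""
        nperseg = int(0.02 * fs)                        # ~20-ms frames (assumed)
        _, _, S_mix = stft(mixture, fs, nperseg=nperseg)
        _, _, S_sp = stft(clean_speech, fs, nperseg=nperseg)

        noise_power = np.abs(S_mix - S_sp) ** 2 + 1e-12   # oracle noise estimate
        speech_power = np.abs(S_sp) ** 2
        local_snr_db = 10.0 * np.log10(speech_power / noise_power + 1e-12)

        # Keep bins dominated by speech; attenuate the rest
        gain = np.where(local_snr_db >= thresh_db, 1.0, 10.0 ** (atten_db / 20.0))
        _, enhanced = istft(S_mix * gain, fs, nperseg=nperseg)
        return enhanced

In this sketch the effective temporal resolution of the gain changes is set by the frame length; Experiment III in the study examined how coarse that resolution can be before the intelligibility benefit disappears.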


Ear and Hearing | 2013

Age-related changes in listening effort for various types of masker noises.

Jamie L. Desjardins; Karen A. Doherty

Objective: The purpose of the present study was to evaluate the relationship between cognitive function, listening effort, and speech recognition for a group of younger and older adults with normal hearing and a group of older adults with hearing impairment in various types of background maskers. The authors hypothesized that, as the masker condition became more difficult, listening effort would increase, and that the increase would be greater for older participants than for younger participants. Design: A dual-task paradigm was used to objectively evaluate listening effort. The primary task required participants to repeat sentences presented in three different background-masker conditions: (1) two-talker (TT), (2) six-talker, and (3) speech-shaped noise (SSN). The secondary task was a Digital Visual Pursuit Rotor Tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was separately and concurrently presented at a fixed overall speech-recognition performance level of 76% correct. In addition, participants subjectively rated how easy it was to listen to the sentences in each masker condition on a scale from 0 (i.e., very difficult) to 100 (i.e., very easy). Last, participants completed a battery of cognitive tests that measured working memory (Reading Span Test), processing speed (Digit Symbol Substitution Test), and selective attention (Stroop Test) ability. Results: Results revealed that participants’ working memory and processing speed abilities were significantly related to their speech-recognition performance in noise in all three background-masker conditions. Participants rated the TT condition to be the most difficult listening condition and the SSN condition to be the easiest listening condition. Both groups of older participants expended significantly more listening effort than younger participants did in the SSN and TT masker conditions. For each group of participants, there were no significant differences in listening effort measured across the masker conditions, with the exception of the younger participants, who expended more effort listening in the six-talker masker condition compared with the SSN condition. Participants’ listening effort expended on the TT and SSN masker conditions was significantly correlated with their working memory and processing speed performance. Conclusions: Findings from the present study indicate that older adults require more cognitive resources than younger adults to understand speech in background noise.
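
The abstract specifies the dual-task logic but not the scoring. A common way to quantify listening effort in such paradigms is the proportional decline of secondary-task (tracking) performance from its single-task baseline; the sketch below illustrates that idea only and is not the authors' exact analysis.

    def dual_task_cost(tracking_alone, tracking_dual):
        """Listening effort expressed as the percentage drop in visual-tracking
        accuracy from the single-task baseline to the concurrent (dual-task)
        condition. Larger values indicate more effort diverted to listening.
        Illustrative scoring only."""
        return 100.0 * (tracking_alone - tracking_dual) / tracking_alone

    # Example: 92% tracking accuracy alone vs. 74% while repeating sentences
    effort_tt = dual_task_cost(92.0, 74.0)   # ~19.6% dual-task cost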


Ear and Hearing | 2014

The effect of hearing aid noise reduction on listening effort in hearing-impaired adults.

Jamie L. Desjardins; Karen A. Doherty

Objectives: The purpose of the present study was to evaluate the effect of a noise-reduction (NR) algorithm on the listening effort hearing-impaired participants expend on a speech in noise task. Design: Twelve hearing-impaired listeners fitted with behind-the-ear hearing aids with a fast-acting modulation–based NR algorithm participated in this study. A dual-task paradigm was used to measure listening effort with and without the NR enabled in the hearing aid. The primary task was a sentence-in-noise task presented at fixed overall speech performance levels of 76% (moderate listening condition) and 50% (difficult listening condition) correct performance, and the secondary task was a visual-tracking test. Participants also completed measures of working memory (Reading Span test), and processing speed (Digit Symbol Substitution Test) ability. Results: Participants’ speech recognition in noise scores did not significantly change with the NR algorithm activated in the hearing aid in either listening condition. The NR algorithm significantly decreased listening effort, but only in the more difficult listening condition. Last, there was a tendency for participants with faster processing speeds to expend less listening effort with the NR algorithm when listening to speech in background noise in the difficult listening condition. Conclusions: The NR algorithm reduced the listening effort adults with hearing loss must expend to understand speech in noise.


Journal of the Acoustical Society of America | 1996

Spectral weights for overall level discrimination in listeners with sensorineural hearing loss

Karen A. Doherty; Robert A. Lutfi

A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to measure the weight or relative reliance that normal-hearing and hearing-impaired listeners give to different frequencies in the discrimination of the overall level of a multitone complex. On each trial, two multitone complexes composed of six octave frequencies from 250 to 8000 Hz were presented to subjects. The levels of the frequencies for each complex were randomly varied. The listeners' task was to identify the complex with the higher overall intensity level. Normal-hearing listeners used a variety of listening strategies to perform the task, showing no general preference to weight one component over another. Hearing-impaired listeners, however, showed a general tendency to give greatest weight to the spectral information in the region of their hearing loss. Thirteen of the 14 hearing-impaired listeners, all of whom had a high-frequency sensorineural hearing loss, weighted one or more of the high-frequency components in the complex the greatest.
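
Berg's COSS procedure is not reproduced here, but the underlying idea of recovering relative weights from trial-by-trial level perturbations can be illustrated with a simpler stand-in: regress the listener's interval choice on the per-component level differences between the two complexes and normalize the coefficients. The function name and regression approach below are illustrative, not the published analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def relative_weights(level_diffs, chose_second):
        """level_diffs  : (n_trials, n_components) per-component level differences
                          between interval 2 and interval 1, in dB.
           chose_second : (n_trials,) 1 if the listener chose interval 2, else 0.
           Returns normalized weights; a logistic-regression stand-in for a
           COSS-style decision-weight analysis (illustrative only)."""
        coef = LogisticRegression().fit(level_diffs, chose_second).coef_.ravel()
        return np.abs(coef) / np.abs(coef).sum()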


Journal of the Acoustical Society of America | 1996

Use of a correlational method to estimate a listener’s weighting function for speech

Karen A. Doherty; Christopher W. Turner

The purpose of this study was to determine if it is feasible to use the correlational method (Lutfi, 1995; Richards and Zhu, 1994) to estimate how listeners use or weight the information contained within various frequency bands of speech. Three naturally spoken vowel-consonant-vowel (VCV) syllables (/aba/, /aga/, and /ada/) were presented monaurally to listeners. Each of the VCV waveforms was filtered into three separate frequency bands (i.e., low, mid, and high). Each band was then independently and randomly degraded at various signal-to-noise (S/N) levels (-7, -5, -3, -1, or +1 dB). On each trial, listeners were asked to identify the VCV that was presented to them. For each trial, the S/N level of each frequency band, the stimulus that was presented, and the listeners' responses were all recorded and stored in a file. From these trial-by-trial data, a point biserial correlation was computed between the listener's response (correct or incorrect identification) and the degradation within each frequency band. The stronger the correlation, the greater the influence that frequency band had on the listener's performance on the task. From these relations it was shown that it is possible to obtain a listener's weighting function for speech. Results showed that although most listeners weighted the mid-frequency band the greatest, several of the listeners used different weighting strategies to perform the task. Several methodological issues are discussed in regard to improving the future application of the correlational method to speech.
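
The correlational method described above reduces to a simple trial-by-trial computation: for each frequency band, correlate the binary correctness of the response with the S/N level assigned to that band on each trial. A minimal sketch (array names are assumptions):

    import numpy as np
    from scipy.stats import pointbiserialr

    def band_correlations(correct, band_snrs):
        """correct   : (n_trials,) 0/1 array, 1 = VCV identified correctly.
           band_snrs : (n_trials, n_bands) S/N level of each band on each trial.
           Returns the point-biserial correlation for each band; bands with
           stronger correlations carried more weight in the listener's decisions."""
        return np.array([pointbiserialr(correct, band_snrs[:, b])[0]  # correlation
                         for b in range(band_snrs.shape[1])])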


Journal of the Acoustical Society of America | 1998

Frequency-weighting functions for broadband speech as estimated by a correlational method

Christopher W. Turner; Bom Jun Kwon; Chiemi Tanaka; Jennifer Knapp; Jodi L. Hubbartt; Karen A. Doherty

The relative contributions of various regions of the frequency spectrum to speech recognition were assessed with a correlational method [K. A. Doherty and C. W. Turner, J. Acoust. Soc. Am. 100, 3769-3773 (1996)]. The speech materials employed were the 258-item set of the Nonsense Syllable Test. The speech was filtered into four frequency bands and a random level of noise was added to each band on each trial. A point biserial correlation was computed between the signal-to-noise ratio in each band on each trial and the listeners' responses, and these correlations were then taken as estimates of the relative weights for each frequency band. When the four bands were presented separately, the correlations for each band were approximately equal; however, when the four bands were presented in combination, the correlations were quite different from one another, implying that in the broadband case listeners relied much more on some bands than on others. It is hypothesized that these differences reflect the way in which listeners combine and attend to speech information across various frequency regions. The frequency-weighting functions as determined by this method were highly similar across all subjects, suggesting that normal-hearing listeners use similar frequency-weighting strategies in recognizing speech.
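
Stimulus generation for this paradigm amounts to splitting the speech into contiguous bands and, on every trial, degrading each band with noise at an independently drawn level. The band edges, filter order, and S/N set below are illustrative assumptions, not the study's values.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def degrade_by_band(speech, fs, edges_hz=(100, 800, 2200, 4500, 8000),
                        snr_choices_db=(-7, -5, -3, -1, 1), rng=None):
        """Split speech into contiguous bands, add band-limited noise to each band
        at an independently drawn S/N, and return the recombined signal plus the
        per-band S/N values (logged for the correlational analysis).
        Band edges and S/N set are assumptions for illustration."""
        rng = rng or np.random.default_rng()
        out = np.zeros(len(speech))
        trial_snrs = []
        for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, speech)
            noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
            snr_db = float(rng.choice(snr_choices_db))
            # Scale the noise so band-speech power / noise power matches snr_db
            noise *= np.sqrt(np.mean(band**2) / (np.mean(noise**2) * 10**(snr_db / 10)))
            out += band + noise
            trial_snrs.append(snr_db)
        return out, trial_snrs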


Journal of the Acoustical Society of America | 1999

Level discrimination of single tones in a multitone complex by normal-hearing and hearing-impaired listeners

Karen A. Doherty; Robert A. Lutfi

A conditional-on-a-single-stimulus (COSS) analysis procedure [B. G. Berg, J. Acoust. Soc. Am. 86, 1743-1746 (1989)] was used to estimate how well normal-hearing and hearing-impaired listeners selectively attend to individual spectral components of a broadband signal in a level discrimination task. On each trial, two multitone complexes consisting of six octave frequencies from 250 to 8000 Hz were presented to listeners. The levels of the individual tones were chosen independently and at random on each presentation. The target tone was selected, within a block of trials, as the 250-, 1000-, or 4000-Hz component. On each trial, listeners were asked to indicate which of the two complex sounds contained the higher level target. As a group, normal-hearing listeners exhibited greater selectivity than hearing-impaired listeners to the 250-Hz target, while hearing-impaired listeners showed greater selectivity than normal-hearing listeners to the 4000-Hz target, which is in the region of their hearing loss. Both groups of listeners displayed large variability in their ability to selectively weight the 1000-Hz target. Trial-by-trial analysis showed a decrease in weighting efficiency with increasing frequency for normal-hearing listeners, but a relatively constant weighting efficiency across frequency for hearing-impaired listeners. Interestingly, hearing-impaired listeners selectively weighted the 4000-Hz target, which was in the region of their hearing loss, more efficiently than did the normal-hearing listeners.


Frontiers in Psychology | 2015

The benefit of amplification on auditory working memory function in middle-aged and young-older hearing impaired adults

Karen A. Doherty; Jamie L. Desjardins

Untreated hearing loss can interfere with an individual’s cognitive abilities and intellectual function. Specifically, hearing loss has been shown to negatively impact working memory function, which is important for speech understanding, especially in difficult or noisy listening conditions. The purpose of the present study was to assess the effect of hearing aid use on auditory working memory function in middle-aged and young-older adults with mild to moderate sensorineural hearing loss. Participants completed two objective measures of auditory working memory in aided and unaided listening conditions. An age-matched control group followed the same experimental protocol except that they were not fitted with hearing aids. All participants’ scores on the auditory working memory tests were significantly better when they were wearing hearing aids. Thus, hearing aids worn during the early stages of an age-related hearing loss can improve a person’s performance on auditory working memory tests.


JARO: Journal of the Association for Research in Otolaryngology | 2015

Cues for Diotic and Dichotic Detection of a 500-Hz Tone in Noise Vary with Hearing Loss

Junwen Mao; Kelly Jo Koch; Karen A. Doherty; Laurel H. Carney

Hearing in noise is a challenge for all listeners, especially for those with hearing loss. This study compares cues used for detection of a low-frequency tone in noise by older listeners with and without hearing loss to those of younger listeners with normal hearing. Performance varies significantly across different reproducible, or “frozen,” masker waveforms. Analysis of these waveforms allows identification of the cues that are used for detection. This study included diotic (N0S0) and dichotic (N0Sπ) detection of a 500-Hz tone, with either narrowband or wideband masker waveforms. Both diotic and dichotic detection patterns (hit and false alarm rates) across the ensembles of noise maskers were predicted by envelope-slope cues, and diotic results were also predicted by energy cues. The relative importance of energy and envelope cues for diotic detection was explored with a roving-level paradigm that made energy cues unreliable. Most older listeners with normal hearing or mild hearing loss depended on envelope-related temporal cues, even for this low-frequency target. As hearing threshold at 500 Hz increased, the cues for diotic detection transitioned from envelope to energy cues. Diotic detection patterns for young listeners with normal hearing are best predicted by a model that combines temporal- and energy-related cues; in contrast, combining cues did not improve predictions for older listeners with or without hearing loss. Dichotic detection results for all groups of listeners were best predicted by interaural envelope cues, which significantly outperformed the classic cues based on interaural time and level differences or their optimal combination.
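
The decision statistics referred to in the abstract can be computed directly from each reproducible (frozen) masker-plus-tone waveform. Below is a minimal sketch of the two diotic cues, assuming standard formulations (stimulus energy, and the average slope of the normalized Hilbert envelope); band-limiting the waveform to the critical band around 500 Hz would precede these computations and is omitted here.

    import numpy as np
    from scipy.signal import hilbert

    def energy_cue(x):
        """Energy statistic for one frozen stimulus waveform."""
        return np.sum(x ** 2)

    def envelope_slope_cue(x, fs):
        """Envelope-slope statistic: mean absolute slope of the Hilbert envelope,
        normalized by the mean envelope so the cue is largely level-independent.
        A generic formulation for illustration; see the paper for the exact
        statistic used."""
        env = np.abs(hilbert(x))
        return np.mean(np.abs(np.diff(env))) * fs / np.mean(env)

Hit and false-alarm rates across the masker ensemble can then be correlated with these per-waveform statistics to ask which cue best predicts a given listener's detection pattern.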


International Journal of Audiology | 2015

The NTID speech recognition test: NSRT®

Joseph H. Bochner; Wayne M. Garrison; Karen A. Doherty

Abstract Objective: The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Design: Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Study sample: Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). Results: The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). Conclusion: The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents’ speech perception abilities.
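
The Rasch scaling used to calibrate the new items models the probability of a correct response as a logistic function of the difference between the respondent's ability and the item's difficulty; fitting this model places the new items and the original bank on one common scale, which is what makes the expanded pool usable by the adaptive test. A minimal statement of the standard model (not specific to the NSRT implementation):

    import math

    def rasch_p_correct(theta, item_difficulty):
        """One-parameter logistic (Rasch) model: probability that a respondent
        with ability `theta` answers an item of difficulty `item_difficulty`
        correctly. Standard model form; NSRT item parameters are not shown."""
        return 1.0 / (1.0 + math.exp(-(theta - item_difficulty)))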

Collaboration


Dive into Karen A. Doherty's collaborations.

Top Co-Authors

Robert A. Lutfi
University of Wisconsin-Madison

Lu Feng Shi
Long Island University

Junwen Mao
University of Rochester