Publications


Featured research published by Jennifer J. Lentz.


Frontiers in Systems Neuroscience | 2013

Auditory and cognitive factors underlying individual differences in aided speech-understanding among older adults

Larry E. Humes; Gary R. Kidd; Jennifer J. Lentz

This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance.
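The data-reduction step described above (factor analysis over many correlated measures, followed by multiple regression onto a global outcome) can be sketched as follows. This is an illustrative sketch on synthetic data, not the study's data or exact pipeline: principal components via SVD stand in for the principal-components factor analysis, and all dimensions, noise levels, and the simulated criterion are assumptions.

```python
# Illustrative sketch on synthetic data: reduce many correlated measures to a
# few component scores, then regress an outcome on those scores.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_measures = 98, 23            # e.g., 6 cognitive + 17 psychophysical
latent = rng.normal(size=(n_subjects, 6))  # hypothetical latent factors
X = latent @ rng.normal(size=(6, n_measures))
X += 0.3 * rng.normal(size=(n_subjects, n_measures))

# Principal components via SVD of the centered measure matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:6].T                    # scores on the first 6 components

# Hypothetical criterion: a single global speech-understanding factor
y = latent @ rng.normal(size=6) + 0.5 * rng.normal(size=n_subjects)

# Multiple regression of the criterion on the component scores
A = np.column_stack([np.ones(n_subjects), factors])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r_squared = 1 - resid.var() / y.var()      # proportion of variance accounted for
```

The component scores take the place of the raw, intercorrelated measures as regression predictors, which is the core of the approach the abstract describes.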


Journal of the Acoustical Society of America | 2002

Decision strategies of hearing-impaired listeners in spectral shape discrimination

Jennifer J. Lentz; Marjorie R. Leek

The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listeners' responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
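The weight-estimation logic described above can be sketched as follows, using simulated trials rather than the study's data: each trial's per-component level differences between the two 2AFC intervals are correlated with the listener's interval choice, and the resulting correlations are normalized into relative weights. The trial count, perturbation size, true weights, and decision model are all hypothetical.

```python
# Hedged sketch of 2AFC spectral-weight estimation on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_components = 500, 5
# Per-component level difference (dB), interval 1 minus interval 2
dlevel = rng.normal(0, 2, size=(n_trials, n_components))

true_weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # hypothetical listener
decision_var = dlevel @ true_weights + rng.normal(0, 1, n_trials)
response = (decision_var > 0).astype(float)           # 1 = chose interval 1

# Point-biserial correlation between the response and each component's
# level difference, normalized to relative weights
weights = np.array([np.corrcoef(response, dlevel[:, k])[0, 1]
                    for k in range(n_components)])
weights /= np.abs(weights).sum()
```

Components that drive the simulated listener's decisions more strongly yield larger response/level-difference correlations, which is what the estimated weighting function recovers.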


Journal of the Acoustical Society of America | 2003

Spectral shape discrimination by hearing-impaired and normal-hearing listeners

Jennifer J. Lentz; Marjorie R. Leek

The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners discriminated between a standard stimulus and a signal stimulus in which half of the standard components were decreased in level and half were increased in level. In one condition, the standard stimulus was the sum of six equal-amplitude tones (equal-SPL), and in another the standard stimulus was the sum of six tones at equal sensation levels re: audiometric thresholds for individual subjects (equal-SL). Spectral weights were estimated in conditions where the amplitudes of the individual tones were perturbed slightly on every presentation. Sensitivity was similar in all conditions for normal-hearing and hearing-impaired listeners. The presence of perturbation and equal-SL components increased thresholds for both groups, but only small differences in weighting strategy were measured between the groups depending on whether the equal-SPL or equal-SL condition was tested. The average data suggest that normal-hearing listeners may rely more on the central components of the spectrum whereas hearing-impaired listeners may have been more likely to use the edges. However, individual weighting functions were quite variable, especially for the HI listeners, perhaps reflecting difficulty in processing changes in spectral shape due to hearing loss. Differences in weighting strategy without changes in sensitivity suggest that factors other than spectral weights, such as internal noise or difficulty encoding a reference stimulus, also may dominate performance.


Attention Perception & Psychophysics | 2009

Independence and separability in the perception of complex nonspeech sounds

Noah H. Silbert; James T. Townsend; Jennifer J. Lentz

All sounds are multidimensional, yet the relationships among auditory dimensions have been studied only infrequently. General recognition theory (GRT; Ashby & Townsend, 1986) is a multidimensional generalization of signal detection theory and, as such, provides powerful tools well suited to the study of the relationships among perceptual dimensions. However, previous uses of GRT have been limited in serious ways. We present methods designed to overcome these limitations, and we use these methods to apply GRT to investigations of the relationships among auditory perceptual dimensions that previous work suggests are independent (frequency, duration) or not (fundamental frequency [f0], spectral shape). Results from Experiment 1 confirm that frequency and duration do not interact decisionally, and they extend this finding with evidence of perceptual independence. Results from Experiment 2 show that f0 and spectral shape tend to interact perceptually, decisionally, or both, and that perceptual interactions occur within, but not between, stimuli (i.e., the interactions suggest correlated noise across processing channels corresponding to perceptually separable dimensions). The results are discussed in relation to lower level sensory modeling and higher level cognitive and linguistic issues.


Journal of the Acoustical Society of America | 2006

Spectral-peak selection in spectral-shape discrimination by normal-hearing and hearing-impaired listeners

Jennifer J. Lentz

Spectral-shape discrimination thresholds were measured in the presence and absence of noise to determine whether normal-hearing and hearing-impaired listeners rely primarily on spectral peaks in the excitation pattern when discriminating between stimuli with different spectral shapes. Standard stimuli were the sum of 2, 4, 6, 8, 10, 20, or 30 equal-amplitude tones with frequencies fixed between 200 and 4000 Hz. Signal stimuli were generated by increasing and decreasing the levels of every other standard component. The function relating the spectral-shape discrimination threshold to the number of components (N) showed an initial decrease in threshold with increasing N and then an increase in threshold when the number of components reached 10 and 6, for normal-hearing and hearing-impaired listeners, respectively. The presence of a 50-dB SPL/Hz noise led to a 1.7 dB increase in threshold for normal-hearing listeners and a 3.5 dB increase for hearing-impaired listeners. Multichannel modeling and the relatively small influence of noise suggest that both normal-hearing and hearing-impaired listeners rely on the peaks in the excitation pattern for spectral-shape discrimination. The greater influence of noise in the data from hearing-impaired listeners is attributed to a poorer representation of spectral peaks.


Journal of the Acoustical Society of America | 2006

Phase effects in masking by harmonic complexes in birds.

Amanda M. Lauer; Robert J. Dooling; Marjorie R. Leek; Jennifer J. Lentz

Masking by harmonic complexes depends on the frequency content of the masker and its phase spectrum. Harmonic complexes created with negative Schroeder phases (component phases decreasing with increasing frequency) produce more masking than those with positive Schroeder phases (increasing phase) in humans, but not in birds. The masking differences in humans have been attributed to interactions between the masker phase spectrum and the phase characteristic of the basilar membrane. In birds, the similarity in masking by positive and negative Schroeder maskers, and reduced masking by cosine-phase maskers (constant phase), suggests a phase characteristic that does not change much along the basilar papilla. To evaluate this possibility, the rate of phase change across masker bandwidth was varied by systematically altering the Schroeder algorithm. Humans and three species of birds detected tones added in phase to a single component of a harmonic complex. As observed in earlier studies, the minimum amount of masking in humans occurred for positive phase gradients. However, minimum masking in birds occurred for a shallow negative phase gradient. These results suggest a cochlear delay in birds that is reduced compared to that found in humans, probably related to the shorter avian basilar epithelia.
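The systematically altered Schroeder algorithm described above can be sketched as follows. A common form of the Schroeder phase rule is phi_n = C * pi * n * (n - 1) / N for the nth of N harmonics; C = +1 and C = -1 give positive- and negative-Schroeder maskers, and intermediate values of C vary the rate of phase change across the masker bandwidth. The fundamental frequency, harmonic count, and sampling parameters here are assumptions.

```python
# Sketch of a Schroeder-phase harmonic complex with a scalable phase gradient.
import numpy as np

def schroeder_complex(f0, n_harmonics, c, fs=44100, dur=0.3):
    """Harmonic complex with phases phi_n = c * pi * n * (n - 1) / N."""
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        phi = c * np.pi * n * (n - 1) / n_harmonics
        x += np.cos(2 * np.pi * n * f0 * t + phi)
    return x / np.max(np.abs(x))              # normalize peak amplitude

neg = schroeder_complex(f0=100, n_harmonics=40, c=-1.0)   # negative Schroeder
pos = schroeder_complex(f0=100, n_harmonics=40, c=+1.0)   # positive Schroeder
```

Sweeping c between -1 and +1 produces the family of shallow-to-steep phase gradients used to probe the cochlear phase characteristic.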


Journal of the Acoustical Society of America | 1997

Sensitivity to changes in overall level and spectral shape: An evaluation of a channel model

Jennifer J. Lentz; Virginia M. Richards

Two experiments involving level and spectral shape discrimination that test an optimal channel model developed by Durlach et al. [J. Acoust. Soc. Am. 80, 63-72 (1986)] are described. The model specifies how the auditory system compares and/or combines intensity information in different frequency channels. In the first experiment, psychometric functions were obtained for the discrimination of changes in level and discrimination of changes in spectral shape for an eight-tone complex sound. A variety of different base spectral shapes were tested. In some conditions, level randomization was introduced to reduce the reliability of across-interval changes in level. Increasing the amount of level variation degraded performance for the level discrimination task but had no effect on the shape discrimination task. In all conditions, sensitivity to changes in spectral shape was superior to sensitivity to changes in level. Consequently, two models of central noise are evaluated in an attempt to explain these results: one in which central noise acts prior to the formation of the likelihood ratio and one in which central noise degrades the likelihood ratio. The former model is more successful in accounting for the data. In a second experiment, the detectability of a level increment to one component of a multitone complex was measured. The frequency content of the complex was varied by systematically removing six components from a 23-component complex. Thresholds were measured for increments at three different signal frequencies. A common trend in the data was that when there was a spectral gap directly above the signal frequency, thresholds were lowest. This result differs from the predictions of a simple channel model, and contrasts with results presented by Green and Berg [Q. J. Exp. Psychol. 43A, 449-458 (1991)].
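The central prediction of an optimal channel model of this kind can be illustrated briefly: when the internal noise in each frequency channel is independent and Gaussian, the ideal observer's combined sensitivity is the root-sum-of-squares of the per-channel d' values. This sketch shows only that standard combination rule, not the full model with its central-noise stages; the per-channel sensitivities are hypothetical.

```python
# Ideal-observer combination of per-channel sensitivities under independent
# Gaussian channel noise: d'_combined = sqrt(sum_i d'_i ** 2).
import math

def combined_dprime(channel_dprimes):
    """Root-sum-of-squares combination across independent channels."""
    return math.sqrt(sum(d * d for d in channel_dprimes))

# Hypothetical per-channel sensitivities for an eight-tone complex
per_channel = [0.5] * 8
print(combined_dprime(per_channel))   # sqrt(8 * 0.25) = sqrt(2) ~ 1.414
```

Level randomization degrades across-interval level cues without touching the across-channel profile comparison, which is why it hurt level discrimination but not shape discrimination in the experiment above.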


Journal of the Acoustical Society of America | 2007

Variation in spectral-shape discrimination weighting functions at different stimulus levels and signal strengths.

Jennifer J. Lentz

This study evaluated whether weights for spectral-shape discrimination depend on overall stimulus level and signal strength (the degree of spectral-shape change between two stimuli). Five listeners discriminated between standard stimuli that were the sum of six equal-amplitude tones and signal stimuli created by decreasing the amplitudes of three low-frequency components and increasing the amplitudes of three high-frequency components. Weighting functions were influenced by stimulus level in that the relative contribution of the low-frequency (decremented) components to the high-frequency (incremented) components decreased with increasing stimulus level. Although individual variability was present, a follow-up experiment suggested that the level dependence was due to greater reliance on high-frequency components rather than incremented components. Excitation-pattern analyses indicated that the level dependence is primarily, but not solely, driven by cochlear factors. In general, different signal strengths had no effect on the weighting functions (when normalized), but two of the five listeners showed variability in the shape of the weighting function across signal strengths. Results suggest that the effects of stimulus level on weighting functions and individual variability in the shapes of the weighting functions should be considered when comparing weighting functions across conditions and groups that might require different stimulus levels and signal strengths.


Frontiers in Human Neuroscience | 2014

A new perspective on binaural integration using response time methodology: super capacity revealed in conditions of binaural masking release

Jennifer J. Lentz; Yuan He; James T. Townsend

This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)—most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy.
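The capacity coefficient referred to above can be sketched as the ratio of cumulative hazard functions estimated from RT distributions, C(t) = H_2(t) / (H_L(t) + H_R(t)) with H(t) = -ln(1 - F(t)), in the style of Townsend-type workload-capacity analysis. The RT samples below are simulated (exponential-plus-shift, with assumed parameters), not the study's data; C(t) near 1 indicates unlimited capacity, below 1 limited capacity, and above 1 super capacity.

```python
# Sketch of a capacity-coefficient computation from simulated RT samples.
import numpy as np

def cumulative_hazard(rts, t):
    """Estimate H(t) = -ln(1 - F(t)) from a sample of reaction times."""
    F = np.mean(np.asarray(rts)[:, None] <= t, axis=0)   # empirical CDF at t
    F = np.clip(F, 0, 0.999)                             # avoid log(0)
    return -np.log(1 - F)

rng = np.random.default_rng(2)
rt_left = rng.exponential(0.4, 2000) + 0.2   # hypothetical single-ear RTs (s)
rt_right = rng.exponential(0.4, 2000) + 0.2
rt_both = rng.exponential(0.2, 2000) + 0.2   # hypothetical two-ear RTs

t = np.linspace(0.25, 1.5, 50)
C = cumulative_hazard(rt_both, t) / (cumulative_hazard(rt_left, t)
                                     + cumulative_hazard(rt_right, t))
```

Because the simulated two-ear rate is exactly the sum of the single-ear rates, this example hovers near C(t) = 1; the dichotic super-capacity result above corresponds to C(t) rising reliably above 1.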


Journal of the Acoustical Society of America | 2010

Effect of fast-acting compression on modulation detection interference for normal hearing and hearing impaired listeners.

Yi Shen; Jennifer J. Lentz

To determine the effects of hearing loss and fast-acting compression on auditory grouping based on across-frequency modulation, modulation detection interference (MDI) was measured in listeners with normal hearing and hearing loss. MDI, the increase in the amplitude-modulation detection threshold of a target presented with an interferer distant in frequency, was measured using a 500-Hz target and a 2140-Hz interferer, both modulated with narrow-band noises of the same bandwidth. The two modulated tones were presented at equal loudness levels to listeners with normal hearing and hearing loss in the absence (Exp. 1) and in the presence (Exp. 2) of fast-acting compression applied to the interferer. Modulation detection thresholds increased with increasing modulation depth of the interferer by similar amounts for the two groups of listeners, suggesting that across-frequency grouping based on amplitude modulation is not altered by hearing impairment. Compression provided an additional increase in thresholds for both groups, indicating that compression algorithms might alter across-frequency grouping cues. A third experiment provides partial support for the idea that compression sharpens the onsets after each envelope valley: it found somewhat greater interference produced by square-wave modulation than by sine-wave modulation at larger interferer modulation depths.
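The target/interferer configuration described above can be sketched with amplitude-modulated tones at the stated carrier frequencies. The study used narrow-band noise modulators; a sinusoidal modulator is substituted here for simplicity, and the modulation rate, depths, and duration are assumptions.

```python
# Minimal sketch (assumed parameters) of a modulated target plus interferer.
import numpy as np

fs, dur = 44100, 0.5
t = np.arange(int(fs * dur)) / fs

def sam_tone(fc, fm, m):
    """Carrier fc (Hz) amplitude-modulated at rate fm (Hz) with depth m."""
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

target = sam_tone(fc=500, fm=10, m=0.3)       # 500-Hz target (fm, m assumed)
interferer = sam_tone(fc=2140, fm=10, m=0.8)  # 2140-Hz interferer
stimulus = target + interferer
```

Raising the interferer depth m is the manipulation that drives the threshold increases reported above.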

Collaboration


Dive into Jennifer J. Lentz's collaboration.

Top Co-Authors

Marjorie R. Leek
Walter Reed Army Medical Center

Yi Shen
University of California

Yuan He
Indiana University Bloomington

James T. Townsend
Indiana University Bloomington

Kimberly G. Skinner
Indiana University Bloomington

Susie Valentine
Indiana University Bloomington

Michelle R. Molis
University of Texas at Austin