Publication


Featured research published by Elizabeth Stangl.


Ear and Hearing | 2014

Measuring listening effort: driving simulator versus simple dual-task paradigm

Yu-Hsiang Wu; Nazan Aksan; Matthew Rizzo; Elizabeth Stangl; Xuyang Zhang; Ruth A. Bentler

Objectives: The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real world dual-task paradigm and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design: The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results: Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding showing that our older (56 to 85 years old) participants’ better speech recognition performance did not result in reduced listening effort was not consistent with literature that evaluated younger (approximately 20 years old), normal hearing adults. Because of this, a follow-up study was conducted. In the follow-up study, the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated on younger adults with normal hearing. Contrary to findings with older participants, the results indicated that the directional technology significantly improved performance in both speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing may not be fully translated to older listeners with hearing impairment.


International Journal of Audiology | 2013

The equivalence of acceptable noise level (ANL) with English, Mandarin, and non-semantic speech: A study across the U.S. and Taiwan

Hsu-Chueh Ho; Yu-Hsiang Wu; Shih-Hsuan Hsiao; Elizabeth Stangl; Emily J. Lentz; Ruth A. Bentler

Abstract Objective: Acceptable noise level (ANL) determines the maximum noise level that a listener is willing to accept while listening to speech. The objective of this study was to determine the equivalence of ANL measured using different speech stimuli for native speakers who lived in the U.S. and Taiwan. Design: ANLs were measured using English, Mandarin, and the international speech test signal (ISTS) at each site. The same babble noise was used across speech stimuli. The ANLs were considered equivalent if the difference was unlikely to be greater than 3 dB. Study sample: Thirty adults with normal hearing were recruited at each site. Results: For each site, the equivalence test suggested that the native-language and foreign-language ANLs were equivalent. Between the two sites, ANLs measured using the listener’s native language were also equivalent. Although the ISTS ANL obtained within each site was equivalent to, and highly correlated to, the native-language ANL, the data were unable to confirm the equivalence of the ISTS ANLs obtained from the two sites. Conclusions: The results suggested the possibility of directly comparing ANL measures carried out in different countries using different languages. However, it remains unclear if the ISTS can serve as an international ANL stimulus.
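
To make the 3 dB equivalence criterion concrete (this formalization is a sketch of a standard equivalence test, not necessarily the exact procedure used in the study): two stimuli would be declared equivalent when the confidence interval for the mean ANL difference lies entirely inside the predefined margin,

\[ \left[\, \bar{d} - t_{1-\alpha,\,n-1}\,\mathrm{SE}(\bar{d}),\;\; \bar{d} + t_{1-\alpha,\,n-1}\,\mathrm{SE}(\bar{d}) \,\right] \subset (-3\ \mathrm{dB},\ +3\ \mathrm{dB}), \]

where \(\bar{d}\) is the mean within-listener ANL difference between the two stimuli and \(\mathrm{SE}(\bar{d})\) is its standard error.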


Ear and Hearing | 2016

Psychometric Functions of Dual-Task Paradigms for Measuring Listening Effort

Yu-Hsiang Wu; Elizabeth Stangl; Xuyang Zhang; Joanna Perkins; Emily Eilers

Objectives: The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR). Design: Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms wherein the participants performed a primary speech recognition task simultaneously with a secondary task were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). The reaction time (RT) quantified the performance of the secondary task. Results: For both participant groups and for both easy and hard secondary tasks, the curves that described the RT as a function of SNR were peak shaped. The RT increased as SNR changed from favorable to intermediate SNRs, and then decreased as SNRs moved from intermediate to unfavorable SNRs. The RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine if the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, YNH participants were recruited (n = 25; experiment 3) and dual-task measures, wherein the SNR was varied from trial to trial (i.e., nonblocked), were conducted. The results indicated that, similar to the first two experiments, the RT curves had a peak shape. Conclusions: Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both YNH and older adults with hearing impairment participants and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked). The shorter RT at the unfavorable SNRs (speech intelligibility < 30%) possibly reflects that the participants experienced cognitive overload and/or disengaged themselves from the listening task. The implication of using the dual-task paradigm as a listening effort measure is discussed.


Ear and Hearing | 2013

The effect of hearing aid signal-processing schemes on acceptable noise levels: perception and prediction

Yu-Hsiang Wu; Elizabeth Stangl

Objectives: The acceptable noise level (ANL) test determines the maximum noise level that an individual is willing to accept while listening to speech. The first objective of the present study was to systematically investigate the effect of wide dynamic range compression processing (WDRC), and its combined effect with digital noise reduction (DNR) and directional processing (DIR), on ANL. Because ANL represents the lowest signal-to-noise ratio (SNR) that a listener is willing to accept, the second objective was to examine whether the hearing aid output SNR could predict aided ANL across different combinations of hearing aid signal-processing schemes. Design: Twenty-five adults with sensorineural hearing loss participated in the study. ANL was measured monaurally in two unaided and seven aided conditions, in which the status of the hearing aid processing schemes (enabled or disabled) and the location of noise (front or rear) were manipulated. The hearing aid output SNR was measured for each listener in each condition using a phase-inversion technique. The aided ANL was predicted by unaided ANL and hearing aid output SNR, under the assumption that the lowest acceptable SNR at the listener’s eardrum is a constant across different ANL test conditions. Results: Study results revealed that, on average, WDRC increased (worsened) ANL by 1.5 dB, while DNR and DIR decreased (improved) ANL by 1.1 and 2.8 dB, respectively. Because the effects of WDRC and DNR on ANL were opposite in direction but similar in magnitude, the ANL of linear/DNR-off was not significantly different from that of WDRC/DNR-on. The results further indicated that the pattern of ANL change across different aided conditions was consistent with the pattern of hearing aid output SNR change created by processing schemes. Conclusions: Compared with linear processing, WDRC creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by DNR, regardless of microphone mode. The hearing aid output SNR derived using the phase-inversion technique can predict aided ANL across different combinations of signal-processing schemes. These results suggest a close relationship between aided ANL, signal-processing scheme, and hearing aid output SNR.
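
As a worked illustration of the prediction logic summarized above (this formulation is a paraphrase of the stated assumption, not an equation quoted from the paper): if the lowest acceptable SNR at the eardrum is constant, any SNR change introduced by the hearing aid trades off directly against the SNR the listener will accept at the input, so the predicted aided ANL is

\[ \widehat{\mathrm{ANL}}_{\text{aided}} = \mathrm{ANL}_{\text{unaided}} - \left(\mathrm{SNR}_{\text{output}} - \mathrm{SNR}_{\text{input}}\right). \]

Under this sketch, a processing scheme that improves the output SNR by, say, 3 dB would be predicted to lower (improve) the aided ANL by about 3 dB, which is the kind of correspondence between output-SNR change and ANL change reported above.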


Journal of The American Academy of Audiology | 2014

The effect of audiovisual and binaural listening on the acceptable noise level (ANL): establishing an ANL conceptual model

Yu-Hsiang Wu; Elizabeth Stangl; Carol Pang; Xuyang Zhang

BACKGROUND Little is known regarding the acoustic features of a stimulus used by listeners to determine the acceptable noise level (ANL). Features suggested by previous research include speech intelligibility (noise is unacceptable when it degrades speech intelligibility to a certain degree; the intelligibility hypothesis) and loudness (noise is unacceptable when the speech-to-noise loudness ratio is poorer than a certain level; the loudness hypothesis). PURPOSE The purpose of the study was to investigate if speech intelligibility or loudness is the criterion feature that determines ANL. To achieve this, test conditions were chosen so that the intelligibility and loudness hypotheses would predict different results. In Experiment 1, the effect of audiovisual (AV) and binaural listening on ANL was investigated; in Experiment 2, the effect of interaural correlation (ρ) on ANL was examined. RESEARCH DESIGN A single-blinded, repeated-measures design was used. STUDY SAMPLE Thirty-two and twenty-five younger adults with normal hearing participated in Experiments 1 and 2, respectively. DATA COLLECTION AND ANALYSIS In Experiment 1, both ANL and speech recognition performance were measured using the AV version of the Connected Speech Test (CST) in three conditions: AV-binaural, auditory only (AO)-binaural, and AO-monaural. Lipreading skill was assessed using the Utley lipreading test. In Experiment 2, ANL and speech recognition performance were measured using the Hearing in Noise Test (HINT) in three binaural conditions, wherein the interaural correlation of noise was varied: ρ = 1 (N(o)S(o) [a listening condition wherein both speech and noise signals are identical across two ears]), -1 (NπS(o) [a listening condition wherein speech signals are identical across two ears whereas the noise signals of two ears are 180 degrees out of phase]), and 0 (N(u)S(o) [a listening condition wherein speech signals are identical across two ears whereas noise signals are uncorrelated across ears]). The results were compared to the predictions made based on the intelligibility and loudness hypotheses. RESULTS The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the N(o)S(o), NπS(o), and N(u)S(o) conditions negated the intelligibility hypothesis because binaural processing benefit (NπS(o) re: N(o)S(o), and N(u)S(o) re: N(o)S(o)) in ANL was not correlated to that in HINT performance. Instead, the results somewhat supported the loudness hypothesis because the pattern of ANL results across the three conditions (N(o)S(o) ≈ NπS(o) ≈ N(u)S(o) ANL) was more consistent with what was predicted by the loudness hypothesis (N(o)S(o) ≈ NπS(o) < N(u)S(o) ANL) than by the intelligibility hypothesis (NπS(o) < N(u)S(o) < N(o)S(o) ANL). The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated to that in speech recognition performance, and (2) the pattern of ANL results across conditions (binaural < monaural ANL) was not consistent with the prediction made based on previous binaural loudness summation research (binaural ≥ monaural ANL). CONCLUSIONS The study suggests that listeners may use multiple acoustic features to make ANL judgments. 
The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions.


International Journal of Audiology | 2013

Hearing-aid users’ voices: A factor that could affect directional benefit

Yu-Hsiang Wu; Elizabeth Stangl; Ruth A. Bentler

Abstract Objective: Backward-facing directional processing (Back-DIR) is an algorithm that employs an anti-cardioid directivity pattern to enhance speech arriving from behind the listener. An experiment originally designed to evaluate Back-DIR, together with its follow-up experiment, is reported to illustrate how hearing-aid users' voices could affect directional benefit. Design: Speech recognition performance was measured in a speech-180°/noise-0° configuration, with the hearing aids programmed to either Back-DIR or omnidirectional processing. In the original experiment, the conventional hearing-in-noise test (HINT) was used, wherein listeners repeated the sentences they heard. In the follow-up experiment, a modified HINT was used, wherein a carrier phrase was presented before each sentence. Study sample: Fifteen adults with sensorineural hearing loss participated in both experiments. Results: A significant Back-DIR benefit (relative to omnidirectional processing) was observed in the follow-up experiment but not in the original experiment. Conclusions: In the original experiment, the hearing aids were affected by the listeners' voices, such that Back-DIR was not always activated when the target speech was presented. In the follow-up experiment, the listeners' voice effects were eliminated because the carrier phrase activated Back-DIR before each sentence was presented. The results suggest that the effect of hearing-aid technologies is highly dependent on the characteristics of the listening conditions.


Journal of The American Academy of Audiology | 2017

Is the Device-Oriented Subjective Outcome (DOSO) Independent of Personality?

Yu-Hsiang Wu; Kelsey Dumanch; Elizabeth Stangl; Christi W. Miller; Kelly L. Tremblay; Ruth A. Bentler

Background: Self-report questionnaires are a frequently used method of evaluating hearing aid outcomes. Studies have shown that personality can account for 5–20% of the variance in responses to self-report measures. As a result, these influences can impact results and limit their generalizability when the purpose of the study is to examine the technological merit of hearing aids. To reduce personality influences on self-report outcome data, the Device-Oriented Subjective Outcome (DOSO) was developed. The DOSO is meant to demonstrate outcomes of the amplification device relatively independent of the individual's personality. Still, it is unknown whether the DOSO achieves this goal. Purpose: The purpose of this study was to examine the relationship between personality and the DOSO. The relationship between personality and several widely used hearing-related questionnaires was also examined. Research Design: This is a nonexperimental study using a correlational design. Study Sample: A total of 119 adult hearing aid wearers participated in the study. Data Collection and Analysis: The NEO Five-Factor Inventory was used to measure five personality traits (Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness). The initial (unaided) hearing disablement, residual (aided) hearing disablement, and hearing aid benefit and satisfaction were measured using the DOSO, Hearing Handicap Inventory for the Elderly/Adult, Abbreviated Profile of Hearing Aid Benefit, and Satisfaction with Amplification in Daily Life. The relationship between personality and each questionnaire was examined using a correlation analysis. Results: All of the DOSO subscales were found to be significantly correlated with personality, regardless of whether age and better-ear hearing thresholds were controlled. Individuals who reported poorer hearing aid outcomes tended to have higher Neuroticism scores, while those who scored higher in Extraversion, Openness, and Agreeableness were more likely to report better outcomes. Across DOSO subscales, the maximum variance explained by personality traits ranged from 6% to 11%. Consistent with the literature, approximately 3–18% of the variance of the other hearing-related questionnaires was attributable to personality. Conclusions: The degree to which personality affects the DOSO is similar to that of other hearing-related questionnaires. Although the variance accounted for by personality is not large, researchers and clinicians should not assume that the results of the DOSO are independent of personality.


Journal of The American Academy of Audiology | 2015

Construct Validity of the Ecological Momentary Assessment in Audiology Research

Yu-Hsiang Wu; Elizabeth Stangl; Xuyang Zhang; Ruth A. Bentler


Journal of The American Academy of Audiology | 2013

The effect of hearing aid technologies on listening in an automobile

Yu-Hsiang Wu; Elizabeth Stangl; Ruth A. Bentler; Rachel W. Stanziola


Ear and Hearing | 2017

Characteristics of Real-World Signal to Noise Ratios and Speech Listening Situations of Older Adults With Mild to Moderate Hearing Loss

Yu-Hsiang Wu; Elizabeth Stangl; Octav Chipara; Syed Shabih Hasan; Anne Welhaven; Jacob Oleson

Collaboration


Dive into Elizabeth Stangl's collaborations.

Top Co-Authors

Matthew Rizzo

University of Nebraska Medical Center
