Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Z. Ellen Peng is active.

Publication


Featured research published by Z. Ellen Peng.


Journal of the Acoustical Society of America | 2018

Effects of non-individual head-related transfer functions and visual mismatch on speech recognition in virtual acoustic environments

Kristi M. Ward; Z. Ellen Peng; Tina M. Grieco-Calub

There is widespread research and clinical interest in quantifying how the acoustics of real-world environments, such as background noise and reverberation, impede a listener’s ability to recognize speech. Conventional methods used to quantify these effects include dichotic listening via headphones in sound-attenuated booths or loudspeakers in anechoic or low-reverberant environments, which lack the capability of manipulating room acoustics. Using a state-of-the-art Variable Room Acoustics System housed in a virtual sound room (ViSoR), this study aims to systematically assess the effects of non-individual head-related transfer functions (HRTFs) and mismatched visual perception on speech recognition in virtual acoustic environments. Young adults listened to and repeated sentences presented amidst a co-located two-talker speech competitor with reverberation times ranging from 0.4 to 1.25 s. Sentences were presented in three listening conditions: through a loudspeaker array in ViSoR with the participants’ own HRTFs (Condition 1); via headphones in a sound-attenuated booth with non-individual HRTFs (Condition 2); and using the same binaural reproduction as Condition 2 in ViSoR (Condition 3). Condition 3 serves as a control condition, allowing us to quantify the separate effects of non-individual HRTFs and visual mismatch on speech recognition. Discussion will address the validity and use of virtual acoustics in research and clinical settings.
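For readers unfamiliar with binaural reproduction, the headphone conditions described above rest on convolving a source signal with a left/right head-related impulse response (HRIR) pair. The sketch below illustrates that step only, with randomly generated placeholder HRIRs and no room simulation; it is not the ViSoR rendering pipeline used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to produce
    a two-channel binaural signal (static source, anechoic HRIRs)."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Placeholder data: a 1 s noise burst and dummy 256-tap HRIRs. In practice the
# HRIRs would come from a measured set (individual or mannequin), and room
# reverberation would be added by a separate room-acoustics simulation.
fs = 44100
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs)
hrir_left = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)
hrir_right = rng.standard_normal(256) * np.exp(-np.arange(256) / 32)

binaural = render_binaural(mono, hrir_left, hrir_right)
print(binaural.shape)  # (2, 44100 + 256 - 1)
```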


Journal of the Acoustical Society of America | 2018

Acoustical analysis of preschool classrooms

Tina M. Grieco-Calub; Z. Ellen Peng

The present study explores acoustical parameters, including unoccupied and occupied noise levels and reverberation time (RT), in typical preschool classrooms located in the northern suburbs of Chicago. The study was motivated by the following observations: (1) preschool classrooms are often established in buildings that were not initially constructed to be learning spaces; (2) poor classroom acoustics interfere with skills related to academic outcomes including speech perception, serial recall, literacy, and cognitive skills; and (3) younger children are at greater risk for impaired performance in acoustically complex environments. The study was designed to determine whether these preschool classrooms meet existing classroom acoustics standards (ANSI S12.60-2010). Measurements were made with microphones positioned in locations of the classrooms where children typically engage in activities during school season to reflect realistic building operation situations. Unoccupied and occupied noises were recorded over long durations, with intensity and spectral analysis of the recordings conducted off-line. RTs were measured and quantified at the same microphone positions. Preliminary results suggested that none of the preschool classrooms met the recommended unoccupied noise levels, while 3/11 classrooms were found to have longer than recommended RTs. Occupied noise in these preschool classrooms, and implications for learning, will be discussed.
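As background on the RT measurements reported above, reverberation time is commonly estimated from a measured room impulse response via Schroeder backward integration. The sketch below is a minimal illustration of that method on a synthetic impulse response; it is not the measurement chain used in the study, and the ANSI S12.60-2010 limits noted in the comment are quoted from memory and should be checked against the standard itself.

```python
import numpy as np

def rt60_from_impulse_response(h, fs):
    """Estimate RT60 from a room impulse response via Schroeder backward
    integration and a linear fit over the -5 to -25 dB decay range (T20),
    extrapolated to a 60 dB decay."""
    energy = np.cumsum(h[::-1] ** 2)[::-1]          # Schroeder integral
    edc_db = 10 * np.log10(energy / energy[0])      # energy decay curve in dB
    t = np.arange(len(h)) / fs
    fit_region = (edc_db <= -5) & (edc_db >= -25)
    slope, intercept = np.polyfit(t[fit_region], edc_db[fit_region], 1)
    return -60.0 / slope                            # seconds to decay by 60 dB

# Synthetic exponentially decaying noise as a stand-in for a measured impulse
# response with a nominal RT of about 0.8 s.
fs = 16000
t = np.arange(int(1.5 * fs)) / fs
rng = np.random.default_rng(1)
h = rng.standard_normal(len(t)) * 10 ** (-3 * t / 0.8)   # -60 dB at t = 0.8 s

print(f"Estimated RT60: {rt60_from_impulse_response(h, fs):.2f} s")
# ANSI S12.60-2010, as commonly cited, recommends RT <= 0.6 s and unoccupied
# background noise <= 35 dBA for small core learning spaces; verify against
# the standard before applying these limits.
```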


Journal of the Acoustical Society of America | 2018

Use of non-individualized head-related transfer functions to measure spatial release from masking in children with normal hearing

Z. Ellen Peng; Ruth White; Sara Misurelli; Keng Moua; Alan Kan; Ruth Y. Litovsky

Spatial hearing studies with children have typically been conducted using loudspeakers in laboratories. However, loudspeaker arrays are rare in clinics due to high cost and technical set-up requirements. The use of virtual auditory space (VAS) with non-individualized head-related transfer functions (HRTFs) can increase the feasibility of assessing spatial hearing abilities in clinical settings. A novel paradigm for measuring spatial release from masking (SRM) was developed using non-individualized HRTFs. This paradigm measures the minimum angular separation needed between target and masker to achieve a 20% increase in target speech intelligibility. First, the 50% speech reception threshold (SRT) was measured with target and masker co-located to one side. Then, the masker position was adaptively changed to achieve 70.7% intelligibility while maintaining the signal-to-noise ratio at the level of the co-located SRT. To verify the use of non-individualized HRTFs, normal-hearing children were tested (1) using a loudspeaker array and (2) in headphone-based VAS created using KEMAR HRTFs measured in the same setup as (1). Preliminary results showed that co-located SRTs and the target-masker angular separation needed to achieve a 20% SRM were similar in the loudspeaker array and in headphone-based VAS. This suggests that non-individualized HRTFs might be used in an SRM task for clinical testing.
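The 70.7% intelligibility target mentioned above is the convergence point of a standard 2-down/1-up transformed staircase (Levitt, 1971). The sketch below shows a generic staircase of this kind run against a simulated listener; the psychometric function, step size, and stopping rule are placeholders rather than the study's actual adaptive procedure.

```python
import numpy as np

def two_down_one_up(run_trial, start_level, step, n_reversals=8):
    """Generic 2-down/1-up staircase: the level decreases after two consecutive
    correct responses and increases after any incorrect response, converging
    on the 70.7% correct point (Levitt, 1971)."""
    level, correct_streak, direction = start_level, 0, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        if run_trial(level):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> make task harder
                correct_streak = 0
                if direction == +1:
                    reversal_levels.append(level)
                direction = -1
                level -= step
        else:                                # one wrong -> make task easier
            correct_streak = 0
            if direction == -1:
                reversal_levels.append(level)
            direction = +1
            level += step
    return np.mean(reversal_levels[-6:])     # average the last reversals

# Simulated listener: probability correct follows a logistic psychometric
# function of target-masker angular separation (placeholder parameters).
rng = np.random.default_rng(2)
def run_trial(separation_deg, midpoint=20.0, slope=0.2):
    p = 1 / (1 + np.exp(-slope * (separation_deg - midpoint)))
    return rng.random() < p

estimate = two_down_one_up(run_trial, start_level=60.0, step=5.0)
print(f"Estimated separation for ~70.7% correct: {estimate:.1f} deg")
```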


Journal of the Acoustical Society of America | 2018

Restoring binaural and spatial hearing in cochlear implant users

Ruth Y. Litovsky; Alan Kan; Tanvi Thakkar; Sean R. Anderson; Z. Ellen Peng; Thibaud Leclère

Adults and children who receive bilateral cochlear implants (BiCIs) have the potential to benefit from the integration of inputs arriving at the brain from both ears. Several factors play a key role in determining if patients will demonstrate binaural sensitivity. We are exploring these factors using two experimental approaches. In the first approach, BiCI users receive pulsatile stimulation to specific pairs of electrodes using research processors that synchronize stimulation with fidelity. We vary stimulus parameters such as temporal fine structure and envelope cues, places of stimulation along the cochleae, and number of electrodes to find parameters that maximize sensitivity to interaural differences for each patient. We also investigate the role of the electrode-neuron interface, which is affected by numerous factors including neural health. In a second stimulation approach, we use the clinical speech processor to deliver binaural stimulation designed specifically for each patient based on their clinical MAP. In these studies, we are using both standard psychophysics and eye gaze paradigms to understand the underlying processing involved in binaural and spatial hearing. This combined approach is enabling us to design multi-channel, multi-rate stimulation strategies aimed at restoring binaural sensitivity and preserving speech understanding.
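The interaural differences referred to above are interaural time differences (ITDs) and interaural level differences (ILDs). As a minimal illustration of how these two cues can be estimated from a two-channel signal, the sketch below uses cross-correlation for ITD and an RMS ratio for ILD on a toy noise burst; it is not the lab's analysis code, and all stimulus values are placeholders.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def interaural_cues(left, right, fs):
    """Estimate ITD (seconds, via the cross-correlation peak; positive means
    the left-ear signal leads) and ILD (dB, via the RMS level ratio)."""
    xcorr = correlate(left, right, mode="full")
    lags = correlation_lags(len(left), len(right), mode="full")
    itd = -lags[np.argmax(xcorr)] / fs       # sign flip so +ITD = left leads
    ild = 20 * np.log10(np.sqrt(np.mean(left ** 2)) /
                        np.sqrt(np.mean(right ** 2)))
    return itd, ild

# Toy binaural pair: a noise burst delayed by 0.5 ms and attenuated by 6 dB
# at the right ear (placeholder values, not patient data).
fs = 48000
rng = np.random.default_rng(3)
n = int(0.2 * fs)
burst = rng.standard_normal(n) * np.hanning(n)
delay = int(round(0.0005 * fs))
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst]) * 10 ** (-6 / 20)

itd, ild = interaural_cues(left, right, fs)
print(f"ITD ~ {itd * 1e6:.0f} us, ILD ~ {ild:.1f} dB")
```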


Journal of the Acoustical Society of America | 2017

Investigating processing delay in interaural time difference discrimination by normal-hearing children

Z. Ellen Peng; Taylor N. Fields; Ruth Y. Litovsky

Recent work suggests that interaural time difference (ITD) thresholds are adult-like by 8-10 years of age in children with normal hearing (NH). If processing time is considered, however, we hypothesize that the ability to successfully extract ITDs is not fully mature in children. A novel paradigm was designed to simultaneously measure eye gaze with an eye tracker during an ITD discrimination task with mouse-click responses. Stimuli were 4 kHz transposed tones modulated at 128 Hz, tested with a 3-interval, 2-alternative forced choice task (left- or right-leading ITDs) with feedback. During each trial, gaze position on the computer screen was simultaneously recorded from stimulus onset to the time when a mouse click indicated either a left or right response. Processing times were extracted from the eye gaze data and compared with those from young NH adults. This presentation will focus on the developmental differences observed when threshold estimation is used versus when processing time is assessed via eye gaze meas...
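A transposed tone of the kind used above imposes a low-frequency temporal envelope on a high-frequency carrier, typically by half-wave rectifying and low-pass filtering the modulator before multiplying it with the carrier (after van de Par and Kohlrausch). The sketch below generates such a stimulus and applies a whole-waveform ITD; the filter settings, duration, and delay method are illustrative assumptions, not the study's exact stimulus generation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def transposed_tone(fc, fm, dur, fs, lowpass_hz=2000):
    """Generate a transposed tone: half-wave rectified low-frequency
    modulator, low-pass filtered, imposed on a high-frequency carrier."""
    t = np.arange(int(dur * fs)) / fs
    modulator = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)  # half-wave rectify
    b, a = butter(4, lowpass_hz / (fs / 2))                  # smooth the envelope
    envelope = filtfilt(b, a, modulator)
    return envelope * np.sin(2 * np.pi * fc * t)

def with_itd(signal, itd_s, fs):
    """Return a (left, right) pair with one channel delayed by itd_s;
    positive itd_s gives a left-leading, whole-waveform ITD."""
    d = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(d), signal])
    padded = np.concatenate([signal, np.zeros(d)])
    return (padded, delayed) if itd_s >= 0 else (delayed, padded)

fs = 44100
stim = transposed_tone(fc=4000, fm=128, dur=0.3, fs=fs)  # 4 kHz carrier, 128 Hz mod
left, right = with_itd(stim, itd_s=200e-6, fs=fs)        # 200 us left-leading ITD
print(left.shape, right.shape)
```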


Journal of the Acoustical Society of America | 2017

Influence of multiple moving noise distractors on spatial release from masking

Rhoddy Viveros; Z. Ellen Peng; Janina Fels

Previous studies on speech in noise have often been limited to stationary sound sources in the virtual acoustic scene. Moving toward a more realistic acoustic environment, this study aims to expand knowledge of spatial release from masking (SRM) involving moving noise. Current mathematical models describe the contributions of distractor asymmetry (on the same or different sides as the target) and angular separation from the target in predicting SRM (e.g., Bronkhorst, 2000). In a previous study, we quantified SRM from a single distractor traveling along a 90° trajectory. In this study, we add a second distractor to investigate how speech understanding is affected by moving distractors along various trajectory angles, where only binaural cues are available. A speech-in-noise test with normal-hearing adults is performed using virtual binaural reproduction. Listeners are asked to identify words spoken by a target in front (0° azimuth) in the presence of babble-noise distractors. Speech reception thresholds at 50% inte...
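For reference, spatial release from masking is conventionally quantified as the improvement in speech reception threshold (SRT) when target and distractors are spatially separated relative to the co-located configuration; in generic notation (not Bronkhorst's specific model):

\[
\mathrm{SRM} = \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}}
\]

Positive values (in dB) indicate lower, i.e. better, thresholds with spatial separation.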


Journal of the Acoustical Society of America | 2016

Effect of room acoustics on speech perception by children with hearing loss

Z. Ellen Peng; Florian Pausch; Janina Fels

To study speech perception of children with hearing loss in virtual acoustic environments (VAE), a pair of research hearing aids was previously integrated into a real-time dynamic binaural reproduction system. The auralization included simulations of room acoustics using individualized head-related and hearing aid-related transfer functions (HRTFs and HARTFs). In this study, a release from masking paradigm by Cameron and Dillon (2007) was adapted to German to investigate speech intelligibility in children fitted with hearing aids under realistic classroom acoustics. When immersed in the VAE, each child was asked to repeat sentences spoken by a target talker always located at 0° azimuth, while two distractor talkers were continuously telling unfamiliar Grimm stories. The speech reception threshold (SRT) at 50% intelligibility was measured adaptively by changing the target talker speech level. A total of eight conditions were tested with each child by changing spatial cues (target-distractor collocated versus ...


Archive | 2018

Assessment of individual head-related transfer function and undirected head movements of normal listeners in a moving speech-in-noise task using virtual acoustics

Rhoddy Viveros Munoz; Janina Fels; Z. Ellen Peng


Journal of the Acoustical Society of America | 2018

Spatial release from masking in reverberation for children and adults with normal hearing

Z. Ellen Peng; Florian Pausch; Janina Fels


Journal of the Acoustical Society of America | 2018

Investigating individual susceptibility to the detrimental effects of background noise and reverberation in simulated complex acoustic environments

Kristi M. Ward; Z. Ellen Peng; Maryam Landi; Andrew Burleson; Pamela E. Souza; Tina M. Grieco-Calub

Collaboration


Dive into Z. Ellen Peng's collaborations.

Top Co-Authors

Janina Fels
RWTH Aachen University

Ruth Y. Litovsky
University of Wisconsin-Madison

Alan Kan
University of Wisconsin-Madison

Keng Moua
University of Wisconsin-Madison

Ruth White
University of Wisconsin-Madison