Yingjiu Nie
James Madison University
Publications
Featured research published by Yingjiu Nie.
Frontiers in Neuroscience | 2014
Yingjiu Nie; Yang Zhang; Peggy B. Nelson
The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation, with manipulations introduced on the last stimulus. The last B burst was delayed for 50% of the sequences and not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: because spectral information available through a CI device or simulation is substantially degraded, CI listeners may require more attention to achieve stream segregation.
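For illustration, a minimal sketch of how such an alternating A/B burst sequence might be constructed follows. The burst durations, band edges, B-burst inter-onset interval, jitter range, and delay value are illustrative assumptions, not parameters taken from the study.

```python
"""Sketch of an A/B noise-burst streaming sequence (assumed parameters)."""
import numpy as np

FS = 44100  # sampling rate (Hz), assumed

def noise_burst(dur_s, low_hz=None, high_hz=None):
    """White-noise burst; crude frequency-domain bandpass when limits given."""
    n = int(dur_s * FS)
    x = np.random.randn(n)
    if low_hz is not None and high_hz is not None:
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(n, 1 / FS)
        spec[(freqs < low_hz) | (freqs > high_hz)] = 0.0
        x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

def ab_sequence(n_b=5, b_interval_s=0.6, burst_s=0.1,
                delay_last_s=0.0, rng=None):
    """B bursts at a fixed inter-onset interval; one A burst jittered
    between adjacent B bursts; the last B burst optionally delayed."""
    rng = rng or np.random.default_rng()
    total_s = n_b * b_interval_s + delay_last_s + burst_s
    seq = np.zeros(int(total_s * FS))

    def add(burst, onset_s):
        i = int(onset_s * FS)
        seq[i:i + len(burst)] += burst

    for k in range(n_b):
        onset = k * b_interval_s + (delay_last_s if k == n_b - 1 else 0.0)
        add(noise_burst(burst_s, 250, 1000), onset)        # B burst, low band
        if k < n_b - 1:
            jitter = rng.uniform(0.15, 0.35)               # A burst jittered
            add(noise_burst(burst_s, 2000, 8000), k * b_interval_s + jitter)
    return seq

trial_no_delay = ab_sequence(delay_last_s=0.0)
trial_delay = ab_sequence(delay_last_s=0.05)  # delayed last B burst
```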
Journal of the Acoustical Society of America | 2010
Yingjiu Nie; Peggy B. Nelson
We investigated the contribution of amplitude modulation (AM) rate and spectral separation to stream segregation of vocoder bandpass noises. Stimulus sequences were repeated pairs of A and B bursts, where bursts were white noise or vocoder bandpass noise carrying sinusoidal AM (100% modulation depth). Bursts differed in the center frequency of the noise, the AM rate, or both. Eight vocoder bands were used. The lowest four bands (1-2-3-4) were combined into one bandpass noise (B bursts), and one of three higher-band combinations (3-4-5, 4-5-6, or 6-7-8) was combined to constitute the A bursts. Results show that stream segregation ability increases with greater spectral separation. Larger AM rate separations were associated with stronger segregation abilities, but not when A and B bursts were both white noise. Significant inter-subject differences were noted. Results suggest that, while both spectral and AM rate separations could be cues for auditory stream segregation, stream segregation based on AM rate is more successful when combined with spectral separation. Correlations between segregation ability and understanding of vocoded speech will be discussed.
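A minimal sketch of generating one sinusoidally amplitude-modulated bandpass-noise burst of the kind described (100% modulation depth) is shown below; the band edges, burst duration, and AM rates are illustrative assumptions.

```python
"""Sketch of a 100%-depth sinusoidally AM bandpass-noise burst."""
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100  # sampling rate (Hz), assumed

def am_bandpass_burst(dur_s, low_hz, high_hz, am_rate_hz):
    """Bandpass-filtered white noise with full-depth sinusoidal AM."""
    n = int(dur_s * FS)
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    carrier = sosfiltfilt(sos, np.random.randn(n))
    t = np.arange(n) / FS
    env = 0.5 * (1 + np.sin(2 * np.pi * am_rate_hz * t))  # modulation depth m = 1
    burst = carrier * env
    return burst / np.max(np.abs(burst))

# Assumed example bands: a low-band B burst and a higher-band A burst.
b_burst = am_bandpass_burst(0.3, 100, 1000, am_rate_hz=40)
a_burst = am_bandpass_burst(0.3, 1000, 4000, am_rate_hz=80)
```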
Journal of the Acoustical Society of America | 2009
Peggy B. Nelson; Elizabeth S. A. Crump; Yingjiu Nie; Michelle Hawkinson‐Lewis
Previous results have shown that listeners with sensorineural hearing loss (SNHL) obtain about half of the masking release of their normal-hearing (NH) counterparts. When speech is amplified sufficiently, listeners with SNHL may score like NH listeners in quiet and in steady noise, yet may obtain only half of the expected release from gated noise. We hypothesize that some of that deficiency may occur because of the impaired listeners' low speech sensation levels, which result in decreased usefulness of the speech signal in the noise dips. In the current study, NH listeners were tested for their recognition of IEEE sentences in quiet, in steady noise, and in gated noise with the speech presented at varying sensation levels. At low levels (10–15 dB SL), NH listeners scored nearly 100% correct in quiet. In steady noise (at −10 dB signal-to-noise ratio), scores for low-level stimuli were also similar to those obtained at higher SLs. However, at low SLs in gated noise, NH listeners demonstrated less masking re...
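A minimal sketch of mixing a target with steady versus gated (interrupted) noise at a chosen SNR follows. The 10-Hz square-wave gate, the white-noise stand-ins, and the RMS-based scaling are assumptions for illustration; the abstract's −10 dB SNR is used as the example value.

```python
"""Sketch of steady vs. gated noise mixing at a target SNR (assumed setup)."""
import numpy as np

FS = 22050  # sampling rate (Hz), assumed

def mix_at_snr(speech, noise, snr_db, gate_rate_hz=None):
    """Scale the noise to the requested SNR re: the speech RMS,
    optionally gating it on/off with a square wave."""
    noise = noise[:len(speech)].copy()
    if gate_rate_hz:
        t = np.arange(len(noise)) / FS
        noise *= (np.sin(2 * np.pi * gate_rate_hz * t) > 0).astype(float)
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    # SNR(dB) = 20*log10(rms_speech / rms_noise) => solve for the noise gain.
    noise *= rms(speech) / rms(noise) / (10 ** (snr_db / 20))
    return speech + noise

speech = np.random.randn(FS)   # stand-in for a sentence recording
noise = np.random.randn(2 * FS)
steady = mix_at_snr(speech, noise, snr_db=-10)
gated = mix_at_snr(speech, noise, snr_db=-10, gate_rate_hz=10)
```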
Trends in Hearing | 2018
Yingjiu Nie; John J. Galvin; Michael Morikawa; Victoria André; Harley J. Wheeler; Qian-Jie Fu
This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and for speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.
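As a rough illustration of melodic-contour stimuli for an MCI-style task, the sketch below synthesizes sine-tone contours. The sine-tone timbre, contour shapes, root note, and semitone spacing are all assumptions standing in for the piano and sung-speech stimuli actually used.

```python
"""Sketch of melodic-contour stimuli (sine tones as assumed stand-ins)."""
import numpy as np

FS = 22050  # sampling rate (Hz), assumed

CONTOURS = {                 # semitone steps relative to the root, assumed shapes
    "rising":  [0, 2, 4, 6, 8],
    "falling": [8, 6, 4, 2, 0],
    "flat":    [4, 4, 4, 4, 4],
}

def tone(freq_hz, dur_s=0.3):
    t = np.arange(int(dur_s * FS)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def contour_stimulus(name, root_hz=220.0):
    """Concatenate one tone per contour note, using equal-tempered steps."""
    return np.concatenate(
        [tone(root_hz * 2 ** (s / 12)) for s in CONTOURS[name]])

stim = contour_stimulus("rising")
```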
Cochlear Implants International | 2018
Douglas P. Sladen; Yingjiu Nie; Katelyn Berg
Objectives: The purpose of this study was to investigate speech recognition in noise and listening effort among a group of adults with cochlear implants (CIs). Two main research questions were addressed. First, what are the effects of omni versus directional microphone configuration on speech recognition and listening effort for noisy conditions? Second, what is the effect of unilateral versus bimodal or bilateral CI listening on speech recognition and listening effort in noisy conditions? Design: Sixteen adults (mean age 58 years) with CIs participated. Listening effort was measured using a dual-task paradigm and also using a self-reported rating of difficulty scale. In the dual-task measure, participants were asked to repeat monosyllabic words while at the same time pressing a button in response to a visual stimulus. Participants were tested in two baseline conditions (speech perception alone and visual task alone) and in the following experimental conditions: (1) quiet with an omnidirectional microphone, (2) noise with an omnidirectional microphone, (3) noise with a directional microphone, and (4) noise with a directional microphone and with a CI or hearing aid on the second side. When present, the noise was fixed at a +5 dB signal-to-noise ratio. After each listening condition, the participants rated the degree of listening difficulty. Results: Changing the microphone from omni to directional mode significantly enhanced speech recognition in noise performance. There were no significant changes in speech recognition between the unilateral and bimodal/bilateral CI listening conditions. Listening effort, as measured by reaction time, increased significantly between the baseline and omnidirectional quiet listening condition, though it did not change significantly across the remaining listening conditions. Self-perceived listening effort revealed a greater effort for the noisy conditions, and reduced effort with the move from an omni to a directional microphone. Conclusions: Directional microphones significantly improved speech-in-noise recognition over omnidirectional microphones and allowed for decreased self-perceived listening effort. The dual task used in this study failed to show any differences in listening effort across the experimental conditions and may not be sensitive enough to detect changes in listening effort.
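The timing logic of a dual-task reaction-time measure can be sketched as follows; a console prompt stands in for the visual stimulus, and the delay range and trial count are illustrative assumptions.

```python
"""Sketch of dual-task reaction-time logging (console stand-in for the
visual stimulus; delays and trial count are assumed)."""
import random
import time

def reaction_time_trial(min_delay_s=1.0, max_delay_s=3.0):
    """Wait a random interval, show the prompt, time the key press."""
    time.sleep(random.uniform(min_delay_s, max_delay_s))
    t0 = time.perf_counter()
    input(">>> press Enter now! ")
    return time.perf_counter() - t0

if __name__ == "__main__":
    # Primary task (word repetition) would run concurrently in a real setup.
    rts = [reaction_time_trial() for _ in range(5)]
    print(f"median RT: {sorted(rts)[len(rts) // 2] * 1000:.0f} ms")
```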
Journal of the Acoustical Society of America | 2015
Yingjiu Nie; Harley J. Wheeler; Alexandra B. Short; Caleb W. Harrington
This study aimed to investigate, among three groups of listeners (normal-hearing, hearing-impaired, and cochlear implant users), the relative weight of temporal envelopes for speech intelligibility in each of eight frequency regions ranging between 72 and 9200 Hz. Listeners were tested in quiet and in the presence of steady or amplitude-modulated noise at two rates (4 and 16 Hz). An eight-band vocoder was implemented when testing the acoustic-hearing groups. Speech intelligibility of a given region/band was assessed by comparing scores in two conditions differing only by the presence or absence of the band of interest; the proportion of the derived score to the sum across the eight regions/bands was computed as the relative weight. Preliminary data showed the following: (1) in quiet, a similar frequency-weighting pattern for all three groups, with higher weight in the mid/mid-high frequency range; (2) in noise, for the normal-hearing group, different weighting patterns between steady noise and amplitude...
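The weighting computation described above can be illustrated directly. The scores below are placeholder values; treating the derived score as the difference between the with-band and without-band conditions follows the abstract's description, and clipping negative differences to zero is an added assumption.

```python
"""Sketch of relative band weights from paired with/without-band scores."""
import numpy as np

# Placeholder percent-correct scores: all 8 bands present, and with each
# band of interest removed in turn (illustrative values only).
score_all_bands = 80.0
score_without_band = np.array([70, 65, 55, 50, 52, 60, 68, 74], float)

# Derived score per band = drop in score when that band is absent.
derived = np.clip(score_all_bands - score_without_band, 0, None)
relative_weight = derived / derived.sum()   # proportion of the sum across bands

for i, w in enumerate(relative_weight, start=1):
    print(f"band {i}: weight = {w:.2f}")
```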
Journal of the Acoustical Society of America | 2012
Yingjiu Nie; Peggy B. Nelson; Evelyn Davies-Venn; Adam Svec
Widened auditory filters in hearing-impaired (HI) listeners may force them to rely more on temporal envelope (TE) cues when listening to speech. We propose that reduced masking release in HI listeners may be partially due to confusion of the TEs of the masker and target. The current study investigates HI listeners' comprehension of low- or high-pass vocoded spondees in the presence of fluctuating and stationary background noise. The spectral relationship of the target and masker was systematically varied from greater to no spectral overlap; the TEs of the masker and target were varied in similarity along two dimensions: amplitude-modulation rate and shape. Preliminary data have shown that TE confusion in some HI listeners results in speech understanding scores that are poorer in the presence of fluctuating noise (at a rate of 4 Hz) than when stationary noise is present. On the other hand, another group of HI listeners has demonstrated masking release. The effect of TE confusion of speech-en...
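A minimal sketch of masker temporal envelopes differing in AM rate and shape is given below. The sinusoidal and square envelope shapes are illustrative assumptions; the 4-Hz rate is taken from the abstract.

```python
"""Sketch of noise maskers whose envelopes differ in AM rate and shape."""
import numpy as np

FS = 22050  # sampling rate (Hz), assumed

def am_noise(dur_s, rate_hz, shape="sine"):
    """Noise carrier with a sinusoidal or square temporal envelope."""
    t = np.arange(int(dur_s * FS)) / FS
    phase = np.sin(2 * np.pi * rate_hz * t)
    env = 0.5 * (1 + phase) if shape == "sine" else (phase > 0).astype(float)
    return np.random.randn(len(t)) * env

masker_sine = am_noise(1.0, rate_hz=4, shape="sine")      # 4-Hz fluctuating masker
masker_square = am_noise(1.0, rate_hz=4, shape="square")  # same rate, other shape
```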
Journal of the Acoustical Society of America | 2012
Peggy B. Nelson; Yingjiu Nie; Adam Svec; Tess K. Koerner; Bhagyashree Katare; Melanie J. Gregan
Listeners with sensorineural hearing loss (SNHL) report significant difficulties when listening to speech in the presence of background noise and are highly variable in their tolerance to such noise. In our studies of speech perception, audibility predicts understanding of speech in quiet for most young listeners with SNHL. In background noise, however, the speech recognition performance of some young listeners with SNHL deviates significantly from audibility predictions. We hypothesize that vulnerability to background noise may be related to listeners’ broader auditory filters, to a loss of discrimination ability for rapid spectral changes, or to a disruption of the speech temporal envelopes by the addition of noise. Measures of spectral resolution, spectral change detection, and envelope confusion will be presented for listeners with SNHL. Relationships between those estimates and speech recognition in noise will be described. Results may suggest a range of custom strategies for improving tolerance for ...
Journal of the Acoustical Society of America | 2011
Su‐Hyun Jin; Yingjiu Nie; Peggy B. Nelson
The purpose of the current study was to examine the effects of temporal and spectral interference on sentence recognition for cochlear implant (CI) listeners. Nie and Nelson (2010) investigated vocoded speech perception in amplitude-modulated band-pass noises in order to assess young normal-hearing (NH) listeners' speech understanding through cochlear implant simulations. They reported that while the spectra of the noise bands affected vocoder speech perception, there was no significant effect of the noise AM rate on performance, indicating spectral but not temporal interference. As a follow-up, young CI listeners with various devices and processing strategies participated in the current study. IEEE sentences and white noise were divided into 16 bands, and band-pass noise maskers were formed from varying combinations of 4–6 adjacent bands. The spectra of the maskers relative to the spectra of speech were set to be one of the following: completely overlapping, partially overlapping, or completely separate ...
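The band-splitting scheme can be sketched as follows. The 16-band division and adjacent-band maskers follow the abstract, while the logarithmic band spacing and corner frequencies are assumptions.

```python
"""Sketch of a 16-band split and an adjacent-band noise masker."""
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 22050  # sampling rate (Hz), assumed
EDGES = np.geomspace(80, 8000, num=17)  # 16 log-spaced band edges (assumed)

def band(x, lo_hz, hi_hz):
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def masker_from_bands(noise, first, count):
    """Sum `count` adjacent bands starting at 0-based band index `first`."""
    return sum(band(noise, EDGES[k], EDGES[k + 1])
               for k in range(first, first + count))

noise = np.random.randn(2 * FS)
overlap_masker = masker_from_bands(noise, first=4, count=5)    # mid-frequency bands
separate_masker = masker_from_bands(noise, first=12, count=4)  # high bands only
```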
Journal of the Acoustical Society of America | 2011
Evelyn Davies-Venn; Peggy B. Nelson; Yingjiu Nie; Adam Svec; Bhagyashree Katare
Masking release is still the subject of numerous active investigations, and differences in findings are sometimes noted among studies, especially related to potential gate-frequency effects. The main goal of the present study was to investigate the effect of speech material on measures of masking release for listeners with normal hearing and hearing loss. The test stimuli were IEEE sentences and modified spondee words; to eliminate confounds of audibility and duration, the stimuli were equated for duration and audibility. Listeners were tested across a wide range of audibility and gate frequencies. Performance and masking release results will be presented for these speech stimuli. [Work supported by NIDCD 008306.]