Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nandini Iyer is active.

Publication


Featured research published by Nandini Iyer.


Journal of the Acoustical Society of America | 2012

Better-ear glimpsing efficiency with symmetrically-placed interfering talkers

Douglas S. Brungart; Nandini Iyer

In listening tasks where a target speech signal is spatially separated from a masking voice, listeners can often gain a substantial advantage in performance by attending to the ear with the better signal-to-noise ratio (SNR). However, this better-ear strategy becomes much more complicated when a target talker located in front of the listener is masked by interfering talkers positioned at symmetric locations to the left and right of the target. When this happens, there are no long-term SNR advantages at either ear and the only binaural SNR advantages available are the result of complicated better-ear glimpses that vary as a function of frequency and rapidly switch back and forth between the two ears according to the natural fluctuations in the relative levels of the two masking voices. In this study, a signal processing technique was used to take the better-ear glimpses that would ordinarily be randomly distributed across the two ears in a binaural speech signal and move them all into the same ear. This resulted in a monaural signal that contained all the information available to an ideal listener using an optimal binaural glimpsing strategy. Speech intelligibility was measured with these optimized monaural stimuli and compared to performance with unprocessed binaural speech stimuli. Performance was similar in these two conditions, suggesting that listeners with normal hearing are able to efficiently extract information from better-ear glimpses that fluctuate rapidly across frequency and across the two ears.
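As a rough illustration of the manipulation described above, the sketch below selects, in every time-frequency cell of a short-time Fourier transform, whichever ear has the higher local SNR and reassembles those cells into a single monaural signal. It is a minimal reconstruction under assumed parameters (window length, STFT analysis), not the authors' actual processing chain.

```python
import numpy as np
from scipy.signal import stft, istft

def better_ear_mixture(target_lr, masker_lr, fs, nperseg=512):
    """Illustrative sketch (not the authors' code): move the better-ear
    glimpses of a binaural mixture into a single monaural signal.

    target_lr, masker_lr: arrays of shape (2, n_samples), rows = left/right ear.
    """
    # STFTs of target and masker at each ear -> shape (2, freq, time)
    _, _, T = stft(target_lr, fs, nperseg=nperseg)
    _, _, M = stft(masker_lr, fs, nperseg=nperseg)

    # Local SNR (dB) per ear, frequency, and time frame
    eps = 1e-12
    snr = 10 * np.log10((np.abs(T) ** 2 + eps) / (np.abs(M) ** 2 + eps))

    # Pick the ear with the better SNR in every time-frequency cell
    better = np.argmax(snr, axis=0)              # 0 = left ear, 1 = right ear
    mix = T + M
    mono = np.take_along_axis(mix, better[None, ...], axis=0)[0]

    _, y = istft(mono, fs, nperseg=nperseg)
    return y
```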


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Evaluation of bone-conduction headsets for use in multitalker communication environments

Bruce N. Walker; Raymond M. Stanley; Nandini Iyer; Brian D. Simpson; Douglas S. Brungart

Standard audio headphones are useful in many applications, but they cover the ears of the listener and thus may impair the perception of ambient sounds. Bone-conduction headphones offer a possible alternative, but traditionally their use has been limited to monaural applications due to the high propagation speed of sound in the human skull. Here we show that stereo bone-conduction headsets can be used to provide a limited amount of interaural isolation in a dichotic speech perception task. The results suggest that reliable spatial separation is possible with bone-conduction headsets, but that they probably cannot be used to lateralize signals to extreme left or right apparent locations.


Journal of the Acoustical Society of America | 2006

Monaural speech segregation using synthetic speech signals

Douglas S. Brungart; Nandini Iyer; Brian D. Simpson

When listening to natural speech, listeners are fairly adept at using cues such as pitch, vocal tract length, prosody, and level differences to extract a target speech signal from an interfering speech masker. However, little is known about the cues that listeners might use to segregate synthetic speech signals that retain the intelligibility characteristics of speech but lack many of the features that listeners normally use to segregate competing talkers. In this experiment, intelligibility was measured in a diotic listening task that required the segregation of two simultaneously presented synthetic sentences. Three types of synthetic signals were created: (1) sine-wave speech (SWS); (2) modulated noise-band speech (MNB); and (3) modulated sine-band speech (MSB). The listeners performed worse for all three types of synthetic signals than they did with natural speech signals, particularly at low signal-to-noise ratio (SNR) values. Of the three synthetic signals, the results indicate that SWS signals preserve more of the voice characteristics used for speech segregation than MNB and MSB signals. These findings have implications for cochlear implant users, who rely on signals very similar to MNB speech and thus are likely to have difficulty understanding speech in cocktail-party listening environments.
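Of the three synthetic signals, modulated sine-band (MSB) speech is the simplest to illustrate: the envelope of the speech in each analysis band modulates a pure tone at the band centre. The sketch below is a minimal version under assumed band edges and filter orders, not the study's actual synthesis parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def modulated_sine_band_speech(x, fs, edges=(100, 400, 1000, 2500, 6000)):
    """Rough sketch of modulated sine-band (MSB) synthesis: the speech
    envelope in each band modulates a pure tone at the band centre.
    Band edges and filter order are illustrative assumptions."""
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos, x)))   # band envelope
        fc = np.sqrt(lo * hi)                        # geometric band centre
        out += env * np.sin(2 * np.pi * fc * t)      # tone carrier
    return out
```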


Journal of the Acoustical Society of America | 2007

Effects of periodic masker interruption on the intelligibility of interrupted speech

Nandini Iyer; Douglas S. Brungart; Brian D. Simpson

When listeners hear a target signal in the presence of competing sounds, they are quite good at extracting information at instances when the local signal-to-noise ratio of the target is most favorable. Previous research suggests that listeners can easily understand a periodically interrupted target when it is interleaved with noise. It is not clear if this ability extends to the case where an interrupted target is alternated with a speech masker rather than noise. This study examined speech intelligibility in the presence of noise or speech maskers, which were either continuous or interrupted at one of six rates between 4 and 128 Hz. Results indicated that with noise maskers, listeners performed significantly better with interrupted, rather than continuous maskers. With speech maskers, however, performance was better in continuous, rather than interrupted masker conditions. Presumably the listeners used continuity as a cue to distinguish the continuous masker from the interrupted target. Intelligibility in the interrupted masker condition was improved by introducing a pitch difference between the target and speech masker. These results highlight the role that target-masker differences in continuity and pitch play in the segregation of competing speech signals.
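The masker interruption itself amounts to periodic on/off gating at the chosen rate. A minimal sketch, assuming a 50% duty cycle and hard (unramped) gating:

```python
import numpy as np

def interrupt(signal, fs, rate_hz, duty=0.5):
    """Gate a signal on and off periodically, e.g. a masker interrupted
    at one of the 4-128 Hz rates used in the study. The 50% duty cycle
    and the absence of onset/offset ramps are simplifying assumptions."""
    t = np.arange(len(signal)) / fs
    gate = ((t * rate_hz) % 1.0) < duty      # square-wave on/off gate
    return signal * gate
```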


Journal of the Acoustical Society of America | 2013

Interactions between listening effort and masker type on the energetic and informational masking of speech stimuli

Douglas S. Brungart; Nandini Iyer; Eric R. Thompson; Brian D. Simpson; Sandra Gordon-Salant; Jaclyn Schurman; Chelsea Vogel; Kenneth W. Grant

In most cases, normal-hearing listeners perform better when a target speech signal is masked by a single irrelevant speech masker than they do with a noise masker at an equivalent signal-to-noise ratio (SNR). However, this relative advantage for segregating target speech from a speech masker versus a noise masker may not come without a cost: segregating speech from speech may require the allocation of additional cognitive resources that are not required to segregate speech from noise. The cognitive resources required to extract a target speech signal from different backgrounds can be assessed by varying the complexity of the listening task. Examples include: (1) contrasting the difference between the detection of a speech signal and the correct identification of its contents; (2) contrasting the difference between single-task diotic and dual-task dichotic listening tasks; and (3) contrasting the difference between standard listening tasks and one-back tasks where listeners must keep one response in memory during each stimulus presentation. By examining performance with different kinds of maskers in tasks with different levels of complexity, we can start to determine the impact that the informational and energetic components of masking have on the listening effort required to understand speech in complex environments.
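As one illustration of the third manipulation, a one-back task can be described as responding on each trial to the stimulus held over from the previous trial. The toy sketch below is purely illustrative of the paradigm, not the authors' task software:

```python
def one_back_responses(stimuli):
    """Illustrative one-back reporting scheme: on each trial the listener
    responds to the *previous* stimulus, so one item must always be held
    in memory while the current one is presented."""
    held = None
    responses = []
    for s in stimuli:
        responses.append(held)   # report the remembered item
        held = s                 # keep the current one in memory
    return responses
```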


Journal of the Acoustical Society of America | 2015

Release from informational masking in a monaural competing-speech task with vocoded copies of the maskers presented contralaterally

Joshua G. Bernstein; Nandini Iyer; Douglas S. Brungart

Single-sided deafness prevents access to the binaural cues that help normal-hearing listeners extract target speech from competing voices. Little is known about how listeners with one normal-hearing ear might benefit from access to severely degraded audio signals that preserve only envelope information in the second ear. This study investigated whether vocoded masker-envelope information presented to one ear could improve performance for normal-hearing listeners in a multi-talker speech-identification task presented to the other ear. Target speech and speech or non-speech maskers were presented unprocessed to the left ear. The right ear received no signal, or either an unprocessed or eight-channel noise-vocoded copy of the maskers. Presenting the vocoded maskers contralaterally yielded significant masking release from same-gender speech maskers, albeit less than in the unprocessed case, but not from opposite-gender speech, stationary-noise, or modulated-noise maskers. Unmasking also occurred with as few as two vocoder channels and when an attenuated copy of the target signal was added to the maskers before vocoding. These data show that delivering masker-envelope information contralaterally generates masking release in situations where target-masker similarity impedes monaural speech-identification performance. By delivering speech-envelope information to a deaf ear, cochlear implants for single-sided deafness have the potential to produce a similar effect.
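Noise vocoding of the kind described here replaces each band of the masker with envelope-modulated noise, discarding temporal fine structure. The following is a minimal sketch with assumed logarithmic band spacing and filter orders, not the study's actual vocoder:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Sketch of an n-channel noise vocoder: only the within-band
    envelopes of the input survive. Logarithmic band spacing, the
    frequency range, and 4th-order filters are assumptions."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(1)
    y = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos, x)))       # band envelope only
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        y += env * carrier                               # noise carrier
    return y
```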


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Spatial Multisensory Cueing to Support Visual Target-Acquisition Performance

Julio C. Mateo; Brian D. Simpson; Robert H. Gilkey; Nandini Iyer; Douglas S. Brungart

The impact of spatial multisensory cues on target-acquisition performance was examined. Response times (RTs) obtained in the absence of spatial cues were compared to those obtained when tactile, auditory, or audiotactile cues indicated the target location. Visual scene complexity was manipulated by varying the number of visual distractors present. The results indicated that all these spatial cues effectively reduced RTs. The benefit of cueing was greater when more distractors were present and when targets were presented from more eccentric locations. Although the benefit was greatest for conditions containing auditory cues, tactile cues alone had a large benefit. No apparent advantage of audiotactile cues over auditory cues was observed, suggesting that the auditory cues provided sufficient information to support performance. Future research will explore whether audiotactile cues are more helpful when the auditory cues are degraded (e.g., when presented in noisy environments or in generic virtual auditory displays).


Journal of the Acoustical Society of America | 2012

Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

Joshua G. Bernstein; Van Summers; Nandini Iyer; Douglas S. Brungart

Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups.
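A set-size adjustment of this kind can be driven by a simple staircase at fixed SNR: enlarge the response set after a correct answer (making the task harder) and shrink it after an error. The rule below is a toy illustration, not the adaptive algorithm evaluated in the paper:

```python
def adapt_set_size(correct, set_size, sizes=(2, 4, 8, 16, 32)):
    """Toy 1-up/1-down rule for adjusting response set size at fixed SNR.
    The step rule and the available sizes are assumptions made for
    illustration only."""
    i = sizes.index(set_size)                # current position in the ladder
    i = min(i + 1, len(sizes) - 1) if correct else max(i - 1, 0)
    return sizes[i]
```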


Journal of the Acoustical Society of America | 2015

Enhancing listener strategies using a payoff matrix in speech-on-speech masking experiments

Eric R. Thompson; Nandini Iyer; Brian D. Simpson; Gregory H. Wakefield; David E. Kieras; Douglas S. Brungart

Speech recognition was measured as a function of the target-to-masker ratio (TMR) with syntactically similar speech maskers. In the first experiment, listeners were instructed to report keywords from the target sentence. Data averaged across listeners showed a plateau in performance below 0 dB TMR when masker and target sentences were from the same talker. In this experiment, some listeners tended to report the target words at all TMRs in accordance with the instructions, while others reported keywords from the louder of the sentences, contrary to the instructions. In the second experiment, stimuli were the same as in the first experiment, but listeners were also instructed to avoid reporting the masker keywords, and a payoff matrix penalizing masker keywords and rewarding target keywords was used. In this experiment, listeners reduced the number of reported masker keywords, and increased the number of reported target keywords overall, and the average data showed a local minimum at 0 dB TMR with same-talker maskers. The best overall performance with a same-talker masker was obtained with a level difference of 9 dB, where listeners achieved near perfect performance when the target was louder, and at least 80% correct performance when the target was the quieter of the two sentences.
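Per trial, the payoff matrix reduces to rewarding reported target keywords and penalizing reported masker keywords. A minimal sketch with illustrative payoff values (the study's actual values may differ):

```python
def trial_score(responses, target_words, masker_words, reward=1, penalty=-1):
    """Score one trial under a simple payoff matrix: +reward for each
    reported target keyword, +penalty for each reported masker keyword.
    The specific payoff values here are illustrative assumptions."""
    score = 0
    for word in responses:
        if word in target_words:
            score += reward
        elif word in masker_words:
            score += penalty
    return score
```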


58th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2014 | 2014

A Cognitive Architectural Account of Two-Channel Speech Processing

David E. Kieras; Gregory H. Wakefield; Eric R. Thompson; Nandini Iyer; Brian D. Simpson

An important application of cognitive architectures is to provide human performance models that capture psychological mechanisms in a form that can be “programmed” to predict task performance of human-machine system designs. While many aspects of human performance have been successfully modeled in this approach, accounting for multi-talker speech task performance is a novel problem. This paper presents a model for performance in a two-talker task that incorporates concepts from the psychoacoustic study of speech perception, in particular, masking effects and stream formation.

Collaboration


Dive into Nandini Iyer's collaborations.

Top Co-Authors

Brian D. Simpson

Air Force Research Laboratory

Douglas S. Brungart

Air Force Research Laboratory

Griffin D. Romigh

Air Force Research Laboratory

Matthew G. Wisniewski

State University of New York System
