
Publication


Featured research published by Elad Sagi.


Acta Oto-laryngologica | 2015

Bilateral cochlear implants with large asymmetries in electrode insertion depth: implications for the study of auditory plasticity

Mario A. Svirsky; Matthew B. Fitzgerald; Elad Sagi; E. Katelyn Glassman

Conclusion: The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses. Objectives: To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. Methods: Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. Results: Both listeners showed a remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. For the subject who tested it, the alternative frequency-to-electrode map resulted in substantial increases in speech perception in the short-insertion ear.
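To get a feel for the size of such insertion-depth asymmetries, here is a minimal sketch, in Python, using the Greenwood (1990) frequency-to-place function for the human cochlea; the 35 mm cochlear length and the Greenwood constants are standard published values, but the two example depths are hypothetical, not the subjects' actual insertions.

    # Estimate the characteristic frequency at an electrode contact using the
    # Greenwood (1990) frequency-to-place function for the human cochlea.
    # Constants A = 165.4, a = 2.1, k = 0.88 are Greenwood's published values;
    # the example insertion depths below are hypothetical.

    def greenwood_cf(depth_from_base_mm: float, cochlea_len_mm: float = 35.0) -> float:
        """Characteristic frequency (Hz) at a given insertion depth from the base."""
        x = (cochlea_len_mm - depth_from_base_mm) / cochlea_len_mm  # fraction from apex
        return 165.4 * (10 ** (2.1 * x) - 0.88)

    for depth in (30.0, 15.0):  # deep vs. shallow insertion (illustrative)
        print(f"{depth:4.1f} mm from base -> CF ~ {greenwood_cf(depth):6.0f} Hz")

The deep contact lands near a few hundred hertz while the shallow one sits in the kilohertz range, which is the kind of across-ear discrepancy these two subjects had to adapt to.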


Journal of the Acoustical Society of America | 2008

Information transfer analysis: A first look at estimation bias

Elad Sagi; Mario A. Svirsky

Information transfer analysis [G. A. Miller and P. E. Nicely, J. Acoust. Soc. Am. 27, 338-352 (1955)] is a tool used to measure the extent to which speech features are transmitted to a listener, e.g., duration or formant frequencies for vowels; voicing, place and manner of articulation for consonants. An information transfer of 100% occurs when no confusions arise between phonemes belonging to different feature categories, e.g., between voiced and voiceless consonants. Conversely, an information transfer of 0% occurs when performance is purely random. As asserted by Miller and Nicely, the maximum-likelihood estimate for information transfer is biased to overestimate its true value when the number of stimulus presentations is small. This small-sample bias is examined here for three cases: a model of random performance with pseudorandom data, a data set drawn from Miller and Nicely, and reported data from three studies of speech perception by hearing-impaired listeners. The amount of overestimation can be substantial, depending on the number of samples, the size of the confusion matrix analyzed, and the manner in which the data are partitioned within it.
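The bias is easy to reproduce. The sketch below, a minimal Python example assuming a hypothetical four-category feature and purely random responding (so the true information transfer is 0%), computes the maximum-likelihood estimate from confusion-matrix counts and shows how small samples inflate it.

    import numpy as np

    def relative_info_transfer(conf: np.ndarray) -> float:
        """Maximum-likelihood estimate of relative information transfer
        (transmitted information / stimulus entropy) from a confusion
        matrix of stimulus-by-response counts."""
        p = conf / conf.sum()
        px = p.sum(axis=1, keepdims=True)          # stimulus marginals
        py = p.sum(axis=0, keepdims=True)          # response marginals
        nz = p > 0
        mi = (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()  # mutual information, bits
        hx = -(px[px > 0] * np.log2(px[px > 0])).sum()       # stimulus entropy, bits
        return mi / hx

    rng = np.random.default_rng(0)
    k = 4                                          # four feature categories
    for n in (25, 100, 1000, 10000):
        stim = rng.integers(k, size=n)             # random stimuli
        resp = rng.integers(k, size=n)             # random, unrelated responses
        conf = np.zeros((k, k), dtype=int)
        np.add.at(conf, (stim, resp), 1)
        print(f"n = {n:5d}: estimated transfer = {100 * relative_info_transfer(conf):5.1f}%")

With 25 presentations the estimate typically lands well above 10% even though nothing is being transmitted; only with thousands of trials does it approach the true value of zero, mirroring the paper's point about small-sample overestimation.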


Ear and Hearing | 2013

Feasibility of Real-Time Selection of Frequency Tables in an Acoustic Simulation of a Cochlear Implant

Matthew B. Fitzgerald; Elad Sagi; Tasnim A. Morbiwala; Chin-Tuan Tan; Mario A. Svirsky

Objectives: Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal in which there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and, if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of this investigation was to test whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design: Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained one that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could identify cases in which a listener has not adapted to a frequency mismatch. After obtaining a self-selected table, the authors measured consonant-nucleus-consonant word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters to the noisebands of the noise-vocoder simulation, and a “right-information” table that is similar to the one used in most CI speech processors but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results: Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from, the frequency-matched table. The real-time selection process took on average 2 to 3 min per trial, and the between-trial variability was comparable with that previously observed with closely related procedures. Word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions: Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and for finding a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability of the real-time selection procedure was comparable with that of a genetic algorithm, and the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
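For readers unfamiliar with the method, a minimal noise-vocoder sketch follows; the channel count, band edges, and filter order are illustrative assumptions rather than the study's exact parameters. Shifting the carrier band edges relative to the analysis band edges is what introduces a controlled frequency mismatch, analogous to the difference between the “frequency-matched” and “right-information” tables.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, ana_edges, car_edges, seed=0):
        """Minimal noise vocoder: the envelope from each analysis band
        modulates a noise band at the corresponding carrier band. With
        ana_edges == car_edges this mimics a frequency-matched table;
        shifting car_edges upward simulates a basalward mismatch."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(len(x))
        out = np.zeros_like(x)
        for (alo, ahi), (clo, chi) in zip(zip(ana_edges[:-1], ana_edges[1:]),
                                          zip(car_edges[:-1], car_edges[1:])):
            ana = butter(4, [alo, ahi], btype="band", fs=fs, output="sos")
            car = butter(4, [clo, chi], btype="band", fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(ana, x)))   # channel envelope
            out += env * sosfiltfilt(car, noise)         # envelope-modulated noise
        return out / np.max(np.abs(out))

    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech token
    matched = np.geomspace(200, 7000, 9)     # 8 analysis bands
    shifted = np.geomspace(400, 7900, 9)     # carrier bands shifted basally
    y = noise_vocode(x, fs, matched, shifted)

In the study's terms, real-time self-selection corresponds to adjusting the analysis edges while the simulated electrode positions (the carrier bands) stay fixed.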


Journal of the American Academy of Audiology | 2012

Current and Planned Cochlear Implant Research at New York University Laboratory for Translational Auditory Research

Mario A. Svirsky; Matthew B. Fitzgerald; Arlene C. Neuman; Elad Sagi; Chin-Tuan Tan; Darlene R. Ketten; Brett A. Martin

The Laboratory of Translational Auditory Research (LTAR/NYUSM) is part of the Department of Otolaryngology at the New York University School of Medicine and has close ties to the New York University Cochlear Implant Center. LTAR investigators have expertise in multiple related disciplines, including speech and hearing science, audiology, engineering, and physiology. The lines of research in the laboratory deal mostly with speech perception by hearing-impaired listeners, particularly those who use cochlear implants (CIs) or hearing aids (HAs). Although the laboratory's research interests are diverse, common threads permeate and tie together all of its work. In particular, a strong interest in translational research underlies even the most basic studies carried out in the laboratory. Another important element is the development of engineering and computational tools, which range from mathematical models of speech perception, to software and hardware that bypass clinical speech processors and stimulate cochlear implants directly, to novel ways of analyzing clinical outcomes data. If the appropriate tool to conduct an important experiment does not exist, we may work to develop it, either in house or in collaboration with academic or industrial partners. Another notable characteristic of the laboratory is its interdisciplinary nature: for example, an audiologist and an engineer might work closely to develop an approach that would not have been feasible had each worked on the project alone. Similarly, investigators with expertise in hearing aids and cochlear implants might join forces to study how human listeners integrate information provided by a CI and an HA. The following pages provide a flavor of the diversity and the commonalities of our research interests.


Journal of the Acoustical Society of America | 2016

The neural encoding of formant frequencies contributing to vowel identification in normal-hearing listeners

Jong Ho Won; Kelly L. Tremblay; C. Clinard; Richard Wright; Elad Sagi; Mario A. Svirsky

Even though speech signals trigger coding in the cochlea to convey speech information to the central auditory structures, little is known about the neural mechanisms involved in such processes. The purpose of this study was to understand the encoding of formant cues and how it relates to vowel recognition in listeners. Neural representations of formants may differ across listeners; however, it was hypothesized that neural patterns could still predict vowel recognition. To test the hypothesis, the frequency-following response (FFR) and vowel recognition were obtained from 38 normal-hearing listeners using four different vowels, allowing direct comparisons between behavioral and neural data in the same individuals. FFR was employed because it provides an objective and physiological measure of neural activity that can reflect formant encoding. A mathematical model was used to describe vowel confusion patterns based on the neural responses to vowel formant cues. The major findings were (1) there were large variations in the accuracy of vowel formant encoding across listeners as indexed by the FFR, (2) these variations were systematically related to vowel recognition performance, and (3) the mathematical model of vowel identification was successful in predicting good vs poor vowel identification performers based exclusively on physiological data.
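As a rough illustration of how formant encoding can be indexed in an FFR, the sketch below measures spectral magnitude at the stimulus formant frequencies of a toy averaged response; the signal parameters and the single-peak measure are assumptions for illustration, not the authors' analysis pipeline.

    import numpy as np

    # Toy "averaged FFR": two formant-related components plus noise.
    fs = 8000
    t = np.arange(int(0.25 * fs)) / fs
    f1, f2 = 300.0, 870.0                          # illustrative F1/F2 (< 1500 Hz)
    rng = np.random.default_rng(2)
    ffr = (np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
           + 1.0 * rng.standard_normal(len(t)))

    # Spectral magnitude at each formant frequency as a simple encoding index.
    spec = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), 1 / fs)
    for f in (f1, f2):
        k = np.argmin(np.abs(freqs - f))
        print(f"FFR magnitude near {f:4.0f} Hz: {spec[k]:.1f}")

Across listeners, weaker peaks at the formant frequencies would correspond to the less accurate formant encoding that the study found to predict poorer vowel recognition.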


International Conference on Acoustics, Speech, and Signal Processing | 2013

Validation of acoustic models of auditory neural prostheses

Mario A. Svirsky; Nai Ding; Elad Sagi; Chin-Tuan Tan; Matthew B. Fitzgerald; E. Katelyn Glassman; Keena Seward; Arlene C. Neuman

Acoustic models have been used in numerous studies over the past thirty years to simulate the percepts elicited by auditory neural prostheses. In these acoustic models, incoming signals are processed the same way as in a cochlear implant speech processor. The percepts that would be caused by electrical stimulation in a real cochlear implant are simulated by modulating the amplitude of either noise bands or sinusoids. Despite their practical usefulness, these acoustic models have never been convincingly validated. This study presents a tool to conduct such validation using subjects who have a cochlear implant in one ear and near-perfect hearing in the other ear, allowing for the first time a direct perceptual comparison of the output of acoustic models to the stimulation provided by a cochlear implant.


Otology & Neurotology | 2017

A Smartphone Application for Customized Frequency Table Selection in Cochlear Implants

Daniel Jethanamest; Mahan Azadpour; Annette M. Zeman; Elad Sagi; Mario A. Svirsky

HYPOTHESIS A novel smartphone-based software application can facilitate self-selection of frequency allocation tables (FATs) in postlingually deaf cochlear implant (CI) users. BACKGROUND CIs use FATs to represent the tonotopic organization of a normal cochlea. Current CI fitting methods typically use a standard FAT for all patients, regardless of individual differences in cochlear size and electrode location. In postlingually deaf patients, different amounts of mismatch can result between the frequency-place function they experienced when they had normal hearing and the frequency-place function that results from the standard FAT. For some CI users, an alternative FAT may enhance sound quality or speech perception. Currently, no widely available tools exist to aid real-time selection of different FATs. This study aimed to develop a new smartphone tool for this purpose and to evaluate speech perception and sound quality measures in a pilot study of CI subjects using the application. METHODS A smartphone application for a widely available mobile platform (iOS) was developed to serve as a preprocessor of auditory input to a clinical CI speech processor and to enable interactive real-time selection of FATs. The application's output was validated by measuring electrodograms for various inputs. A pilot study was conducted in six CI subjects. Speech perception was evaluated using word recognition tests. RESULTS All subjects successfully used the portable application with their clinical speech processors to experience different FATs while listening to running speech. All were able to select one table that they judged provided the best sound quality, and all chose a FAT different from the standard FAT in their everyday clinical processor. Using the smartphone application, the mean consonant-nucleus-consonant score was 28.5% (SD 16.8) with the default FAT and 29.5% (SD 16.4) with a self-selected FAT. CONCLUSION A portable smartphone application enables CI users to self-select frequency allocation tables in real time. Even though the self-selected FATs that were deemed to have better sound quality were only tested acutely (i.e., without long-term experience with them), speech perception scores were not inferior to those obtained with the clinical FATs. This software application may be a valuable tool for improving future methods of CI fitting.
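For concreteness, the sketch below builds two frequency allocation tables as log-spaced band edges over a 22-electrode array; the edge frequencies and electrode count are illustrative assumptions, not any manufacturer's defaults. Real-time self-selection amounts to switching among tables like these while listening.

    import numpy as np

    def make_fat(f_low, f_high, n_electrodes=22):
        """A toy frequency allocation table: log-spaced band edges mapping
        analysis frequencies to electrodes (illustrative values only)."""
        edges = np.geomspace(f_low, f_high, n_electrodes + 1)
        return [(e + 1, lo, hi) for e, (lo, hi)
                in enumerate(zip(edges[:-1], edges[1:]))]

    standard = make_fat(188, 7938)      # a default-looking table
    alternate = make_fat(300, 7938)     # a candidate self-selected table

    for (e, lo, hi), (_, lo2, hi2) in zip(standard[:3], alternate[:3]):
        print(f"electrode {e:2d}: standard {lo:6.0f}-{hi:6.0f} Hz | "
              f"alternate {lo2:6.0f}-{hi2:6.0f} Hz")

Raising the low edge compresses the same electrode array onto a narrower input range, which changes the effective frequency-place mapping without touching the clinical processor itself.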


Journal of the Acoustical Society of America | 2017

Neural encoding of vowel formant frequency in normal-hearing listeners

Mario A. Svirsky; Jong-Ho Won; C. Clinard; Richard Wright; Elad Sagi; Kelly L. Tremblay

Physiological correlates of speech acoustics are particularly important to study in humans because it is uncertain whether animals process speech the same way humans do. Studying the physiology of speech processing in humans, however, typically requires the use of noninvasive physiological measures. This is what we attempted in a recent study (Won, Tremblay, Clinard, Wright, Sagi, and Svirsky, JASA 2016) which examined the hypothesis that neural representations of formant frequencies may help predict vowel recognition. To test the hypothesis, the frequency-following response (FFR) and vowel recognition were obtained from 38 normal-hearing listeners using four different vowels. This allowed direct comparisons between behavioral and neural data in the same individuals. FFR was used because it reflects temporal encoding of formant frequencies below about 1500 Hz. Four synthetic vowels with formant frequencies below 1500 Hz were used. Duration was 70 ms for all vowels to eliminate temporal cues and to make id...


Journal of the Acoustical Society of America | 2017

Computational models of speech perception by cochlear implant users

Mario A. Svirsky; Elad Sagi

Cochlear implant (CI) users have access to fewer acoustic cues than normal hearing listeners, resulting in less than perfect identification of phonemes (vowels and consonants), even in quiet. This makes it possible to develop models of phoneme identification based on CI users’ ability to discriminate along a small set of linguistically-relevant continua. Vowel and consonant confusions made by CI users provide a very rich platform to test such models. The preliminary implementation of these models used a single perceptual dimension and was closely related to the model of intensity resolution developed jointly by Nat Durlach and Lou Braida. Extensions of this model to multiple dimensions, incorporating aspects of Lou’s novel work on “crossmodal integration,” have successfully explained patterns of vowel and consonant confusions; perception of “conflicting-cue” vowels; changes in vowel identification as a function of different intensity mapping curves and frequency-to-electrode maps; adaptation (or lack ther...


Journal of the Acoustical Society of America | 2008

Integration of acoustic cues for consonant identification by cochlear implant users

Mario A. Svirsky; Elad Sagi

Users of cochlear implants (CIs) obtain substantial benefit from their devices, but their speech perception is (on average) less than perfect, and there are significant individual differences among patients. In particular, their consonant identification levels are lower than those of normal-hearing listeners, or even most users of hearing aids. We have developed a simple quantitative model (the multidimensional phoneme identification, or MPI, model) to predict consonant confusion matrices for individual cochlear implant users based on their discrimination of three consonantal acoustic cues: place of stimulation in the cochlea, silent gap duration, and percentage of energy above 800 Hz. Despite using only three degrees of freedom (i.e., one JND per cue), the model can explain most of the consonant pairs that are confused (or not confused) by individual CI users. However, when a listener's measured JNDs are used as inputs to the model, the predictions that result tend to have a higher percentage of correct respons...
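A minimal Monte Carlo sketch of an MPI-style prediction appears below; the cue values and JNDs are invented for illustration, and the published model's parameterization and decision rule may differ. Each stimulus is perturbed by cue-specific Gaussian noise (SD equal to the listener's JND) and identified as the nearest category in JND-scaled cue space, yielding a predicted confusion matrix.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical locations of four consonants along the three cues used by
    # the MPI model: place of stimulation (arbitrary electrode units), silent
    # gap duration (ms), and percent energy above 800 Hz.
    cues = np.array([[2.0, 10.0, 30.0],
                     [8.0, 10.0, 30.0],
                     [8.0, 60.0, 30.0],
                     [2.0, 60.0, 80.0]])
    jnd = np.array([3.0, 25.0, 20.0])      # one hypothetical JND per cue

    n_trials = 10000
    conf = np.zeros((len(cues), len(cues)), dtype=int)
    for i, stim in enumerate(cues):
        obs = stim + rng.standard_normal((n_trials, 3)) * jnd   # noisy percepts
        dist = np.linalg.norm((obs[:, None, :] - cues[None, :, :]) / jnd, axis=2)
        resp = dist.argmin(axis=1)          # nearest category in JND-scaled space
        np.add.at(conf, (np.full(n_trials, i), resp), 1)

    print(conf / n_trials)                  # predicted confusion matrix (rows = stimuli)

Larger JNDs blur the percepts and push probability mass off the diagonal, which is how measured discrimination limits translate into predicted consonant confusions.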

Collaboration


Dive into Elad Sagi's collaborations.

Top co-author: C. Clinard (James Madison University).