

Publication


Featured research published by Stanley Sheft.


Journal of the Acoustical Society of America | 1996

A time domain description for the pitch strength of iterated rippled noise

William A. Yost; Roy D. Patterson; Stanley Sheft

Two versions of a cascaded add, attenuate, and delay circuit were used to generate iterated rippled noise (IRN) stimuli. IRN stimuli produce a repetition pitch whose strength relative to the noise can be varied by changing the type of circuit, the attenuation, or the number of iterations in the circuit. Listeners were asked to discriminate between various pairs of IRN stimuli which differed in the type of network used to generate the sounds or the number of iterations (n = 1, 2, 3, 4, 7, and 9). Performance was determined for IRN stimuli generated with delays of 2, 4, and 8 ms and for four bandpass filter conditions (0-2000, 250-2000, 500-2000, and 750-2000 Hz). Some IRN stimuli were extremely difficult to discriminate despite relatively large spectral differences, while other IRN stimuli produced readily discriminable changes in perception, despite small spectral differences. These contrasting results are inconsistent with simple spectral explanations for the perception of IRN stimuli. An explanation based on the first peak of the autocorrelation function of IRN stimuli is consistent with the results. Simulations of the processing performed by the peripheral auditory system (i.e., interval histograms and correlograms) produce results which are consistent with the involvement of these temporal processes in the perception of IRN stimuli.
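The delay-and-add generation and the autocorrelation account described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's stimulus code; the sample rate and the particular gain and iteration count are assumptions (the 4-ms delay is one of the delays the study used).

```python
import numpy as np

def iterated_rippled_noise(noise, delay_samples, gain=1.0, iterations=4):
    """Cascade of delay-and-add stages: each stage adds a delayed,
    scaled copy of its own output back to itself."""
    y = noise.copy()
    for _ in range(iterations):
        delayed = np.zeros_like(y)
        delayed[delay_samples:] = y[:-delay_samples]
        y = y + gain * delayed
    return y

fs = 20000                       # sample rate, Hz (assumed)
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)  # 1 s of Gaussian noise
d = int(0.004 * fs)              # 4-ms delay -> ~250-Hz repetition pitch
irn = iterated_rippled_noise(noise, d, gain=1.0, iterations=4)

# The first peak of the normalized autocorrelation, at the delay lag,
# grows with the number of iterations, tracking pitch strength.
ac = np.correlate(irn, irn, mode='full')[len(irn) - 1:]
print(ac[d] / ac[0])
```

Rerunning with `iterations=1` gives a noticeably smaller normalized peak, which parallels the weaker pitch reported for fewer iterations.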


Journal of the Acoustical Society of America | 1989

Across‐critical‐band processing of amplitude‐modulated tones

William A. Yost; Stanley Sheft

Two experiments using two-tone sinusoidally amplitude-modulated stimuli were conducted to assess cross-channel effects in processing low-frequency amplitude modulation. In experiment I, listeners were asked to discriminate between two sets of two-tone amplitude-modulated complexes. In one set, the modulation phase of the lower frequency carrier tone was different from that of the upper frequency carrier tone. In the other stimulus set, both amplitude-modulated carriers had the same modulator phase. The amount of phase shift required to discriminate between the two stimulus sets was determined as a function of the separation between the two carriers, modulation depth, and modulation frequency. Listeners could discriminate a 50 degrees-60 degrees phase shift between the modulated envelopes for tones separated by more than a critical band. In experiment II, the modulation depth required to detect modulation of a probe carrier was measured in the presence of an amplitude-modulated masker. The threshold for detecting probe modulation was determined as a function of the separation between the masker and probe carriers, the phase difference between the masker and probe modulators, and masker modulation depth (in all conditions, the rate of probe and masker modulation was 10 Hz). The threshold for detecting probe modulation was raised substantially when the masker tone was also modulated. The results are consistent with theories suggesting that amplitude modulation helps form auditory objects from complex sound fields.
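The two stimulus sets of experiment I can be sketched as follows. This is a schematic Python illustration; the sample rate, duration, and the specific 55-degree shift are assumptions chosen to fall in the 50-60 degree threshold range the abstract reports, and the carrier and modulation-rate values follow the abstract's examples.

```python
import numpy as np

fs = 20000                           # sample rate, Hz (assumed)
t = np.arange(fs) / fs               # 1 s of samples
fm = 10.0                            # modulation rate, Hz
m = 1.0                              # modulation depth

def sam_tone(fc, phase_mod=0.0):
    """Sinusoidally amplitude-modulated tone with a modulator phase offset."""
    return (1 + m * np.sin(2 * np.pi * fm * t + phase_mod)) * np.sin(2 * np.pi * fc * t)

# 'Same-phase' complex: both carriers share the modulator phase.
same = sam_tone(1000.0) + sam_tone(4000.0)

# 'Shifted' complex: the upper carrier's modulator is advanced ~55 degrees,
# roughly the discrimination threshold reported for widely spaced carriers.
shifted = sam_tone(1000.0) + sam_tone(4000.0, phase_mod=np.deg2rad(55))
```

The listener's task in the experiment amounts to telling `same` from `shifted` as the carrier separation, depth, and rate are varied.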


Journal of the Acoustical Society of America | 1989

Modulation interference in detection and discrimination of amplitude modulation.

William A. Yost; Stanley Sheft; Janie Opie

Two experiments were conducted to assess the effect of the rate of sinusoidal amplitude modulation (SAM) of a masker tone on detection of SAM of a probe tone (experiment 1) or on SAM-rate discrimination for the probe tone (experiment 2). When modulated at the same rate as the probe, the masker interfered with both the detection of probe modulation and the discrimination of the rate of probe modulation. The interference was obtained when the masker was either higher or lower in frequency than the probe (the probe and masker were separated by 2 oct). The amount of interference in detecting probe modulation (experiment 1) decreased as the common base rate of modulation was increased from 5 to 200 Hz. For rate discrimination (experiment 2), the amount of interference remained approximately the same for base rates of 2-40 Hz, the range over which rate discrimination was measured. In both experiments, the amount of interference was reduced when the masker was modulated at a different rate than the probe.


Attention Perception & Psychophysics | 1996

A simulated cocktail party with up to three sound sources

William A. Yost; Raymond H. Dye; Stanley Sheft

Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.


Journal of the Acoustical Society of America | 1996

Responses of ventral cochlear nucleus units in the chinchilla to amplitude modulation by low‐frequency, two‐tone complexes

William P. Shofner; Stanley Sheft; Sandra J. Guzman

For a tone that is amplitude modulated by two tones (fmod1 and fmod2), neither the stimulus waveform nor the half-wave rectified waveform has spectral energy at the envelope beat frequency (fmod2-fmod1). The responses of ventral cochlear nucleus units in the chinchilla were recorded for best frequency tones that were amplitude modulated by low-frequency, two-tone complexes. Fourier analysis of poststimulus time histograms shows spectral peaks at fmod2-fmod1 in addition to the peaks at fmod1 and fmod2. The peaks in the neural spectra arise from compressive nonlinearities in the auditory system. The magnitudes of these spectral peaks are measures of synchrony at each frequency component. For all units, synchrony at fmod1 and fmod2 is greater than the synchrony at fmod2-fmod1. For a given unit, synchrony at fmod1 and fmod2 remains relatively constant as a function of overall level, whereas synchrony at fmod2-fmod1 decreases as the level increases. Synchrony was quantified in terms of the Rayleigh statistic (z), which is a measure of the statistical significance of the phase locking. In terms of z, phase locking at fmod1 and fmod2 is largest in chopper units, whereas onset-chopper units and primarylike units having sloping saturation in their rate-level functions show the smallest amount of phase locking. Phase locking at fmod2-fmod1 is also largest in chopper units, and smallest in onset-chopper units and primarylike units with sloping saturation.
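The key premise above, that the envelope itself has no energy at the beat frequency fmod2-fmod1 while a compressive nonlinearity introduces one, can be checked numerically. The sketch below uses a square-root compression as a generic stand-in for cochlear compression, and the modulator frequencies are illustrative, not the study's values.

```python
import numpy as np

fs = 5000
t = np.arange(fs) / fs                 # 1 s -> 1-Hz DFT resolution
f1, f2 = 40.0, 50.0                    # two-tone modulator frequencies (illustrative)
env = 1 + 0.4 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)

def spectral_mag(x, f):
    """Magnitude of the DFT bin at frequency f (1-Hz bins here)."""
    return np.abs(np.fft.rfft(x))[int(f)]

# The linear envelope has no component at the beat frequency f2 - f1 ...
linear_beat = spectral_mag(env - env.mean(), f2 - f1)

# ... but a compressive nonlinearity introduces one there.
compressed = np.sqrt(env)
compressed_beat = spectral_mag(compressed - compressed.mean(), f2 - f1)

print(linear_beat, compressed_beat)
```

The first value is numerically zero; the second is not, which is the same asymmetry the neural spectra show at fmod2-fmod1.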


Journal of the Acoustical Society of America | 1998

The role of the envelope in processing iterated rippled noise

William A. Yost; Roy D. Patterson; Stanley Sheft

Iterated rippled noise (IRN) is generated by a cascade of delay and add (the gain after the delay is 1.0) or delay and subtract (the gain is -1.0) operations. The delay and add/subtract operations impart a spectral ripple and a temporal regularity to the noise. The waveform fine structure is different in these two conditions, but the envelope can be extremely similar. Four experiments were used to determine conditions in which the processing of IRN stimuli might be mediated by the waveform fine structure or by the envelope. In experiments 1 and 3 listeners discriminated among three stimuli in a single-interval task: IRN stimuli generated with the delay and add operations (g = 1.0), IRN stimuli generated using the delay and subtract operations (g = -1.0), and a flat-spectrum noise stimulus. In experiment 2 the listeners were presented two IRN stimuli that differed in delay (4 vs 6 ms) and a flat-spectrum noise stimulus that was not an IRN stimulus. In experiments 1 and 2 both the envelope and waveform fine structure contained the spectral ripple and temporal regularity. In experiment 3 only the envelope had this spectral and temporal structure. In all experiments discrimination was determined as a function of high-pass filtering the stimuli, and listeners could discriminate between the two IRN stimuli up to frequency regions as high as 4000-6000 Hz. Listeners could discriminate the IRN stimuli from the flat-spectrum noise stimulus at even higher frequencies (as high as 8000 Hz), but these discriminations did not appear to depend on the pitch of the IRN stimuli. A control experiment (fourth experiment) suggests that IRN discriminations in high-frequency regions are probably not due entirely to low-frequency nonlinear distortion products. The results of the paper imply that pitch processing of IRN stimuli is based on the waveform fine structure.


Ear and Hearing | 2012

Effects of age and hearing loss on the relationship between discrimination of stochastic frequency modulation and speech perception.

Stanley Sheft; Valeriy Shafiro; Christian Lorenzi; Rachel McMullen; Caitlin Farrell

Objective: The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. The purpose of the present study was to use a clinically oriented approach to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design: Data were collected from 18 normal-hearing young adults, and 18 participants who were at least 60 years old, nine of whom had normal hearing and the remaining nine who had a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz low-pass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF) both in quiet and with a speech-babble masker present, stimulus duration, and signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech-perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results: Results showed a significant effect of age but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions: Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM.
Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may, in part, relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory-processing abilities.
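A stimulus of the kind described, a stochastic modulator derived from 5-Hz low-pass noise applied to a 1-kHz carrier with excursion ΔF, can be sketched in Python as follows. The sample rate, the excursion value, and the FFT-based band-limiting method are assumptions for illustration, not the study's implementation.

```python
import numpy as np

fs = 20000                       # sample rate, Hz (assumed)
n = fs                           # 1 s of samples -> 1-Hz FFT resolution
rng = np.random.default_rng(1)

# 5-Hz low-pass noise modulator, built by zeroing FFT bins above 5 Hz
# (one of several ways to realize the band limit).
spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[freqs > 5.0] = 0.0
mod = np.fft.irfft(spec, n)
mod /= np.max(np.abs(mod))       # normalize modulator to +/-1

fc = 1000.0                      # carrier frequency, Hz
dF = 20.0                        # frequency excursion, Hz (illustrative)
inst_freq = fc + dF * mod        # instantaneous frequency stays within fc +/- dF
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
stim = np.sin(phase)
```

Discrimination thresholds in the study correspond to finding the smallest `dF` (or the poorest signal-to-noise ratio) at which two independent draws of `mod` remain distinguishable.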


International Journal of Audiology | 2010

Perception of temporal fine-structure cues in speech with minimal envelope cues for listeners with mild-to-moderate hearing loss

Marine Ardoint; Stanley Sheft; Pierre Fleuriot; Stéphane Garnier; Christian Lorenzi

The contribution of temporal fine-structure (TFS) cues to consonant identification was compared for seven young adults with normal hearing and five young adults with mild-to-moderate hearing loss and flat, high- or low-frequency gently sloping audiograms. Nonsense syllables were degraded using two schemes (PM: phase modulation; FM: frequency modulation) designed to remove temporal envelope (E) cues while preserving TFS cues in 16 0.35-octave-wide frequency bands spanning the range of 80 to 8020 Hz. For both schemes, hearing-impaired listeners performed significantly above chance level (PM: 36%; FM: 31%; chance level: 6.25%), but more poorly than normal-hearing listeners (PM: 80%; FM: 65%). Three hearing-impaired listeners showed normal or near-normal reception of nasality information. These results indicate that for mild to moderate levels of hearing loss, cochlear damage reduces but does not abolish the ability to use the TFS cues of speech. The deficits observed for both schemes in hearing-impaired listeners suggest involvement of factors other than only poor reconstruction of temporal envelope from temporal fine structure.


Hearing Research | 1994

Modulation Detection Interference: Across-frequency processing and auditory grouping

William A. Yost; Stanley Sheft

Modulation Detection Interference (MDI) is the loss of sensitivity in processing amplitude modulation of a probe tone when a masker is similarly modulated. MDI was measured in four experiments to investigate two past claims concerning MDI: 1) that MDI represents across-spectral processing, and 2) that MDI is the consequence of the auditory system using common patterns of amplitude modulation to group spectral components into a single auditory source. Experiment I studied MDI when the envelope phases of the masker and probe modulators were different and was used to address the extent to which MDI is a consequence of spectral grouping based on common amplitude modulation. Measures of MDI for conditions in which the frequency separation between the probe and masker carriers was varied (Experiment II), estimates of modulation depth discrimination (Experiment III), and signal detection thresholds for brief sinusoidal signals masked by amplitude modulated tones (Experiment IV) were all used to address issues related to across-spectral processing of amplitude modulation. The conclusion of these studies is that MDI is largely an across-frequency phenomenon and that a role of auditory grouping based on a common pattern of modulation cannot be ruled out.


Journal of the Acoustical Society of America | 1990

A comparison among three measures of cross‐spectral processing of amplitude modulation with tonal signals

William A. Yost; Stanley Sheft

Results were obtained from three paradigms used to study cross-spectral processing of envelope modulation [comodulation masking release (CMR), comodulation detection difference (CDD), and modulation detection interference (MDI)]. When tonal carriers separated by two octaves (flanking tone at 1000 Hz and target tone at 4000 Hz) were amplitude modulated at 20 Hz, there was no evidence of a CMR or CDD effect, but there was substantial MDI.
