Publications


Featured research published by Guido F. Smoorenburg.


Journal of the Acoustical Society of America | 1992

Speech reception in quiet and in noisy conditions by individuals with noise‐induced hearing loss in relation to their tone audiogram

Guido F. Smoorenburg

Tone thresholds and speech-reception thresholds were measured in 200 individuals (400 ears) with noise-induced hearing loss. The speech-reception thresholds were measured in a quiet condition and in noise with a speech spectrum at levels of 35, 50, 65, and 80 dBA. The tone audiograms could be described by three principal components: hearing loss in the regions above 3 kHz, from 1 to 3 kHz, and below 1 kHz; the speech thresholds could be described by two components: speech reception in quiet and speech reception in noise at 50-80 dBA. Hearing loss above 1 kHz was related to speech reception in noise; hearing loss at and below 1 kHz to speech reception in quiet. The correlation between the speech thresholds in quiet and in noise was only R = 0.45. An adequate predictor of the speech threshold in noise, the primary factor in the hearing handicap, was the pure-tone average at 2 and 4 kHz (PTA2,4, R = 0.72). The minimum value of the prediction error for any tone-audiometric predictor of this speech threshold was 1.2 dB (standard deviation). The prediction could not be improved by taking into account the critical ratio for low-frequency noise or the upward spread of masking. The prediction error is due to measurement error and to a factor common to both ears; the latter factor is ascribed to cognitive skill in speech reception. Hearing loss above 10 to 15 dB HL (hearing level) already shows an effect on the speech threshold in noise; a noticeable handicap is found at PTA2,4 = 30 dB HL.
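The predictor PTA2,4 mentioned above is simple arithmetic over the tone audiogram. The sketch below shows the computation; only the averaging over 2 and 4 kHz and the 30 dB HL handicap criterion come from the abstract, while the example audiogram values, function name, and data layout are illustrative assumptions.

```python
def pta24(audiogram_db_hl):
    """Pure-tone average over 2 and 4 kHz (PTA2,4), in dB HL.

    audiogram_db_hl: dict mapping frequency in Hz to threshold in dB HL.
    """
    return (audiogram_db_hl[2000] + audiogram_db_hl[4000]) / 2.0

# Hypothetical audiogram with a typical noise-induced high-frequency loss.
audiogram = {500: 10, 1000: 15, 2000: 25, 3000: 45, 4000: 55, 8000: 50}

pta = pta24(audiogram)
print(f"PTA2,4 = {pta:.1f} dB HL")
# Per the abstract, a noticeable handicap (elevated speech threshold in noise)
# is found at roughly PTA2,4 = 30 dB HL.
print("noticeable handicap expected" if pta >= 30 else "below the 30 dB HL criterion")
```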


Journal of the Acoustical Society of America | 1972

Combination Tones and Their Origin

Guido F. Smoorenburg

Combination tones corresponding to f1 − k (f2 − f1), with k a small positive integer, are often audible during stimulation by the two frequency components f1 and f2 (f1 < f2). In this paper primarily the cubic difference tone (CDT) 2f1 − f2 (k = 1) is studied. The research reported is directed at the type of nonlinearity generating the CDT, and at the site of CDT generation. The first experiment shows that the generation of the CDT is affected by a dip (a threshold elevation in a narrow frequency region). The CDT was perceived only when the level of f2 exceeded the elevated threshold. In the second experiment the cancellation method is used. The results suggest that the high‐frequency slope of the pattern of stimulation upon which the nonlinearity operates is comparable with the slope revealed in masking experiments. In the last experiments the cancellation method is reconsidered. Estimates of the CDT level found by various other measuring methods, in which the probe tone was presented nonsimultaneously w...
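The family of combination tones f1 − k(f2 − f1) named in the abstract is easy to tabulate. A minimal sketch follows; the frequency values are arbitrary examples, not stimuli from the study.

```python
def combination_tones(f1, f2, k_max=4):
    """Frequencies of combination tones f1 - k*(f2 - f1) for k = 1..k_max."""
    assert f1 < f2
    return {k: f1 - k * (f2 - f1) for k in range(1, k_max + 1)}

f1, f2 = 1000.0, 1200.0          # Hz, with f1 < f2 (illustrative values)
tones = combination_tones(f1, f2)
print(tones[1])                  # cubic difference tone 2*f1 - f2 = 800.0 Hz
print(tones)                     # {1: 800.0, 2: 600.0, 3: 400.0, 4: 200.0}
```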


Journal of the Acoustical Society of America | 1970

Pitch Perception of Two‐Frequency Stimuli

Guido F. Smoorenburg

Experiments on the pitch of complex tones produced by two frequency components are reported. An exploratory experiment revealed that subjects perceive the pitches of individual part‐tones or the stimulus as a whole with a pitch corresponding to about the fundamental frequency. The latter pitch was investigated more thoroughly. In case of adjacent harmonics, the pitch corresponded to the (absent) fundamental frequency. A shift of the frequencies away from such a harmonic situation while maintaining a constant frequency difference resulted in a pitch shift. For higher harmonic numbers, the pitch shift was larger than could be met by current theories. The large pitch shift was explained by taking into account an auditory nonlinearity which generates combination tones of the type f1−k(f2−f1). Sound‐pressure level dependence of the pitch shift could be explained in the same manner. When the combination tones were masked, the large pitch shift diminished. With regard to the pitch mechanism, the results suggeste...
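A short worked example, assuming the two components are shifted harmonics of a missing fundamental (a framing not spelled out in the abstract itself), shows why these combination tones can move the perceived pitch:

```latex
% Assume the components are shifted harmonics of a missing fundamental f_0:
\[
f_1 = n f_0 + \Delta, \qquad f_2 = (n+1) f_0 + \Delta, \qquad f_2 - f_1 = f_0 .
\]
% The combination tones named in the abstract are then
\[
f_1 - k\,(f_2 - f_1) = (n - k)\, f_0 + \Delta ,
\]
% i.e., lower members of the same shifted series; they carry the same shift
% \Delta and can therefore influence the residue pitch of the complex.
```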


Journal of the Acoustical Society of America | 1972

Audibility Region of Combination Tones

Guido F. Smoorenburg

This paper describes research on combination tones of the type f1 − k (f2 − f1). Such tones are audible during stimulation by the two frequency components f1 and f2, with f1 < f2. The results of the experiments suggest that these combination tones are audible only in a restricted frequency region below f1. The lower limit of this “audibility region,” for the combination tone 2f1 − f2 (k = 1), was determined primarily by the level of the lower stimulus component L1 and additionally by L2. The width of the audibility region appeared to be highly subject dependent. The lowest stimulus level for which 2f1 − f2 was audible was 15–20 dB SL if both stimulus components had equal levels. If the lower component was set at 40 dB, then the higher one could be reduced to 4 dB before 2f1 − f2 reached threshold. Higher‐order combination tones could be perceived up to k = 5 or 6. At 40 dB SL, the lower limit of the audibility region was approximately the same regardless of the order (i.e., independent of k). The width of ...


Journal of the Acoustical Society of America | 1976

Correlates of combination tones observed in the response of neurons in the anteroventral cochlear nucleus of the cat

Guido F. Smoorenburg; Mary Morton Gibson; Leonard M. Kitzes; Jerzy E. Rose; Joseph E. Hind

Neurons in the anteroventral cochlear nucleus of the cat respond to combination tones of the forms f2−f1 and f1−n (f2−f1), where n is a small positive integer (1, 2, 3, ...). The most easily observed combination tones are f2−f1 and 2f1−f2. In general, a combination tone is effective if three conditions are fulfilled: (1) the combination‐tone frequency must fall within the pure‐tone response area of the neuron; (2) the intensity levels of the primaries must be appropriate; and (3) the separation of the primary frequencies cannot be unduly large. For any form of combination tone, a combination‐tone response area could be plotted by fixing f1 at some level and varying f2 in small steps. The actual frequency of the combination tone could be determined from the timing of the discharges for all neurons whose discharges are phase locked. The combination‐tone response areas indicate that the response to a given form of combination tone is optimal when the combination frequency is at or near the best frequency of the...


Journal of the Acoustical Society of America | 1993

A model for context effects in speech recognition

Adelbert W. Bronkhorst; Arjan J. Bosman; Guido F. Smoorenburg

A model is presented that quantifies the effect of context on speech recognition. In this model, a speech stimulus is considered as a concatenation of a number of equivalent elements (e.g., phonemes constituting a word). The model employs probabilities that individual elements are recognized and chances that missed elements are guessed using contextual information. Predictions are given of the probability that the entire stimulus, or part of it, is reproduced correctly. The model can be applied to both speech recognition and visual recognition of printed text. It has been verified with data obtained with syllables of the consonant-vowel-consonant (CVC) type presented near the reception threshold in quiet and in noise, with the results of an experiment using orthographic presentation of incomplete CVC syllables and with results of word counts in a CVC lexicon. A remarkable outcome of the analysis is that the cues which occur only in spoken language (e.g., coarticulatory cues) seem to have a much greater influence on recognition performance when the stimuli are presented near the threshold in noise than when they are presented near the absolute threshold. Demonstrations are given of further predictions provided by the model: word recognition as a function of signal-to-noise ratio, closed-set word recognition, recognition of interrupted speech, and sentence recognition.
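As a rough illustration of the kind of prediction such a model makes, consider a simplified version in which each of the n elements is recognized independently with probability p and each missed element is guessed correctly from context with probability c. This is a sketch of the idea only, not the published parameterization; the function name and numeric values are assumptions.

```python
def word_correct_probability(n, p, c):
    """Probability that an n-element stimulus (e.g., a CVC word, n = 3) is
    reproduced entirely correctly, assuming each element is recognized
    independently with probability p and each missed element is guessed
    correctly from context with probability c.  Each element then ends up
    correct with probability p + (1 - p) * c.  Simplified illustration only."""
    return (p + (1 - p) * c) ** n

print(round(word_correct_probability(3, 0.7, 0.4), 3))  # with context: ~0.551
print(round(word_correct_probability(3, 0.7, 0.0), 3))  # without context: 0.343
```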


Journal of the Acoustical Society of America | 1994

Viseme classifications of Dutch consonants and vowels

Nic van Son; Tirtsa M.I. Huiskamp; Arjan J. Bosman; Guido F. Smoorenburg

Videotaped lists of meaningless Dutch syllables were presented in quiet to four subject groups, differing with respect to their knowledge of and experience with lipreading (lipreading expertise). Syllables consisted of all Dutch consonants within three vowel contexts, and of all Dutch vowels within four consonant contexts. Three speakers pronounced all syllable lists. The aim of the research was (1) to establish viseme classifications of Dutch vowels and consonants; (2) to interpret the visual‐perceptual dimensions underlying this classification and relate them to acoustic‐phonetic parameters; (3) to establish the effect of lipreading expertise on the classification of visually similar phonemes (visemes). In general, viseme classification proved very constant with different subject groups: Lipreading expertise is not related to viseme recognition. Important visual features in consonant lipreading are lip articulation, degree of oral cavity opening, and place of articulation, leading to the following visem...


Journal of the Acoustical Society of America | 1990

What type of force does the cochlear amplifier produce?

Paul J. Kolston; Egbert de Boer; Max A. Viergever; Guido F. Smoorenburg

Recent experimental measurements suggest that the mechanical displacement of the basilar membrane (BM) near threshold in a viable mammalian cochlea is greater than 10⁻⁸ cm, for a stimulus sound pressure at the eardrum of 20 μPa. The associated response peak is very sensitive to the physiological condition of the cochlea. In the formulation of all recent cochlear models, it has been explicitly assumed that this peak is produced by the cochlear amplifier injecting a large amount of energy into the cochlea, thereby altering the real component of the BM impedance. In this paper, a new cochlear model is described which produces a realistic response by assuming that the cochlear amplifier force acts at a phase such that the main effect is to reduce the imaginary component of the BM impedance. In this new model, the magnitude of the cochlear amplifier force required to produce a realistic response is much smaller than in the previous models. It is suggested that future experimental investigations should attempt to determine both the magnitude and the phase of the forces associated with the cochlear amplifier.
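For orientation, the distinction drawn in the abstract can be phrased in standard impedance notation; the notation below is generic, not taken from the paper.

```latex
% Generic decomposition of the basilar-membrane impedance:
\[
Z_{\mathrm{BM}}(\omega) \;=\; R(\omega) \,+\, i\,X(\omega)
\]
% Earlier models alter the resistive part R by injecting energy into the
% cochlea; the model described above instead lets the amplifier force act at
% a phase that mainly reduces the reactive part X near the response peak.
```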


Journal of the Acoustical Society of America | 1992

Signal detection in temporally modulated and spectrally shaped maskers

Willem A. C. van den Brink; Tammo Houtgast; Guido F. Smoorenburg

The first part of this paper presents several experiments on signal detection in temporally modulated noise, yielding a general approach toward the concept of comodulation masking release (CMR). Measurements were made on masked thresholds of both long- and short-duration, narrow-band signals presented in a 100% sinusoidally amplitude-modulated (SAM) noise masker (modulation frequency 32 Hz), as a function of masker bandwidth from 1/3 oct up to 13/3 oct, while the masker band was geometrically centered at the signal frequency. With the short-duration signals placed in the valley of the masker, a substantial CMR (i.e., a decrease of masked threshold with increasing masker bandwidth) was found, whereas for the long-duration signals CMR was smaller. Furthermore, investigations were carried out to determine whether CMR changes when the bandwidth of the signals, consisting of bandpass impulse responses, is increased. The data indicate that substantial CMR remains even when all masker bands contain a signal component, thus minimizing across-channel differences. This finding is not in line with current models accounting for the CMR phenomenon. The second part of this paper concerns signal detection in spectrally shaped noise. Here it was investigated whether release from masking occurs for the detection of a pure-tone signal at a valley or a peak of a simultaneously presented masking noise with a sinusoidally rippled power spectrum, when this masker was preceded and followed by a second noise (temporal flanking burst) with a spectral shape identical to that of the on-signal noise. Similar to CMR effects for temporal modulations, the data indicate that coshaping masking release (CSMR) occurs when the signal is placed in a valley of the spectral envelope of the masker, whereas no release from masking is found when the signal is placed at a peak of the spectral envelope of the masker. The implications of these experiments for measures of spectral and temporal resolution are discussed.
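A minimal sketch of how a 100% sinusoidally amplitude-modulated noise carrier of the kind described above could be synthesized; only the 32 Hz modulation frequency and the 100% modulation depth come from the abstract, while the sampling rate, duration, and the omitted band-limiting are assumptions.

```python
import numpy as np

fs = 44100                 # sampling rate in Hz (assumed, not from the study)
dur = 1.0                  # duration in seconds (assumed)
fm = 32.0                  # modulation frequency, from the abstract
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(0)
carrier = rng.standard_normal(t.size)        # broadband Gaussian noise carrier

# 100% sinusoidal amplitude modulation: the envelope swings between 0 and 2.
envelope = 1.0 + np.sin(2.0 * np.pi * fm * t)
sam_noise = envelope * carrier

# In the experiments the masker was additionally band-limited (e.g., 1/3 oct
# geometrically centered at the signal frequency); that filtering step is
# omitted in this sketch.
```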


Journal of the Acoustical Society of America | 1969

Proposed Explanation of Synchrony of Auditory‐Nerve Impulses to Combination Tones

E. de Boer; P. Kuyper; Guido F. Smoorenburg

When the acoustic stimulus consists of two sinusoidal components (frequencies f1 and f2), the impulses of a single auditory‐nerve fiber can show partial synchrony with either of these components (provided the frequencies stay below approximately 5 kHz). Often a synchrony with an externally generated combination tone of frequency 2f1−f2 can be detected as well. It is shown in this letter that such behavior is a logical consequence of the assumption that nerve impulses are elicited at the peaks of the stimulus waveform. In addition, it is demonstrated that only special combination tones will show this phenomenon. Many of the experimental results on this type of synchrony can be explained in this way. It is thus not necessary to assume that a special physiological mechanism is responsible for the observed synchrony per se. But deviations from the basic properties derived here should be observed closely, because these do give useful indications about cochlear physiology.
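A toy simulation of the peak-picking assumption described above; the stimulus frequencies, the vector-strength measure, and all numeric choices are illustrative assumptions, not the letter's analysis.

```python
import numpy as np

fs = 100_000                          # sampling rate, Hz (assumed)
t = np.arange(int(0.5 * fs)) / fs     # 0.5 s of signal (assumed)
f1, f2 = 1000.0, 1200.0               # stimulus components, f1 < f2 (illustrative)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Assume a nerve impulse is elicited at every positive peak of the waveform.
mid = x[1:-1]
is_peak = (mid > x[:-2]) & (mid >= x[2:]) & (mid > 0)
spike_times = t[1:-1][is_peak]

def vector_strength(times, freq):
    """Degree of phase locking of the spike times to a given frequency (0..1)."""
    return abs(np.mean(np.exp(2j * np.pi * freq * times)))

for label, f in [("f1", f1), ("f2", f2), ("2f1-f2", 2 * f1 - f2), ("control", 770.0)]:
    print(f"{label:8s} {f:6.0f} Hz  vector strength = {vector_strength(spike_times, f):.2f}")
# Peak picking alone yields partial synchrony at the combination frequency
# 2f1 - f2, but not at an arbitrary control frequency.
```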

Collaboration


Dive into Guido F. Smoorenburg's collaborations.

Top Co-Authors

Paul J. Kolston, Delft University of Technology
Max A. Viergever, Delft University of Technology
P. Kuyper, University of Amsterdam
Jerzy E. Rose, University of Wisconsin-Madison
Joseph E. Hind, University of Wisconsin-Madison