Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Christophe N. J. Stoelinga is active.

Publication


Featured research published by Christophe N. J. Stoelinga.


Journal of the Acoustical Society of America | 2011

Auditory discrimination of force of impact

Robert A. Lutfi; Ching-Ju Liu; Christophe N. J. Stoelinga

The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogeneous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics, pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In a two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + αΔlog f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δlog f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
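
Since the decision rule above is given in closed form, a small simulation can make it concrete. The sketch below (an illustration under assumed perturbation distributions, not the authors' code) draws random trial-to-trial perturbations of the log-amplitude and log-frequency of one partial, applies the rule Δlog A + αΔlog f > 0 with α = 0.5, and then recovers the relative decision weight on Δlog f from the simulated trial-by-trial responses with a least-squares fit.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.5          # ideal relative weight on delta-log-frequency, from the equations of motion
    n_trials = 5000

    # Assumed trial-to-trial perturbations of one partial (log-amplitude and log-frequency).
    dlogA = rng.normal(0.0, 0.1, n_trials)
    dlogf = rng.normal(0.0, 0.1, n_trials)

    # Maximum-likelihood rule: choose "greater force" when dlogA + alpha * dlogf > 0.
    resp = (dlogA + alpha * dlogf > 0).astype(float)

    # Recover decision weights by regressing the responses on the two cues
    # (a rough stand-in for a trial-by-trial weight analysis).
    X = np.column_stack([dlogA, dlogf])
    w, *_ = np.linalg.lstsq(X - X.mean(axis=0), resp - resp.mean(), rcond=None)
    print("estimated relative weight on dlogf:", w[1] / w[0])   # close to alpha for this ideal observer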


Journal of the Acoustical Society of America | 2014

Factors affecting auditory streaming of random tone sequences

An-Chieh Chang; Inseok Heo; Jungmee Lee; Christophe N. J. Stoelinga; Robert A. Lutfi

As the frequency separation of A and B tones in an ABAABA tone sequence increases, the tones are heard to split into separate auditory streams (fission threshold). The phenomenon is identified with our ability to ‘hear out’ individual sound sources in natural, multisource acoustic environments. One important difference, however, between natural sounds and the tone sequences used in most streaming studies is that natural sounds often vary unpredictably from one moment to the next. In the present study, fission thresholds were measured for ABAABA tone sequences made more or less predictable by sampling the frequencies, levels, or durations of the tones at random from normal distributions having different values of sigma (0–800 cents, 0–8 dB, and 0–40 ms, respectively, for frequency, level, and duration). Frequency variation on average had the greatest effect on threshold, but the function relating threshold to sigma was non-monotonic, first increasing and then decreasing for the largest value of sigma. Difference...
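
The stimulus manipulation described above is straightforward to emulate. The sketch below generates a jittered ABAABA sequence in which each tone's frequency (in cents), level (in dB), and duration (in ms) are drawn from normal distributions with chosen sigmas; the base frequencies, tone timing, gap durations, and sampling rate are assumptions made only for this example.

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 44100                                   # sample rate (assumed)
    f_A, f_B = 1000.0, 1000.0 * 2 ** (6 / 12)    # A and B base frequencies, 6 semitones apart (assumed)
    sigma_cents, sigma_db, sigma_ms = 200.0, 4.0, 20.0   # example sigmas within the ranges studied

    def tone(freq_hz, level_db, dur_s):
        """One gated sinusoid with 10-ms raised-cosine ramps."""
        t = np.arange(int(dur_s * fs)) / fs
        x = 10 ** (level_db / 20) * np.sin(2 * np.pi * freq_hz * t)
        n_ramp = int(0.01 * fs)
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        x[:n_ramp] *= ramp
        x[-n_ramp:] *= ramp[::-1]
        return x

    seq = []
    for label in "ABAABA" * 4:                   # repeat the tone pattern
        base_f = f_A if label == "A" else f_B
        f = base_f * 2 ** (rng.normal(0, sigma_cents) / 1200)    # frequency jitter in cents
        lvl = -20 + rng.normal(0, sigma_db)                      # level jitter in dB re full scale
        dur = max(0.02, 0.06 + rng.normal(0, sigma_ms) / 1000)   # duration jitter in ms, floored
        seq.append(tone(f, lvl, dur))
        seq.append(np.zeros(int(0.02 * fs)))                     # short silent gap between tones (assumed)

    signal = np.concatenate(seq)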


Journal of the Acoustical Society of America | 2011

Modeling manner of contact in the synthesis of impact sounds for perceptual research.

Christophe N. J. Stoelinga; Robert A. Lutfi

Impact sounds synthesized according to a physical model have increasingly become the stimulus of choice in studies of sound source perception. Few studies, however, have incorporated manner of contact in their models because of the complexity of the mechanics involved. Here a simplified model of contact is described suitable for application to perceptual research. The results of the simplified model are shown to be in good agreement with those of more comprehensive numerical methods receiving prior acoustic validation [Chaigne and Lambourg, J. Acoust. Soc. Am. 109, 1422-1432 (2001)]. The advantages of the model for applications to perceptual research are discussed.
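
For readers unfamiliar with physical-model synthesis of impact sounds, a heavily simplified version of the general approach is sketched below: the struck object is represented by a handful of modes, each contributing an exponentially decaying sinusoid. The mode frequencies, decay times, and amplitudes here are placeholders chosen only to sound roughly bar-like; they are not parameters of the model described in the paper.

    import numpy as np

    def impact_sound(mode_freqs_hz, mode_decays_s, mode_amps, dur_s=1.0, fs=44100):
        """Sum of exponentially decaying sinusoids: a generic modal-synthesis impact."""
        t = np.arange(int(dur_s * fs)) / fs
        x = np.zeros_like(t)
        for f, tau, a in zip(mode_freqs_hz, mode_decays_s, mode_amps):
            x += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
        return x / np.max(np.abs(x))

    # Placeholder modes spaced roughly like those of a free bar (illustrative values only).
    y = impact_sound([440, 1214, 2376, 3929], [0.8, 0.5, 0.3, 0.2], [1.0, 0.6, 0.4, 0.25])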


Advances in Experimental Medicine and Biology | 2016

Individual Differences in Behavioural Decision Weights Related to Irregularities in Cochlear Mechanics

Jungmee Lee; Inseok Heo; An-Chieh Chang; Kristen Bond; Christophe N. J. Stoelinga; Robert A. Lutfi; Glenis R. Long

An unexpected finding of previous psychophysical studies is that listeners show highly replicable, individualistic patterns of decision weights on frequencies affecting their performance in spectral discrimination tasks--what has been referred to as individual listening styles. We, like many other researchers, have attributed these listening styles to peculiarities in how listeners attend to sounds, but we now believe they partially reflect irregularities in cochlear micromechanics modifying what listeners hear. The most striking evidence for cochlear irregularities is the presence of low-level spontaneous otoacoustic emissions (SOAEs) measured in the ear canal and the systematic variation in stimulus frequency otoacoustic emissions (SFOAEs), both of which result from back-propagation of waves in the cochlea. SOAEs and SFOAEs vary greatly across individual ears and have been shown to affect behavioural thresholds, behavioural frequency selectivity and judged loudness for tones. The present paper reports pilot data providing evidence that SOAEs and SFOAEs are also predictive of the relative decision weight listeners give to a pair of tones in a level discrimination task. In one condition the frequency of one tone was selected to be near that of an SOAE and the frequency of the other was selected to be in a frequency region for which there was no detectable SOAE. In a second condition the frequency of one tone was selected to correspond to an SFOAE maximum, the frequency of the other tone, an SFOAE minimum. In both conditions a statistically significant correlation was found between the average relative decision weight on the two tones and the difference in OAE levels.


MECHANICS OF HEARING: PROTEIN TO PERCEPTION: Proceedings of the 12th International Workshop on the Mechanics of Hearing | 2015

Possible role of cochlear nonlinearity in the detection of mistuning of a harmonic component in a harmonic complex

Christophe N. J. Stoelinga; Inseok Heo; Glenis R. Long; Jungmee Lee; Robert A. Lutfi; An-Chieh Chang

The human auditory system has a remarkable ability to “hear out” a wanted sound (target) in the background of unwanted sounds. One important property of sound that helps us hear out the target is inharmonicity. When a single harmonic component of a harmonic complex is slightly mistuned, that component is heard to separate from the rest. At high harmonic numbers, where components are unresolved, the harmonic segregation effect is thought to result from detection of modulation of the time envelope (roughness cue) resulting from the mistuning. Neurophysiological research provides evidence that such envelope modulations are represented early in the auditory system, at the level of the auditory nerve. When the mistuned harmonic is a low harmonic, where components are resolved, the harmonic segregation is attributed to more centrally-located auditory processes, leading harmonic components to form a perceptual group heard separately from the mistuned component. Here we consider an alternative explanation that a...
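
The roughness cue mentioned above can be demonstrated directly: mistuning one component of a harmonic complex adds a slow fluctuation to the temporal envelope at roughly the size of the frequency offset. The sketch below builds a 12-component complex, mistunes the 10th harmonic upward by 3%, and extracts the envelope from the analytic signal; the fundamental, mistuning amount, and number of components are illustrative choices.

    import numpy as np
    from scipy.signal import hilbert

    fs, dur = 44100, 0.5
    t = np.arange(int(dur * fs)) / fs
    f0 = 200.0                          # fundamental frequency (assumed)
    mistuned_n, mistuning = 10, 0.03    # mistune the 10th harmonic upward by 3% (assumed)

    x = np.zeros_like(t)
    for n in range(1, 13):
        f = n * f0 * (1.0 + mistuning if n == mistuned_n else 1.0)
        x += np.sin(2 * np.pi * f * t)

    # Temporal envelope: besides the f0-rate periodicity, a slow fluctuation appears at
    # roughly mistuned_n * f0 * mistuning Hz (about 60 Hz here), the roughness cue.
    envelope = np.abs(hilbert(x))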


Journal of the Acoustical Society of America | 2015

Cochlear fine structure predicts behavioral decision weights in a multitone level discrimination task

Jungmee M. Lee; Glenis R. Long; Inseok Heo; Christophe N. J. Stoelinga; Robert A. Lutfi

Listeners show highly replicable, idiosyncratic patterns of decision weights across frequency affecting their performance in multi-tone level discrimination tasks. The different patterns are attributed to peculiarities in how listeners attend to sounds. However, evidence is presented in the current study that they reflect individual differences in cochlear micromechanics, which can be evaluated using otoacoustic emissions (OAEs). Spontaneous OAEs (SOAEs) and the fine structure of stimulus-frequency OAEs (SFOAEs) were measured in a group of normal-hearing listeners. The same group of listeners performed a two-tone, sample-level discrimination task wherein the frequency of one tone was selected to correspond to an SOAE and the other was selected well away from an SOAE. Tone levels were either 50 or 30 dB SPL. The relative decision weight of the two tones for each listener and condition was estimated from a standard COSS analysis of the trial-by-trial data [Berg (1989), J. Acoust. Soc. Am. 86, 1743–1746]. A st...
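
As a rough illustration of how a relative decision weight can be recovered from trial-by-trial data, the sketch below simulates a listener who weights the level perturbations of two tones unequally and then estimates those weights by correlating each tone's perturbation with the interval choice. This regression-style estimate is only in the spirit of a COSS analysis, not Berg's exact procedure, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 2000
    true_w = np.array([0.8, 0.2])    # simulated listener's weights on the two tones (assumed)

    # Level perturbations (dB) of the two tones on each trial, plus internal noise in the decision.
    dL = rng.normal(0.0, 2.0, (n_trials, 2))
    decision = (dL @ true_w + rng.normal(0.0, 1.0, n_trials)) > 0

    # Simplified estimate: correlate each tone's perturbation with the binary decision, then normalize.
    w_hat = np.array([np.corrcoef(dL[:, k], decision.astype(float))[0, 1] for k in range(2)])
    w_hat /= w_hat.sum()
    print("estimated relative weights:", w_hat)   # approaches true_w with enough trials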


Journal of the Acoustical Society of America | 2011

Evaluating models of spectral density discrimination.

Christophe N. J. Stoelinga; Robert A. Lutfi

Spectral density, defined as the number of partials comprising a sound divided by its bandwidth, has been suggested as a cue for the identification of the size and shape of sound sources [Lutfi, Auditory Perception of Sound Sources (Springer‐Verlag, New York, 2008), p. 23]. Two models have been proposed for the discrimination of spectral density. One assumes that the cue for discrimination is a change in the number of resolved partials, NRP [Stoelinga and Lutfi, Assoc. Res. Otolaryngology 31, 310 (2008)]. The other assumes that the cue is a change in the rate of power fluctuations in the sound, RPF [Hartmann et al., J. Acoust. Soc. Am. 79, 1915–1925 (1986)]. The two models were tested in separate experiments using a two‐interval, forced‐choice procedure for measuring spectral density discrimination. In the first experiment, NRP was varied from trial‐to‐trial, independently of spectral density, by clustering partials into unresolved groups. In the second experiment, RPF was varied from trial‐to‐trial by ma...
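
One way to make the NRP cue concrete is to count how many partials sit at least one equivalent rectangular bandwidth (ERB) away from their nearest neighbor, using the Glasberg and Moore (1990) ERB formula. The one-ERB criterion and the example stimulus below are assumptions made for illustration, not the definitions used in the cited work.

    import numpy as np

    def erb_hz(f_hz):
        """Equivalent rectangular bandwidth (Glasberg & Moore, 1990) in Hz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def n_resolved_partials(freqs_hz, criterion_erb=1.0):
        """Count partials whose nearest neighbor is at least criterion_erb ERBs away (assumed criterion)."""
        f = np.sort(np.asarray(freqs_hz, dtype=float))
        gaps = np.diff(f)
        left = np.r_[np.inf, gaps]          # gap to the lower neighbor of each partial
        right = np.r_[gaps, np.inf]         # gap to the upper neighbor of each partial
        nearest = np.minimum(left, right)
        return int(np.sum(nearest / erb_hz(f) >= criterion_erb))

    # Example: 20 partials drawn uniformly over a 1000-Hz band (spectral density D = 0.02 Hz^-1).
    rng = np.random.default_rng(3)
    freqs = rng.uniform(500.0, 1500.0, 20)
    print("resolved partials:", n_resolved_partials(freqs))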


Journal of the Acoustical Society of America | 2011

Discrimination of the spectral density of multitone complexes

Christophe N. J. Stoelinga; Robert A. Lutfi

Spectral density (D), defined as the number of partials comprising a sound divided by its bandwidth, has been suggested as a cue for the identification of the size and shape of sound sources. Few data are available, however, on the ability of listeners to discriminate differences in spectral density. In a cue-comparison, forced-choice procedure with feedback, three highly practiced listeners discriminated differences in the spectral density of multitone complexes varying in bandwidth (W = 500-1500 Hz), center frequency (fc = 500-2000 Hz), and number of tones (N = 6-31). To reduce extraneous cues for discrimination, the overall level of the complexes was roved, and the frequencies were drawn at random uniformly over a fixed bandwidth and center frequency for each presentation. Psychometric functions were obtained relating percent correct discrimination to ΔD in each condition. For D < 0.02 Hz^-1, the steepness of the functions remained constant across conditions, but for D > 0.02 Hz^-1, the steepness increased with D. The increase, moreover, was accompanied by a reduction in the upper asymptote of the functions. The data were well fit by a model in which spectral density discrimination is determined by the frequency separation of components on an equivalent rectangular bandwidth scale, yielding a roughly constant Weber fraction of ΔD/D = 0.3.
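
The stimulus construction described above can be emulated in a few lines. The sketch below draws N component frequencies uniformly over a band of width W centered at fc, assigns random starting phases, and roves the overall level from presentation to presentation; the rove range, phases, and duration are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def multitone_complex(n_tones, center_hz, bandwidth_hz, dur_s=0.3, fs=44100, rove_db=10.0):
        """Equal-amplitude multitone complex: random frequencies in the band, random phases, roved level."""
        t = np.arange(int(dur_s * fs)) / fs
        lo, hi = center_hz - bandwidth_hz / 2, center_hz + bandwidth_hz / 2
        freqs = rng.uniform(lo, hi, n_tones)
        phases = rng.uniform(0, 2 * np.pi, n_tones)
        x = np.sum(np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)
        rove = 10 ** (rng.uniform(-rove_db / 2, rove_db / 2) / 20)   # overall level rove (assumed +/- 5 dB)
        return rove * x / n_tones

    # Example near the middle of the reported ranges: W = 1000 Hz, fc = 1000 Hz, N = 20 (D = 0.02 Hz^-1).
    x = multitone_complex(20, 1000.0, 1000.0)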


Journal of the Acoustical Society of America | 2010

Molecular analysis of the effect of onset asynchrony in the identification of a rudimentary sound source in noise.

Robert A. Lutfi; Ching-Ju Liu; Christophe N. J. Stoelinga

The threshold for detecting a target in noise is often greater when the target is gated on simultaneously with the noise than when it is gated on after some delay [Zwicker, E. (1965). J. Acoust. Soc. Am. 37, 653–663]. One explanation is that the perceptual principle of grouping causes the target and noise with simultaneous onsets to be perceived as a single sound source. This idea was tested for a rudimentary sound source using perturbation analysis. In a two‐interval, forced‐choice procedure listeners identified as target the impact sound produced by the larger of two stretched membranes. The noise on each presentation was the impact sound of a variable‐sized plate. Grouping predicts that the decision weights on the noise should be positive when target and noise have simultaneous onsets, but that they should approach zero when target and noise are gated on asynchronously. This prediction was confirmed when the noise preceded the target by a fixed interval (100 ms), but not when it followed the target by ...


Journal of the Acoustical Society of America | 2010

A method for including manner of contact in the synthesis of impact sounds for perceptual research.

Christophe N. J. Stoelinga; Robert A. Lutfi

Impact sounds synthesized according to a physical model have increasingly become the stimulus of choice in studies of sound source perception. Few studies, however, have incorporated manner of contact in their models because of the many parameters entailed in its description. Based on the work of Zener [Phys. Rev. 669–673 (1941)], we show that the seven Hertzian parameters required to completely describe the contact can be reduced to just three: force of contact, duration of contact (tc), and a non-dimensional parameter (λ). Using only these parameters for the contact, a simplified method is presented to synthesize the impact sounds of simply supported plates. The results of the method are shown to be in good agreement with those of the more comprehensive method of Chaigne and Lambourg [J. Acoust. Soc. Am. 109, 1422–1432 (2001)], the only such method to receive validation through acoustic measurement of plates. Psychometric functions obtained from listeners for the discrimination of tc and λ also show these para...
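
The excerpt does not give the functional form of the simplified contact model, but the idea of parameterizing the excitation by a peak force and a contact duration can be illustrated with a half-sine force pulse, a common textbook stand-in for a Hertz-like contact; it is used here only as an illustration, and the non-dimensional parameter λ is left out because its role is not specified above.

    import numpy as np

    def contact_force(peak_force_n, t_c, fs=44100):
        """Half-sine force pulse of duration t_c: a stand-in for a simplified contact interaction."""
        t = np.arange(int(t_c * fs)) / fs
        return peak_force_n * np.sin(np.pi * t / t_c)

    # Example: 1 N peak force, 2-ms contact; such a pulse would drive a modal model of the plate.
    F = contact_force(1.0, 0.002)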

Collaboration


Dive into Christophe N. J. Stoelinga's collaboration.

Top Co-Authors

Robert A. Lutfi (University of Wisconsin-Madison)
Ching-Ju Liu (University of Wisconsin-Madison)
Inseok Heo (University of Wisconsin-Madison)
An-Chieh Chang (University of Wisconsin-Madison)
Glenis R. Long (City University of New York)
Jungmee Lee (Northwestern University)
Kristen Bond (University of Wisconsin-Madison)