An-Chieh Chang
University of Wisconsin-Madison
Publications
Featured research published by An-Chieh Chang.
Journal of the Acoustical Society of America | 2012
Robert A. Lutfi; An-Chieh Chang; Jacob Stamas; Lynn Gilbertson
There has been growing interest in recent years in masking that appears to originate at a central level of the auditory nervous system, so-called informational masking (IM). Masker uncertainty and target-masker similarity have been identified as the two major factors affecting IM; however, no theoretical framework currently exists that would give these terms the precise meaning necessary to evaluate their relative importance or to model their effects. The present paper offers a first attempt at such a framework, constructed within the doctrines of the theory of signal detection.
Journal of the Acoustical Society of America | 2015
An-Chieh Chang; Robert A. Lutfi; Jungmee Lee
Stimulus uncertainty is known to critically affect auditory masking, but its influence on auditory streaming has been largely ignored. Standard ABA-ABA tone sequences were made increasingly uncertain by increasing the sigma of normal distributions from which the frequency, level, or duration of tones were randomly drawn. Consistent with predictions based on a model of masking by Lutfi, Gilbertson, Chang, and Stamas [J. Acoust. Soc. Am. 134, 2160-2170 (2013)], the frequency difference for which A and B tones formed separate streams increased as a linear function of sigma in tone frequency but was much less affected by sigma in tone level or duration.
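The stimulus manipulation described above, drawing each tone's parameters from normal distributions of increasing sigma, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function and parameter names (`jittered_aba_sequence`, `delta_cents`, and so on) are our assumptions, with frequency jitter applied in cents as in the study.

```python
import random

def jittered_aba_sequence(f_a=1000.0, delta_cents=600.0,
                          sigma_cents=200.0, n_triplets=4):
    """Return (label, frequency-in-Hz) pairs for an ABA-ABA sequence.

    Nominal A and B frequencies are separated by delta_cents; each
    tone's frequency is independently perturbed by a normal deviate
    with standard deviation sigma_cents (the uncertainty variable
    manipulated in the study).
    """
    def cents_to_hz(base_hz, cents):
        # 1200 cents = one octave
        return base_hz * 2 ** (cents / 1200.0)

    tones = []
    for _ in range(n_triplets):
        for label in ("A", "B", "A"):
            offset = delta_cents if label == "B" else 0.0
            jitter = random.gauss(0.0, sigma_cents)
            tones.append((label, cents_to_hz(f_a, offset + jitter)))
    return tones
```

Analogous jitter in level (dB) or duration (ms) would simply add the Gaussian deviate to those parameters directly rather than through the cents-to-Hz mapping.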
Journal of the Acoustical Society of America | 2014
An-Chieh Chang; Inseok Heo; Jungmee Lee; Christophe N. J. Stoelinga; Robert A. Lutfi
As the frequency separation of A and B tones in an ABAABA tone sequence increases, the tones are heard to split into separate auditory streams (fission threshold). The phenomenon is identified with our ability to ‘hear out’ individual sound sources in natural, multisource acoustic environments. One important difference, however, between natural sounds and the tone sequences used in most streaming studies is that natural sounds often vary unpredictably from one moment to the next. In the present study, fission thresholds were measured for ABAABA tone sequences made more or less predictable by sampling the frequencies, levels, or durations of the tones at random from normal distributions having different values of sigma (0–800 cents, 0–8 dB, and 0–40 ms, respectively, for frequency, level, and duration). Frequency variation on average had the greatest effect on threshold, but the function relating threshold to sigma was non-monotonic; first increasing, then decreasing for the largest value of sigma. Difference...
Trends in hearing | 2016
An-Chieh Chang; Robert A. Lutfi; Jungmee Lee; Inseok Heo
Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
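The information divergence DKL invoked here is the Kullback-Leibler divergence. For two univariate normal distributions, such as those from which the tone parameters are drawn, it has a closed form; a minimal sketch (the function name `kl_normal` is ours):

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """Kullback-Leibler divergence D(N(mu1, sigma1^2) || N(mu2, sigma2^2))
    for two univariate normal distributions."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)
```

When the two distributions share the same sigma (the symmetric case referred to in the abstract), the divergence is the same in either direction, which is the condition under which streaming and masking are predicted to fall on a common function of DKL.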
Journal of the Acoustical Society of America | 2016
An-Chieh Chang; Robert A. Lutfi; Jungmee Lee
Research on hearing has long been challenged with understanding our exceptional ability to “hear out” individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been specifically evaluated. The present paper offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The underlying premise of the analysis is that streaming is the auditory system's way of maximizing the likelihood that sounds from separate source...
Advances in Experimental Medicine and Biology | 2016
Jungmee Lee; Inseok Heo; An-Chieh Chang; Kristen Bond; Christophe N. J. Stoelinga; Robert A. Lutfi; Glenis R. Long
An unexpected finding of previous psychophysical studies is that listeners show highly replicable, individualistic patterns of decision weights on frequencies affecting their performance in spectral discrimination tasks--what has been referred to as individual listening styles. We, like many other researchers, have attributed these listening styles to peculiarities in how listeners attend to sounds, but we now believe they partially reflect irregularities in cochlear micromechanics modifying what listeners hear. The most striking evidence for cochlear irregularities is the presence of low-level spontaneous otoacoustic emissions (SOAEs) measured in the ear canal and the systematic variation in stimulus frequency otoacoustic emissions (SFOAEs), both of which result from back-propagation of waves in the cochlea. SOAEs and SFOAEs vary greatly across individual ears and have been shown to affect behavioural thresholds, behavioural frequency selectivity and judged loudness for tones. The present paper reports pilot data providing evidence that SOAEs and SFOAEs are also predictive of the relative decision weight listeners give to a pair of tones in a level discrimination task. In one condition the frequency of one tone was selected to be near that of an SOAE and the frequency of the other was selected to be in a frequency region for which there was no detectable SOAE. In a second condition the frequency of one tone was selected to correspond to an SFOAE maximum, the frequency of the other tone, an SFOAE minimum. In both conditions a statistically significant correlation was found between the average relative decision weight on the two tones and the difference in OAE levels.
MECHANICS OF HEARING: PROTEIN TO PERCEPTION: Proceedings of the 12th International Workshop on the Mechanics of Hearing | 2015
Christophe N. J. Stoelinga; Inseok Heo; Glenis R. Long; Jungmee Lee; Robert A. Lutfi; An-Chieh Chang
The human auditory system has a remarkable ability to “hear out” a wanted sound (target) against a background of unwanted sounds. One important property of sound that helps us hear out the target is inharmonicity. When a single harmonic component of a harmonic complex is slightly mistuned, that component is heard to separate from the rest. At high harmonic numbers, where components are unresolved, the harmonic segregation effect is thought to result from detection of modulation of the time envelope (roughness cue) resulting from the mistuning. Neurophysiological research provides evidence that such envelope modulations are represented early in the auditory system, at the level of the auditory nerve. When the mistuned harmonic is a low harmonic, where components are resolved, the harmonic segregation is attributed to more centrally located auditory processes, leading harmonic components to form a perceptual group heard separately from the mistuned component. Here we consider an alternative explanation that a...
Journal of the Acoustical Society of America | 2013
An-Chieh Chang; Robert A. Lutfi
Recent results from our lab show the masking of one tone sequence by another to be strongly related to the information divergence of sequences, a measure of statistical separation of signals [Gilbertson et al., POMA 19, 050028 (2013)]. The present study was undertaken to determine if the same relation holds for the auditory streaming of tone sequences. An adaptive procedure was used to measure thresholds for streaming of ABAABA tone sequences wherein the frequencies of the A and B tones varied independently of one another (r = 0) or covaried within the sequence (r = 1). The procedure adapted on the difference Δ in the mean frequencies of A and B tones (normally distributed in cents) with the mean frequency of A tones fixed at 1000 Hz. For most listeners, Δ increased monotonically with increases in the variance of the tone frequencies (σ = 0-800 cents), but did not differ significantly for r = 0 and r = 1. For other listeners, Δ was a nonmonotonic function of variance and differed for r = 0 and r = 1. The ...
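The r = 0 versus r = 1 manipulation (independent versus perfectly covarying A- and B-tone frequencies) can be illustrated with a shared versus independent random deviate. This is a sketch under assumed names, not the authors' code:

```python
import random

def draw_pair_offsets(mu_a, mu_b, sigma_cents, r):
    """Draw one A-tone and one B-tone frequency offset (in cents)
    that are either independent (r = 0) or perfectly covarying (r = 1).
    """
    z_a = random.gauss(0.0, 1.0)
    # r = 1 reuses the same standard deviate; r = 0 draws a fresh one
    z_b = z_a if r == 1 else random.gauss(0.0, 1.0)
    return mu_a + sigma_cents * z_a, mu_b + sigma_cents * z_b
```

Note that for r = 1 the A-B difference equals mu_b - mu_a on every draw, so only the mean separation varies across trials, whereas for r = 0 the momentary separation itself is random with variance 2 sigma^2.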
Journal of the Acoustical Society of America | 2013
Lynn Gilbertson; An-Chieh Chang; Jacob Stamas; Inseok Heo; Robert A. Lutfi
Informational masking (IM) is the term used to describe masking that appears to have its origin at some central level of the auditory nervous system beyond the cochlea. Supporting a central origin are the two major factors associated with IM: trial-by-trial uncertainty regarding the masker and perceived similarity of target and masker. Here preliminary evidence is provided suggesting these factors exert their influence through a single critical determinant of IM, the stochastic separation of target and masker given by Simpson-Fitter's da [Lutfi et al. (2012). J. Acoust. Soc. Am. 132, EL109-113.]. Target and maskers were alternating sequences of tones or words with frequencies (F0s for words) selected at random on each presentation. The listeners' task was to discriminate a frequency difference in the target tones or to identify the target words. Performance in both tasks was found to be constant across conditions in which the mean difference (similarity), variance (uncertainty), or covariance (similarity) of ta...
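Simpson and Fitter's da is commonly given as the mean separation of two distributions scaled by the root mean square of their standard deviations. Assuming that form (the exact expression used by Lutfi et al. may differ), a sketch:

```python
import math

def d_a(mu_target, sigma_target, mu_masker, sigma_masker):
    """Separation index d_a: mean difference between target and
    masker distributions, scaled by the RMS of their standard
    deviations (assumed form of the Simpson-Fitter statistic)."""
    return abs(mu_target - mu_masker) / math.sqrt(
        (sigma_target ** 2 + sigma_masker ** 2) / 2)
```

On this account, manipulations of mean difference, variance, or covariance that leave d_a unchanged should leave IM performance unchanged, which is the constancy reported above.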
Journal of the Acoustical Society of America | 2013
Inseok Heo; Lynn Gilbertson; An-Chieh Chang; Jacob Stamas; Robert A. Lutfi
Dichotic masking studies using noise are commonly referenced in regard to their implications for “cocktail party listening,” wherein target and maskers are speech. In the present study, masker decision weights (MDWs) are reported suggesting that speech and noise are processed differently in dichotic masking. The stimuli were words or Gaussian-noise bursts played in sequence as masker-target-masker triads. The apparent location of the words (noise bursts) from left to right was varied independently and at random on each presentation using KEMAR HRTFs. In the two-interval, forced-choice procedure, listeners were instructed to identify whether the second-interval target was to the left or right of the first. For wide spatial separations between target and masker, noise MDWs were typically negative, indicating that target location was judged relative to the masker. For small spatial separations between target and masker, noise MDWs were typically positive, suggesting that target location was more often confused with t...