Ashley Assgari
University of Louisville
Publications
Featured research published by Ashley Assgari.
Journal of the Acoustical Society of America | 2017
Christian E. Stilp; Ashley Assgari
When spectral properties differ across successive sounds, this difference is perceptually magnified, resulting in spectral contrast effects (SCEs). Recently, Stilp, Anderson, and Winn [(2015) J. Acoust. Soc. Am. 137(6), 3466-3476] revealed that SCEs are graded: more prominent spectral peaks in preceding sounds produced larger SCEs (i.e., category boundary shifts) in categorization of subsequent vowels. Here, a similar relationship between spectral context and SCEs was replicated in categorization of voiced stop consonants. By generalizing this relationship across consonants and vowels, different spectral cues, and different frequency regions, acute and graded sensitivity to spectral context appears to be pervasive in speech perception.
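A minimal sketch of how such category boundary shifts can be quantified, using fabricated responses rather than the study's data: fit a logistic function of continuum step for each context condition and take the 50% crossover point as the category boundary; the SCE magnitude is the difference between boundaries.

import numpy as np
from sklearn.linear_model import LogisticRegression

def category_boundary(steps, responses):
    """Fit P(response = 1) as a logistic function of continuum step
    and return the 50% crossover point (-intercept / slope)."""
    model = LogisticRegression()
    model.fit(np.asarray(steps).reshape(-1, 1), np.asarray(responses))
    return -model.intercept_[0] / model.coef_[0][0]

# Fabricated data: a 10-step consonant continuum, 5 repetitions per step,
# categorized after two hypothetical context conditions.
steps = np.tile(np.arange(1, 11), 5)
rng = np.random.default_rng(0)
resp_context_a = (steps + rng.normal(0, 1, steps.size) > 4.5).astype(int)
resp_context_b = (steps + rng.normal(0, 1, steps.size) > 6.5).astype(int)

shift = (category_boundary(steps, resp_context_b)
         - category_boundary(steps, resp_context_a))
print(f"Boundary shift (SCE magnitude): {shift:.2f} continuum steps")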
Attention Perception & Psychophysics | 2018
Christian E. Stilp; Ashley Assgari
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception.

Significance statement: Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These “spectral contrast effects” are widely observed when sounds’ frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur frequently in everyday speech perception.
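A rough illustration of the kind of manipulation described above, as a sketch with an assumed sampling rate, assumed band edges, and a noise stand-in for the context sentence: amplify one frequency region of the context by a small dB gain in the frequency domain.

import numpy as np

def amplify_band(signal, fs, f_lo, f_hi, gain_db):
    """Scale spectral magnitude between f_lo and f_hi by gain_db."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= 10 ** (gain_db / 20.0)   # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=signal.size)

fs = 16_000
context = np.random.default_rng(1).normal(size=2 * fs)  # 2 s noise stand-in
# +3 dB in an assumed low-F1 region (band edges are illustrative only):
biased = amplify_band(context, fs, f_lo=100, f_hi=400, gain_db=3.0)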
Journal of the Acoustical Society of America | 2016
Ashley Assgari; Asim Mohiuddin; Rachel M. Theodore; Christian E. Stilp
Speech categorization is influenced by spectral contrast effects (SCEs), the perceptual magnification of spectral differences between successive sounds. Through SCEs, preceding acoustic contexts can bias categorization of following sounds away from reliable spectral properties. Recent findings (Assgari & Stilp, 2015 JASA) show that SCEs in vowel categorization can be modulated by talker characteristics: a clear SCE when the preceding context was 200 sentences spoken by a single talker was diminished when the context featured 200 talkers. This result was attributed to variability in mean pitch of the preceding sentences. However, neither mean sentence pitch nor talker gender was explicitly controlled, which challenges identification of the locus of the talker effect. The current study examined whether gender and pitch variability yield separable contributions to SCEs. Listeners heard precursor sentences then categorized a target vowel from an /ɪ/ to /ɛ/ continuum. Sentences were processed to add a modest l...
Journal of the Acoustical Society of America | 2015
Christian E. Stilp; Paul W. Anderson; Ashley Assgari; Gregory M. Ellis; Pavel Zahorik
Preceding acoustic context influences speech perception, especially when it features a reliable spectral property (i.e., relatively stable or recurring across time). When preceding sounds have a spectral peak matching F2 of the following target vowel, listeners decrease reliance on F2 and increase reliance on changing, more informative cues for vowel identification. This process, known as auditory perceptual calibration, has only been studied in anechoic conditions. Room reverberation smears spectral information across time, presumably giving listeners additional “looks” at reliable spectral peaks, which should increase the degree of perceptual calibration. Listeners identified vowels (varying from [i] to [u] in F2 and spectral tilt) presented in isolation, then following a sentence filtered to share a spectral peak with the target vowel’s F2. Calibration was measured as the change in perceptual weights (standardized logistic regression coefficients) across blocks. Listeners completed sessions where stimu...
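A minimal sketch, on fabricated trial data, of perceptual weights as standardized logistic regression coefficients: z-score each cue (F2, spectral tilt), fit a logistic model of vowel responses, and compare coefficient magnitudes.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 200
f2 = rng.uniform(900, 2300, n)      # Hz, roughly [u]-like to [i]-like
tilt = rng.uniform(-12, 0, n)       # dB/octave spectral tilt
# Fabricated responses that weight tilt more heavily than F2:
logit = 0.002 * (f2 - 1600) + 0.6 * (tilt + 6)
responses = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = StandardScaler().fit_transform(np.column_stack([f2, tilt]))
weights = LogisticRegression().fit(X, responses).coef_[0]
print(f"perceptual weight, F2:   {weights[0]:.2f}")
print(f"perceptual weight, tilt: {weights[1]:.2f}")

Calibration would then be measured as the change in these weights across blocks.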
Journal of the Acoustical Society of America | 2018
Christian E. Stilp; Ashley Assgari
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias categorization of later sounds. For example, when context sounds have a low-F3 bias, listeners report more high-F3 responses to the target consonant (/d/); conversely, a high-F3 bias in context sounds produces more low-F3 responses (/g/). SCEs have been demonstrated using a variety of approaches, but most often, the context was a single sentence filtered two ways (e.g., low-F3 bias, high-F3 bias) to introduce long-term spectral properties that biased speech categorization. Here, consonant categorization (/d/-/g/) was examined following context sentences that naturally possessed desired long-term spectral properties without any filtering. Filtered sentences with equivalent spectral peaks were included as controls. For filtered context sentences, as average spectral peak magnitudes (i.e., filter gain) i...
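One way such unfiltered sentences might be screened, sketched here with illustrative band edges that are not taken from the study: compute each sentence's long-term average spectrum and compare energy in assumed low-F3 versus high-F3 regions.

import numpy as np
from scipy.signal import welch

def band_level_db(signal, fs, f_lo, f_hi):
    """Mean long-term level (dB) in [f_lo, f_hi] from a Welch PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 10 * np.log10(psd[band].mean())

def f3_bias(sentence, fs):
    """Positive -> more energy in the (assumed) high-F3 band."""
    return (band_level_db(sentence, fs, 2700, 3700)
            - band_level_db(sentence, fs, 1700, 2700))

fs = 16_000
sentence = np.random.default_rng(3).normal(size=2 * fs)  # stand-in audio
print(f"F3-region bias: {f3_bias(sentence, fs):+.2f} dB")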
Journal of the Acoustical Society of America | 2018
Ashley Assgari; Christian E. Stilp
Perception of a given sound is influenced by spectral properties of surrounding sounds. For example, listeners perceive /ɪ/ (low F1) more often when following sentences filtered to emphasize high-F1 frequencies, and perceive /ɛ/ (high F1) more often following sentences filtered to emphasize low-F1 frequencies. These biases in vowel categorization are known as spectral contrast effects (SCEs). When preceding sentences were spoken by acoustically similar talkers (low variability in mean f0), SCEs biased vowel categorization, but sentences spoken by acoustically different talkers (high variability in mean f0) biased vowel categorization significantly less (Assgari et al., 2016 ASA). However, it was unclear whether these effects varied due to local (trial-to-trial) or global (across entire block) variability in mean f0. Here, the same sentences were arranged to increase/decrease monotonically in mean f0 across trials (low local variability) or vary substantially from trial-to-trial (high local variability) wi...
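The trial-ordering manipulation can be sketched with fabricated per-sentence f0 values: sorting the same sentence set by mean f0 minimizes trial-to-trial (local) variability while leaving the overall (global) f0 distribution unchanged.

import numpy as np

rng = np.random.default_rng(4)
mean_f0 = rng.uniform(90, 250, size=200)   # fabricated per-sentence mean f0 (Hz)

monotonic_order = np.argsort(mean_f0)      # low local variability
shuffled_order = rng.permutation(200)      # high local variability

# The two orderings share the same global distribution of mean f0,
# but differ in average trial-to-trial change:
mean_step = lambda order: np.abs(np.diff(mean_f0[order])).mean()
print(f"mean |delta f0|, monotonic: {mean_step(monotonic_order):.1f} Hz")
print(f"mean |delta f0|, shuffled:  {mean_step(shuffled_order):.1f} Hz")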
Journal of the Acoustical Society of America | 2018
Ashley Assgari; Jonathan Frazier; Christian E. Stilp
Auditory perception is shaped by spectral properties of surrounding sounds. For example, when spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs; i.e., categorization boundary shifts) that bias perception of later sounds. SCEs influence perception of speech and nonspeech sounds alike. When categorizing vowels or consonants, SCE magnitudes increased linearly with greater spectral differences between contexts and target speech sounds [Stilp et al. (2015) JASA; Stilp & Alexander (2016) POMA; Stilp & Assgari (2017) JASA]. Here, we tested whether this linear relationship between context spectra and SCEs generalizes to nonspeech categorization. Listeners categorized musical instrument targets that varied from French horn to tenor saxophone. Before each target, listeners heard a one-second music sample processed by spectral envelope difference filters that amplified/attenuated frequencies to reflect the difference between horn and saxophone spectra. By varying filter gain, filters reflected part of (25%, 50%, 75%) or the full (100%) difference between instrument spectra. As filter gains increased to reflect more of the difference between instrument spectra, SCE magnitudes increased linearly, parallel to speech categorization. Thus, a close relationship between context spectra and biases in target categorization appears to be fundamental to auditory perception.
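A minimal sketch of a spectral envelope difference filter, with made-up envelopes in place of measured horn and saxophone spectra: take the dB difference between the two envelopes, scale it by a fraction (25-100%), and apply it to the context in the frequency domain.

import numpy as np

def apply_difference_filter(signal, env_a_db, env_b_db, fraction):
    """Amplify/attenuate signal by fraction * (env_a_db - env_b_db),
    where the envelopes are sampled at the signal's rfft bins."""
    gain_db = fraction * (env_a_db - env_b_db)
    spectrum = np.fft.rfft(signal) * 10 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=signal.size)

fs, n = 16_000, 16_000                  # a one-second context sample
n_bins = n // 2 + 1
rng = np.random.default_rng(5)
horn_env = rng.normal(0, 3, n_bins)     # fabricated dB envelopes
sax_env = rng.normal(0, 3, n_bins)
context = rng.normal(size=n)            # noise stand-in for the music sample
for fraction in (0.25, 0.50, 0.75, 1.00):
    # each step reflects more of the horn-saxophone spectral difference
    biased = apply_difference_filter(context, horn_env, sax_env, fraction)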
Journal of the Acoustical Society of America | 2017
Christian E. Stilp; Ashley Assgari
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias categorization of later sounds. For example, when context sounds have a low F1 bias, listeners report more high F1 responses to the target vowel, and vice versa. SCEs have been demonstrated using a variety of approaches, but most often the context was a single sentence filtered two ways (e.g., low F1 bias, high F1 bias) to introduce spectral properties that biased speech categorization. This maximizes acoustic control over stimulus materials, but vastly understates the acoustic variability of speech. Here, vowel categorization was examined following context sentences that naturally possessed desired spectral properties without any filtering. Sentences with inherent low-F1 or high-F1 peaks in their long-term spectra were presented before a target vowel (/ɪ/-/ɛ/). Filtered sentences with equivalent spec...
Journal of the Acoustical Society of America | 2017
Ashley Assgari; Rachel M. Theodore; Christian E. Stilp
Spectral contrast effects (SCEs) occur when spectral differences between earlier sounds (a precursor sentence) and a target sound are perceptually magnified. For example, listeners label a vowel as /ɪ/ (low F1) more often when following a high-F1 precursor sentence, and report /ɛ/ (high F1) more often following a low-F1 precursor sentence. Recently, these context effects were shown to be modulated by the acoustic variability across precursor sentences. When sentence mean f0 was highly variable from trial to trial, SCEs in categorization of /ɪ/ and /ɛ/ were significantly smaller than when mean f0 was more consistent across sentences (Assgari, Theodore, & Stilp, 2016 ASA). But, is acoustic variability in f0 the best predictor of context effects that influence categorization of vowels differing in F1? Here we tested whether F1 variability in precursor sentences has a comparable effect on vowel categorization to f0 variability. Listeners heard precursor sentences filtered to add a low-F1 or high-F1 spectral p...
Journal of the Acoustical Society of America | 2015
Ashley Assgari; Christian E. Stilp
Talker normalization (TN) occurs when listeners adjust to a talker’s voice, resulting in faster and/or more accurate speech recognition. Several studies have suggested that TN contributes to spectral contrast, the perceptual magnification of changing acoustic properties. Studies using sine tones in place of speech demonstrated that talker information is not necessary to produce spectral contrast effects (Laing et al., 2012 Front. Psychol.). However, sine tones lack acoustic complexity and ecological validity, failing to address whether TN influences spectral contrast in speech. Here we investigated how talker and acoustic variability influence contrast effects. Listeners heard sentences from a single talker (1 sentence), HINT (1 talker, 200 sentences) or TIMIT databases (200 talkers, 200 sentences) followed by the target vowel (varying from “ih” to “eh” in F1). Low (100–400 Hz) or high (550–850 Hz) frequency regions were amplified (+20 or +5 dB) to encourage “eh” or “ih” responses, respectively. When sentences c...