Kauyumari Sanchez
University of California, Riverside
Publication
Featured research published by Kauyumari Sanchez.
Psychological Science | 2007
Lawrence D. Rosenblum; Rachel M. Miller; Kauyumari Sanchez
There is evidence that for both auditory and visual speech perception, familiarity with the talker facilitates speech recognition. Explanations of these effects have concentrated on the retention of talker information specific to each of these modalities. It could be, however, that some amodal, talker-specific articulatory-style information facilitates speech perception in both modalities. If this is true, then experience with a talker in one modality should facilitate perception of speech from that talker in the other modality. In a test of this prediction, subjects were given about 1 hr of experience lipreading a talker and were then asked to recover speech in noise from either this same talker or a different talker. Results revealed that subjects who lip-read and heard speech from the same talker performed better on the speech-in-noise task than did subjects who lip-read from one talker and then heard speech from a different talker.
Attention Perception & Psychophysics | 2010
Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum
Speech alignment is the tendency for interlocutors to unconsciously imitate one another’s speaking style. Alignment also occurs when a talker is asked to shadow recorded words (e.g., Shockley, Sabadini, & Fowler, 2004). In two experiments, we examined whether alignment could be induced with visual (lipread) speech and with auditory speech. In Experiment 1, we asked subjects to lipread and shadow out loud a model silently uttering words. The results indicate that shadowed utterances sounded more similar to the model’s utterances than did subjects’ nonshadowed read utterances. This suggests that speech alignment can be based on visual speech. In Experiment 2, we tested whether raters could perceive alignment across modalities. Raters were asked to judge the relative similarity between a model’s visual (silent video) utterance and subjects’ audio utterances. The subjects’ shadowed utterances were again judged as more similar to the model’s than were read utterances, suggesting that raters are sensitive to cross-modal similarity between aligned words.
Attention Perception & Psychophysics | 2013
Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum
Speech alignment, or the tendency of individuals to subtly imitate each other’s speaking styles, is often assessed by comparing a subject’s baseline and shadowed utterances to a model’s utterances, often through perceptual ratings. These types of comparisons provide information about the occurrence of a change in subjects’ speech, but they do not indicate that this change is toward the specific shadowed model. In three experiments, we investigated whether alignment is specific to a shadowed model. Experiment 1 involved the classic baseline-to-shadowed comparison, to confirm that subjects did, in fact, sound more like their model when they shadowed, relative to any preexisting similarities between a subject and a model. Experiment 2 tested whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another, unshadowed model. In Experiment 3, we examined whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another subject who had shadowed a different model. The results of all experiments revealed that subjects sounded more similar to the model whom they had shadowed. This suggests that shadowing-based speech alignment is not just a change, but a change in the direction of the shadowed model, specifically.
Attention Perception & Psychophysics | 2013
Kauyumari Sanchez; James W. Dias; Lawrence D. Rosenblum
Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lip-read from the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words.
Journal of the Acoustical Society of America | 2006
Rachel M. Miller; Lawrence D. Rosenblum; Kauyumari Sanchez
Talkers are known to produce allophonic variation based, in part, on the speech of the person with whom they are talking. This subtle imitation, or phonetic alignment, occurs during live conversation and when a talker is asked to shadow recorded words [e.g., Shockley, et al., Percept. Psychophys. 66, 422 (2004)]. What is yet to be determined is the nature of the information to which talkers align. To examine whether this information is restricted to the acoustic modality, experiments were conducted to test if talkers align to visual speech (lipread) information. Normal‐hearing subjects were asked to watch an actor silently utter words, and to identify these words by saying them out loud as quickly as possible. These shadowed responses were audio recorded and naive raters compared these responses to the actor’s auditory words (which had been recorded along with the actor’s visual tokens). Raters judged the shadowed words as sounding more like the actor’s words than did baseline words, which had been spoken by...
Linguistics | 2018
Abby Walker; Jennifer Hay; Katie Drager; Kauyumari Sanchez
This paper presents results from an experiment designed to test whether New Zealand listeners’ perceptual adaptation towards Australian English is mediated by their attitudes toward Australia, which we attempted to manipulate experimentally. Participants were put into one of three conditions, where they either read good facts about Australia, bad facts about Australia, or no facts about Australia (the control). Participants performed the same listening task – matching the vowel in a sentence to a vowel in a synthesized continuum – before and after reading the facts. The results indicate that participants who read the bad facts shifted their perception of kit to more Australian-like tokens relative to the control group, while the participants who read good facts shifted their perception of kit to more NZ-like tokens relative to the control group. This result shows that perceptual adaptation towards a dialect can occur in the absence of a speaker of that dialect and that these adaptations are subject to a listener’s (manipulated) affect towards the primed dialect region.
Journal of the Acoustical Society of America | 2017
Kauyumari Sanchez; Lorin Lachs; Alexis Carlon; Patrick LaShell
Although people can match visual-only speech to auditory-only speech (and vice versa) when the sources match under a variety of conditions (e.g. Lachs & Pisoni, 2004; Sanchez, Dias, & Rosenblum, 2013), it is unclear to what extent this ability is mediated by abstract, cognitive processes, representations, and linguistic experience (Vitevitch & Luce, 1999; Sanchez-Garcia, Enns, & Soto-Faraco, 2013). To address this question, the current experiment replicates and extends Vitevitch & Luce (1999) by including audio and visual stimulus presentations of bisyllabic words and non-words that vary in phonemic predictability. In the cross-modal AB matching task, participants were presented with either visual-only speech (e.g. a talker’s speaking face) or auditory speech (a talker’s voice) in the A position. Stimuli in the B position consisted of the opposing sensory modality, counterbalanced. A significant Phonetic Probability X Presentation Order interaction was found. People perform similarly well on high and low ...
Journal of the Acoustical Society of America | 2010
Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum; James W. Dias
Talker‐specific (idiolectic) information aids memory for words repeated in the same voice. [Palmeri et al., J. Exp. Psychol. Learn. Mem. Cogn. 19, 2 (1993).] However, there is some evidence that the influence of idiolect on the perception of accented speech occurs differently. The current studies tested the impact of talker and accent change on memory for words using a continuous recognition memory task. Native English listeners were presented with accented words which were later repeated in the same or in a different voice. Subjects were asked to identify if the words were old (previously presented) or new (never presented). Memory performance was measured in terms of reaction time and recognition accuracy. In experiment 1, words were repeated in the same or in a different voice across repetitions, with the voices sharing an accent. Preliminary results suggest that hearing the words in the same voice improves recognition memory for accented words. In experiment 2, different‐voice repetitions represented models from the same or a different accent background. Listeners were asked to identify old words as being produced in the same or a different voice. Initial results indicate that hearing repetitions in the same accent improves recognition memory, but interferes with voice identification.
Journal of the Acoustical Society of America | 2010
Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum; James W. Dias; Neal Dykmans
Listeners use talker‐specific (idiolectic) information to help them perceive and remember speech [e.g., Goldinger, J. Exp. Psychol. Learn. 22, 1166–1183 (1998)]. However, recent research has shown that idiolectic information is not as helpful when listeners hear accented speech [e.g., Sidaras et al., J. Acoust. Soc. Am. 125, 5 (2009)]. It could be that listeners fail to encode idiolectic information when perceiving accented speech. To examine whether idiolectic information is still encoded, experiments tested if subjects would display speech alignment to specific accented models. Speech alignment is the tendency to imitate another talker and can occur when shadowing heard speech [e.g., Goldinger, Psychol. Rev. 105, 251 (1998)]. Native English subjects were asked to shadow a Chinese‐ or Spanish‐accented model producing English words. Raters then judged whether the shadowed tokens were more similar in pronunciation to those of the shadowed model or of a different model with the same accent. In a second experiment, rater...
Journal of the Acoustical Society of America | 2010
Kauyumari Sanchez; Lawrence D. Rosenblum
Familiarity with lip‐read (visual‐only) speech from a talker facilitates perception of heard (audio‐only) speech‐in‐noise from that same talker [L. Rosenblum et al., Psychol. Sci. 18, 392 (2007)]. This could suggest that stored speech episodes are composed of lexical items that retain amodal talker‐specific characteristics. However, the Rosenblum et al. study used sentential material which differed between the lip‐reading and speech‐in‐noise tasks. That study did not examine whether cross‐modal talker facilitation was based on lexical items used in both tasks. The current experiment addressed whether amodal talker‐specific characteristics are stored as lexical episodes by modifying Rosenblum et al.’s design. Subjects were first asked to lip‐read words from a talker and were then asked to listen to words embedded in noise spoken by the same or a different talker. Preliminary evidence showed that speech‐in‐noise words were identified better when spoken by the same talker from the lip‐reading task. In additi...