Publication


Featured research published by Rachel M. Miller.


Psychological Science | 2007

Lip-Read Me Now, Hear Me Better Later: Cross-Modal Transfer of Talker-Familiarity Effects

Lawrence D. Rosenblum; Rachel M. Miller; Kauyumari Sanchez

There is evidence that for both auditory and visual speech perception, familiarity with the talker facilitates speech recognition. Explanations of these effects have concentrated on the retention of talker information specific to each of these modalities. It could be, however, that some amodal, talker-specific articulatory-style information facilitates speech perception in both modalities. If this is true, then experience with a talker in one modality should facilitate perception of speech from that talker in the other modality. In a test of this prediction, subjects were given about 1 hr of experience lipreading a talker and were then asked to recover speech in noise from either this same talker or a different talker. Results revealed that subjects who lip-read and heard speech from the same talker performed better on the speech-in-noise task than did subjects who lip-read from one talker and then heard speech from a different talker.


Attention, Perception, & Psychophysics | 2010

Alignment to visual speech information.

Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum

Speech alignment is the tendency for interlocutors to unconsciously imitate one another’s speaking style. Alignment also occurs when a talker is asked to shadow recorded words (e.g., Shockley, Sabadini, & Fowler, 2004). In two experiments, we examined whether alignment could be induced with visual (lipread) speech and with auditory speech. In Experiment 1, we asked subjects to lipread and shadow out loud a model silently uttering words. The results indicate that shadowed utterances sounded more similar to the model’s utterances than did subjects’ nonshadowed read utterances. This suggests that speech alignment can be based on visual speech. In Experiment 2, we tested whether raters could perceive alignment across modalities. Raters were asked to judge the relative similarity between a model’s visual (silent video) utterance and subjects’ audio utterances. The subjects’ shadowed utterances were again judged as more similar to the model’s than were read utterances, suggesting that raters are sensitive to cross-modal similarity between aligned words.


Journal of the Acoustical Society of America | 2006

Phonetic alignment to visual speech

Rachel M. Miller; Lawrence D. Rosenblum; Kauyumari Sanchez

Talkers are known to produce allophonic variation based, in part, on the speech of the person with whom they are talking. This subtle imitation, or phonetic alignment, occurs during live conversation and when a talker is asked to shadow recorded words [e.g., Shockley et al., Percept. Psychophys. 66, 422 (2004)]. What is yet to be determined is the nature of the information to which talkers align. To examine whether this information is restricted to the acoustic modality, experiments were conducted to test if talkers align to visual (lipread) speech information. Normal‐hearing subjects were asked to watch an actor silently utter words, and to identify these words by saying them out loud as quickly as possible. These shadowed responses were audio recorded and naive raters compared these responses to the actor's auditory words (which had been recorded along with the actor's visual tokens). Raters judged the shadowed words as sounding more like the actor's words than did baseline words, which had been spoken by...


Journal of the Acoustical Society of America | 2015

One of these accents sounds like the other; one of these accents is not the same

Rachel M. Miller

Accented speech occurs when the structure of a talker’s native language causes deviations from the speech production norms of the non-native language being produced (e.g., Spanish-accented English; Tarone, 1987). The current study tested sensitivity to deviations shared across accents by asking two groups of listeners to judge similarities and differences between Spanish- and Chinese-accented speech samples. The matching group was asked to judge whether a model’s accented token (e.g., Spanish) was more similar in accent to a token in the same (e.g., Spanish) vs. a different accent (e.g., Chinese). The discrimination group was asked to judge whether a model’s accented token (e.g., Spanish) was different in accent from a token in the same (e.g., Spanish) vs. a different accent (e.g., Chinese). Results showed that listeners are able to make similarity matches at significantly greater than chance (50%) levels, suggesting that they are perceptually sensitive to the similarities between non-native accents. How...


Journal of the Acoustical Society of America | 2011

Listen up and perceive speech: Demonstrating the flexibility of speech perception.

Rachel M. Miller

Individuals encounter many different types of speech information on a daily basis. Some of this information comes in forms that are not even auditory in nature (e.g., visual speech). Other types of information contain variations that should make them difficult to understand (e.g., speech‐in‐noise and accented speech). However, individuals are able to perceive all sorts of speech quite easily. The present demonstrations will show the flexibility of speech perception by providing Girl Scouts with examples of various types of speech. The demonstrations will be (1) a lipreading choice task that presents silent videos of talkers producing words, (2) silent videos of talkers producing sentences, (3) silent, point‐light videos that show how speech can be perceived simply by seeing the movement of talkers' faces, (4) sentences produced in three levels of noise, and (5) words produced by native Chinese and Spanish talkers.


Journal of the Acoustical Society of America | 2010

Influence of talker‐specific information on recognition memory for accented speech.

Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum; James W. Dias

Talker‐specific (idiolectic) information aids memory for words repeated in the same voice [Palmeri et al., J. Exp. Psychol. Learn. Mem. Cogn. 19, 2 (1993)]. However, there is some evidence that the influence of idiolect on the perception of accented speech occurs differently. The current studies tested the impact of talker and accent change on memory for words using a continuous recognition memory task. Native English listeners were presented with accented words which were later repeated in the same or in a different voice. Subjects were asked to identify if the words were old (previously presented) or new (never presented). Memory performance was measured in terms of reaction time and recognition accuracy. In experiment 1, words were repeated in the same or in a different voice across repetitions, with the voices sharing an accent. Preliminary results suggest that hearing the words in the same voice improves recognition memory for accented words. In experiment 2, different voice repetitions represented models from the same or a different accent background. Listeners were asked to identify old words as being produced in the same or a different voice. Initial results indicate that hearing repetitions in the same accent improves recognition memory, but interferes with voice identification.


Journal of the Acoustical Society of America | 2010

Talker‐specific accent: Can speech alignment reveal idiolectic influences during the perception of accented speech?

Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum; James W. Dias; Neal Dykmans

Listeners use talker‐specific (idiolectic) information to help them perceive and remember speech [e.g., Goldinger, J. Exp. Psychol. Learn. 22, 1166–1183 (1998)]. However, recent research has shown that idiolectic information is not as helpful when listeners hear accented speech [e.g., Sidaras et al., J. Acoust. Soc. Am. 125, 5 (2009)]. It could be that listeners fail to encode idiolectic information when perceiving accented speech. To examine whether idiolectic information is still encoded, experiments tested if subjects would display speech alignment to specific accented models. Speech alignment is the tendency to imitate another talker and can occur when shadowing heard speech [e.g., Goldinger, Psychol. Rev. 105, 251 (1998)]. Native English subjects were asked to shadow a Chinese‐ or Spanish‐accented model producing English words. Raters then judged whether the shadowed tokens were more similar in pronunciation to those of the shadowed model or of a different model with the same accent. In a second experiment, rater...


Journal of the Acoustical Society of America | 2009

Investigating perceptual measures of speech alignment: Do matching tasks make the grade?

Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum

Speech alignment is the tendency to produce speech which sounds similar to that of a person with whom one is speaking. It is often measured perceptually using an AXB‐matching task [e.g., S. D. Goldinger, Psychol. Rev. 105, 251 (1998)]. In this procedure, naive raters determine which of two utterances of a talker, produced prior to (baseline) and after interaction with a model (e.g., shadowed), sounds more similar to that model. However, utilizing this type of perceptual measure has come under scrutiny due to its reliance on baseline utterances produced by talkers reading text. It has been suggested that AXB matching results reveal more about inherent differences between read and produced (shadowed) speech than about actual alignment. The present research addressed this concern. Experiment 1 involved a modified AXB task for which raters judged whether a subject sounded more like the model they had shadowed or another model. In Experiment 2, raters judged whether a subject sounded more similar to the model they had shadowed or to another subject who shadowed a different model. In both cases, results revealed that subjects sounded more similar to the model they shadowed. This suggests that alignment is perceivable by raters even when the comparison to read text is removed. [Work supported by NIDCD Grant R01DC008957.]


Journal of the Acoustical Society of America | 2009

Effects of audio‐visual speech information on recognition memory of spoken words.

Kauyumari Sanchez; Rachel M. Miller; Lawrence D. Rosenblum

Audio‐visual speech has generally been found to contain more usable information than audio‐only speech. However, there is conflicting evidence of whether seeing the face of a speaker facilitates memory for spoken words [e.g., Sheffert et al., Cog. Tech. 8, 42–50 (2003)]. To address this issue, three experiments examined whether an audio‐visual benefit would be observed on a word recognition task. Experiment 1 compared recognition of spoken words both presented and tested in audio‐visual versus audio‐only forms. Audio‐visual word stimuli were recognized significantly better than audio‐only words. Experiment 2 tested whether this benefit was due to the presence of visible articulatory information or simply more information in general. Recognition of words presented (and tested) in audio‐only, audio‐visual, and audio with accompanying static face image conditions was compared. Words presented in audio‐visual (dynamic face) form were recognized better than audio‐only and audio‐static face stimuli. To test wh...


Journal of the Acoustical Society of America | 2008

Increasing speech alignment through crossmodal speaker familiarity

Rachel M. Miller; Kauyumari Sanchez; Lawrence D. Rosenblum

In speech alignment phenomena, individuals inadvertently imitate aspects of another talker's utterances. Recent research has shown that when asked to shadow words, subjects not only align to the speech they hear, they also align to the speech they see when shadowing words by lipreading [Miller et al., J. Acoust. Soc. Am., 120, 5, Pt. 2 (2006)]. This research also showed that some of the dimensions to which subjects align are the same whether based on shadowing of auditory or visual speech stimuli. This might mean that subjects can align to a speaker's idiolectic dimensions available in both modalities. To examine this possibility, an experiment was conducted to see if alignment increased with exposure to the same or a different speaker, across two blocks of presentations that were: a) both auditory; b) both visual; or c) one auditory and one visual. If subjects align to amodal, idiolectic speaker style, then alignment should be comparable across presentation types in the same speaker condition. Results r...

Collaboration


Dive into Rachel M. Miller's collaboration.

Top Co-Authors

James W. Dias

University of California
