
Publication


Featured research published by Erica B. Stevens.


NeuroImage | 2009

Neural Signatures of Phonetic Learning in Adulthood: A Magnetoencephalography Study

Yang Zhang; Patricia K. Kuhl; Toshiaki Imada; Paul Iverson; John S. Pruitt; Erica B. Stevens; Masaki Kawakatsu; Yoh'ichi Tohkura; Iku Nemoto

The present study used magnetoencephalography (MEG) to examine perceptual learning of American English /r/ and /l/ categories by Japanese adults who had limited English exposure. A training software program was developed based on the principles of infant phonetic learning, featuring systematic acoustic exaggeration, multi-talker variability, visible articulation, and adaptive listening. The program was designed to help Japanese listeners utilize an acoustic dimension relevant for phonemic categorization of /r-l/ in English. Although training did not produce a native-like phonetic boundary along the /r-l/ synthetic continuum in the second-language learners, success was seen in highly significant identification improvement over twelve training sessions and transfer of learning to novel stimuli. Consistent with behavioral results, pre-post MEG measures showed not only enhanced neural sensitivity to the /r-l/ distinction in the left-hemisphere mismatch field (MMF) response but also bilateral decreases in equivalent current dipole (ECD) cluster and duration measures for stimulus coding in the inferior parietal region. The learning-induced increases in neural sensitivity and efficiency were also found in distributed source analysis using Minimum Current Estimates (MCE). Furthermore, the pre-post changes exhibited significant brain-behavior correlations between speech discrimination scores and MMF amplitudes as well as between the behavioral scores and ECD measures of neural efficiency. Together, the data provide corroborating evidence that substantial neural plasticity for second-language learning in adulthood can be induced with adaptive and enriched linguistic exposure. Like the MMF, the ECD cluster and duration measures are sensitive neural markers of phonetic learning.


Attention, Perception, & Psychophysics | 1994

Talker continuity and the use of rate information during phonetic perception

Kerry P. Green; Erica B. Stevens; Patricia K. Kuhl

Research has shown that speaking rate provides an important context for the perception of certain acoustic properties of speech. For example, syllable duration, which varies as a function of speaking rate, has been shown to influence the perception of voice onset time (VOT) for syllable-initial stop consonants. The purpose of the present experiments was to examine the influence of syllable duration when the initial portion of the syllable was produced by one talker and the remainder of the syllable was produced by a different talker. A short-duration and a long-duration /bi/-/pi/ continuum were synthesized with pitch and formant values appropriate to a female talker. When presented to listeners for identification, these stimuli demonstrated the typical effect of syllable duration on the voicing boundary: a shorter VOT boundary for the short stimuli than for the long stimuli. An /i/ vowel, synthesized with pitch and formant values appropriate to a male talker, was added to the end of each of the short tokens, producing a new hybrid continuum. Although the overall syllable duration of the hybrid stimuli equaled the original long stimuli, they produced a VOT boundary similar to that for the short stimuli. In a second experiment, two new /i/ vowels were synthesized. One had a pitch appropriate to a female talker with formant values appropriate to a male talker; the other had a pitch appropriate to a male talker and formants appropriate to a female talker. These vowels were used to create two new hybrid continua. In a third experiment, new hybrid continua were created by using more extreme male formant values. The results of both experiments demonstrated that the hybrid tokens with a change in pitch acted like the short stimuli, whereas the tokens with a change in formants acted like the long stimuli. A fourth experiment demonstrated that listeners could hear a change in talker with both sets of hybrid tokens.
These results indicate that continuity of pitch but not formant structure appears to be the critical factor in the calculation of speaking rate within a syllable.


Journal of the Acoustical Society of America | 1995

A comparison between cerebral‐palsied and normal adults in the perception of auditory‐visual illusions

Nithya Siva; Erica B. Stevens; Patricia K. Kuhl; Andrew N. Meltzoff

Listeners obtain information about speech both from listening to a talker and by using visual cues from the talker’s face. As demonstrated in the McGurk effect, conflicting auditory and visual cues produce illusions. The present experiment investigated whether lack of experience with normal speech production affects the perception of auditory‐visual illusions. Adults with cerebral palsy (CP) who have been severely dysarthric since birth were compared to normally speaking adults on two types of illusions: (1) auditory /aba/ paired with visual /aga/ which typically produces a /da/ illusion; and (2) auditory /aga/ paired with visual /aba/ which typically produces a /bga/ illusion. The number of illusory responses was compared for each group. There was no difference between groups in the number of /da/ illusions. However, adults with CP perceived fewer /bga/ illusions than normal adults. These results suggest that lack of experience articulating speech inhibits a listener’s ability to perceive unusual English...


Journal of the Acoustical Society of America | 1997

Effects of language experience on speech perception: American and Japanese infants’ perception of /ra/ and /la/

Patricia K. Kuhl; Shigeru Kiritani; Toshisada Deguchi; Akiko Hayashi; Erica B. Stevens; Charmaine D. Dugger; Paul Iverson

Listening to language during the first year of life has a dramatic effect on infants’ perception of speech. With increasing exposure to a particular language, infants begin to ignore phonetic variations that are irrelevant in their native language. To examine these effects, 72 American and Japanese infants were tested at two ages, 6–8 months and 10–12 months, with synthetic versions of the American English /r/ and /l/ consonants. The /r–l/ contrast is not phonemic in Japanese. In both countries, the same experimenters, technique (head‐turn conditioning), and stimuli were used. The results revealed two significant effects. The first shows the impact of language experience on speech perception. At 6–8 months of age, American and Japanese infants did not differ. Both groups performed significantly above chance (American M=63.7%; Japanese M=64.7%). By 10–12 months of age, American infants demonstrated significant improvement relative to performance at 6–8 months (M=73.8%), while Japanese infants declined (M=5...


Journal of the Acoustical Society of America | 1995

Investigating the role of specific facial information in audio‐visual speech perception

Paula M. T. Smeele; Lisa D. Hahnlen; Erica B. Stevens; Patricia K. Kuhl; Andrew N. Meltzoff

When hearing and seeing a person speak, people receive both auditory and visual speech information. The contribution made by visual speech information has been demonstrated in a wide variety of conditions, most clearly when conflicting auditory and visual information is presented. In this study an investigation was performed to determine which aspects of the face most strongly influence audio‐visual speech perception. The visual stimulus was manipulated using special effects techniques to isolate three specific ‘‘articulatory parts:’’ lips only, oral cavity only, or jaw only. These ‘‘parts’’ and their combinations were dubbed with auditory tokens to create ‘‘fusion’’ stimuli (A/aba/ + V/aga/) and ‘‘combination’’ stimuli (A/aga/ + V/aba/). Results indicated that visual information from jaw‐only movements was not sufficient to induce illusory effects. However, for the combination condition, seeing moving lips or the inside of the speaker’s mouth produced substantial audio‐visual effects. Additional visual i...


Journal of the Acoustical Society of America | 2000

Perceptual identification training of American English /r/ and /l/ by Japanese speakers generalizes to novel stimuli and tasks

Tobey L. Doeleman; Ryan J. Conley; John S. Pruitt; Paul Iverson; Patricia K. Kuhl; Erica B. Stevens

This study investigated the extent to which results from /r/ and /l/ perceptual identification training generalized to identification of novel natural stimuli and discrimination of synthetic stimuli in ten native Japanese speakers. A behavioral training software program was used which incorporated factors known to affect the acquisition of phonemic distinctions, including bimodal speech cues, stimulus variability, and subject‐controlled stimulus presentation with immediate feedback. Natural speech tokens were digitally manipulated to create three levels of acoustic exaggeration. These levels, and other stimulus characteristics such as the number of talkers, vowel contexts, and syllable structure, varied during each training session as a function of listener’s performance. Pre‐ and post‐training identification tasks measured generalization to natural tokens of novel talkers and vowel contexts. In addition, pre‐ and post‐training discrimination tasks measured generalization to novel tasks and synthetic stim...


Journal of the Acoustical Society of America | 1990

Exploring the basis of the “McGurk effect”: Can perceivers combine information from a female face and a male voice?

Kerry P. Green; Erica B. Stevens; Patricia K. Kuhl; Andrew N. Meltzoff

In the “McGurk” effect, observers typically report the illusory syllable /da/ when they hear the auditory syllable /ba/ presented in synchrony with a video display of a talker saying /ga/. In such experiments, there is usually congruence between the two modalities in that the same talker produces both the auditory and the visual signals. In the experiments reported here, the effect of reducing the congruence between the two modalities on the magnitude of the McGurk effect was examined. This was accomplished by dubbing a male talker’s voice onto a video tape containing a female talker’s face, and a female talker’s voice onto a video tape containing a male talker’s face. These “cross‐dubbed” video tapes were compared to normal video tapes in which the male talker’s voice was dubbed onto a male talker’s face, and the female talker’s voice was dubbed onto a female talker’s face. The results show that even though there was clear incompatibility in the talker characteristics between the auditory and visual sig...


Journal of the Acoustical Society of America | 1989

The use of rate information during phonetic perception depends on pitch continuity

Kerry P. Green; Erica B. Stevens; Patricia K. Kuhl

Studies have demonstrated that speaking rate provides an important context for the perception of certain acoustic properties of speech. For example, syllable duration, which varies as a function of speaking rate, has been shown to influence the perception of voice‐onset‐time (VOT) for syllable‐initial stop consonants. The purpose of the present experiments was to examine the influence of syllable duration when the initial portion of the syllable was produced by one talker, and the remainder of the syllable was produced by a different talker. A short duration and a long duration /bi‐pi/ continuum was synthesized with pitch and formant values appropriate to a female talker. When presented to listeners for identification, these stimuli demonstrated the typical effect of syllable duration on the voicing boundary: a shorter VOT boundary for the short stimuli relative to the long. An /i/ vowel, synthesized with pitch and formant values appropriate to a male talker, was added to the end of each of the short toke...


Journal of the Acoustical Society of America | 2001

Effects of short‐term exposure to a foreign language on discrimination of a non‐native phonetic contrast: Convergent evidence from brain and behavioral tests

Patricia K. Kuhl; Feng Ming Tsao; Huei Mei Liu; Sharon Corina; Erica B. Stevens; Tobey Nelson; Jessica Pruitt; Denise Padden

Studies in our laboratory demonstrate that between 6 and 12 months of age infants show a significant increase in the ability to discriminate native‐language phonetic contrasts and a decline in foreign‐language discrimination. The increase in performance on native‐language contrasts suggests a process of active learning rather than maintenance. In the present experiment, we tested whether the learning process infants engage in during the period between 6 and 12 months extends to a foreign language the infants had not previously heard. American infants at 9 months of age participated in a 12‐session language play group in which they heard a native speaker of Mandarin Chinese read, play, and talk to them. A control group was exposed to American English using the same books and toys. Both groups were subsequently tested on a Mandarin Chinese contrast using both behavioral and brain measures. The results demonstrate that infants exposed to Mandarin over a 4‐week period show significant discrimination of the Ma...


Journal of the Acoustical Society of America | 2000

Investigating a computer‐based method to measure school‐age children’s ability to discriminate non‐native speech contrasts

Jessica C. Pruitt; John S. Pruitt; Patricia K. Kuhl; Erica B. Stevens

Most studies of non‐native speech perception have examined adult subject populations, as tests on infants, toddlers, and older children can be methodologically difficult to conduct. Direct application of methods used for adults in the assessment of children can be problematic. Such methods can be long, mentally fatiguing, and cognitively complex. In addition to task variables, stimuli which are difficult to perceive, such as non‐native speech sounds, make such tests on children all the more difficult. The present study examines a new computer‐based testing method specifically designed to test the discrimination of non‐native speech sounds with 5‐ to 15‐year‐old children. The design uses an oddity task that presents non‐native phoneme contrasts in a way that is engaging and motivating. For baseline comparison, discrimination data were obtained from 10 American English (AE) and 10 Hindi‐speaking adults using four phonetically different, natural CV Hindi speech contrasts and compared to identification data previously obtained from AE and Hindi adults using the same stimuli (Pruitt, 1995) implemented with a standard adult‐testing protocol. Results from pilot testing with children and adults using this new computer‐based method will be discussed and compared to the baseline identification and discrimination data. [Work supported by NICHD HD37954 to P. K. Kuhl.]

Collaboration


Dive into Erica B. Stevens's collaborations.

Top Co-Authors

Paul Iverson
University College London

Denise Padden
University of Washington

Akiko Hayashi
Tokyo Gakugei University

Toshiaki Imada
University of Washington