Angela Cooper
Northwestern University
Publications
Featured research published by Angela Cooper.
Journal of the Acoustical Society of America | 2012
Angela Cooper; Yue Wang
Adult non-native speech perception is subject to influence from multiple factors, including linguistic and extralinguistic experience such as musical training. The present research examines how linguistic and musical factors influence non-native word identification and lexical tone perception. Groups of native tone language (Thai) and non-tone language (English) listeners, each subdivided into musician and non-musician groups, engaged in Cantonese tone word training. Participants learned to identify words minimally distinguished by five Cantonese tones during training, also completing musical aptitude and phonemic tone identification tasks. First, the findings suggest that either musical experience or a tone language background leads to significantly better non-native word learning proficiency, compared to listeners with neither musical training nor tone language experience. Moreover, the combination of tone language and musical experience did not provide an additional advantage for Thai musicians above and beyond either experience alone. Musicianship was found to be more advantageous than a tone language background for tone identification. Finally, tone identification and musical aptitude scores were significantly correlated with word learning success for English but not Thai listeners. These findings point to a dynamic influence of musical and linguistic experience, both at the tone identification level and at the word learning stage.
Attention Perception & Psychophysics | 2015
Angela Cooper; Susanne Brouwer; Ann R. Bradlow
Speech processing can often take place in adverse listening conditions that involve the mixing of speech and background noise. In this study, we investigated processing dependencies between background noise and indexical speech features, using a speeded classification paradigm (Garner, 1974; Exp. 1), and whether background noise is encoded and represented in memory for spoken words in a continuous recognition memory paradigm (Exp. 2). Whether or not the noise spectrally overlapped with the speech signal was also manipulated. The results of Experiment 1 indicated that background noise and indexical features of speech (gender, talker identity) cannot be completely segregated during processing, even when the two auditory streams are spectrally nonoverlapping. Perceptual interference was asymmetric, whereby irrelevant indexical feature variation in the speech signal slowed noise classification to a greater extent than irrelevant noise variation slowed speech classification. This asymmetry may stem from the fact that speech features have greater functional relevance to listeners, and are thus more difficult to selectively ignore than background noise. Experiment 2 revealed that a recognition cost for words embedded in different types of background noise on the first and second occurrences only emerged when the noise and the speech signal were spectrally overlapping. Together, these data suggest integral processing of speech and background noise, modulated by the level of processing and the spectral separation of the speech and noise.
Journal of the Acoustical Society of America | 2010
Angela Cooper; Yue Wang
Non‐native linguistic pitch perception is subject to influence from a variety of factors in addition to linguistic background, including musical experience. The present study investigated the effects of musical aptitude and musical experience on Cantonese tone word learning and how these musical factors interact with linguistic experience. Adult native Thai listeners, whose native language (L1) is tonal, and English listeners, with a non‐tonal L1, subdivided into musician and non‐musician groups, engaged in a seven‐session perceptual training program, learning the meanings of 15 novel vocabulary words distinguished by five Cantonese lexical tones. Before training, a musical aptitude task was administered to establish the participants’ level of musicality and auditory discrimination skills. The results show significant group differences in tone word learning proficiency by the end of training. English musicians outperformed their non‐musician counterparts; however, the Thai musicians were not significantly different from the Thai non‐musicians. Results from regression analyses further show that higher musical aptitude scores predicted tone word learning success for the English group but not for the Thai group. These findings suggest that the influence of musical experience in constructing novel lexical representations of tone words differs as a function of linguistic background.
Language and Speech | 2017
Angela Cooper; Yue Wang; Richard Ashley
Musical experience has been demonstrated to play a significant role in the perception of non-native speech contrasts. The present study examined whether or not musical experience facilitated the normalization of speaking rate in the perception of non-native phonemic vowel length contrasts. Native English musicians and non-musicians (as well as native Thai control listeners) completed identification and AX (same–different) discrimination tasks with Thai vowels contrasting in phonemic length at three speaking rates. Results revealed facilitative effects of musical experience in the perception of Thai vowel length categories. Specifically, the English musicians patterned similarly to the native Thai listeners, demonstrating higher accuracy at identifying and discriminating between-category vowel length distinctions than at discriminating within-category durational differences due to speaking rate variations. The English musicians also outperformed non-musicians at between-category vowel length discriminations across speaking rates, indicating musicians’ superiority in perceiving categorical phonemic length differences. These results suggest that musicians’ attunement to rhythmic and temporal information in music transferred to facilitating their ability to normalize contextual quantitative variations (due to speaking rate) and perceive non-native temporal phonemic contrasts.
Journal of the Acoustical Society of America | 2016
Angela Cooper; Ann R. Bradlow
Adaptation to foreign-accented sentences can be guided by knowledge of the lexical content of those sentences, which, being an exact match for the target, provides feedback on all linguistic levels. The extent to which this feedback needs to match the accented sentence was examined by manipulating the degree of match on different linguistic dimensions, including sub-lexical, lexical, and syntactic levels. Conditions where target-feedback sentence pairs matched and mismatched generated greater transcription improvement over non-English speech feedback, indicating listeners can draw upon sources of linguistic information beyond matching lexical items, such as sub- and supra-lexical information, during adaptation.
Journal of the Acoustical Society of America | 2014
Angela Cooper; Susanne Brouwer; Ann R. Bradlow
Speech processing can often take place in listening conditions that involve the mixing of speech and background noise. This study used a speeded classification paradigm to investigate whether background noise is perceptually integrated with indexical (Exp. 1) and phonetic (Exp. 2) dimensions of the speech signal. In each experiment, English listeners classified words along one of two dimensions: noise (pure tone vs. white noise) or a speech dimension, either gender (male vs. female in Exp. 1a), talker identity (Sue vs. Carol in Exp. 1b), or phoneme (/p/ vs. /b/ in Exp. 2), while ignoring the other dimension, which could be held constant (control), co-vary (correlated), or vary randomly (orthogonal). The results indicated that background noise was not completely segregated from speech, even when the two auditory streams were spectrally non-overlapping. Perceptual interference was asymmetric, whereby irrelevant indexical and phonetic variation slowed noise classification to a greater extent than the reverse...
Journal of the Acoustical Society of America | 2013
Angela Cooper; Ann R. Bradlow
Talker familiarity can facilitate the extraction of linguistic content from speech signals embedded in broadband noise; however, relatively little research has investigated the impact of talker familiarity with competing speech in the background. This study explores the effects of familiarity with the target or competing talker in speech-in-speech perception. Listeners were first familiarized with and trained to identify three female voices. They then completed a sentence recognition task in the presence of 1-talker babble. Familiarity with either the target or background talker was manipulated in separate conditions. Results revealed significantly better sentence recognition for familiar relative to unfamiliar target talkers in the presence of an unfamiliar background talker; however, sentence recognition with an unfamiliar target talker did not differ depending on background talker familiarity. Thus, while listeners were able to capitalize on familiarity with a talker’s voice to aid target speech recogn...
Journal of the Acoustical Society of America | 2009
Angela Cooper; Yue Wang
Previous research has suggested a relationship between musical experience and L2 proficiency. The present study investigated the influence of musical experience on non‐native perception of Thai phonemic vowel length distinctions across varied speaking rates. Given that musicians are trained to discern temporal distinctions in music, we hypothesized that their musical experience would enhance their ability to perceive non‐native vowel length distinctions as well as their sensitivity to temporal changes as a function of speaking rate. Native English listeners naive to Thai, with and without musical training, as well as native Thai listeners, were presented with minimal pairs of monosyllabic Thai words differing in vowel length at three speaking rates in an identification task and a discrimination task. For identification, participants were asked to identify whether a word contained a long or short vowel. For discrimination, participants heard three successive words and were asked to indicate whether the second word had the same vowel length as the first or last word. The results show significant group differences in identification and discrimination accuracy within and across speaking rates, suggesting that listeners’ perception of phonetic categorical versus temporal acoustic variations differs as a function of linguistic and musical experience.
Journal of the Acoustical Society of America | 2009
Yue Wang; Dawn M. Behne; Angela Cooper; Jung‐Yueh Tu
Lexical tone has generally been found to be processed predominantly in the left hemisphere. However, given that tone is carried by a syllable or a word with segmental information and distinctive meaning, the processing of tone may not be easily disentangled from that of the phonetic segments and word meaning [P. Wong, Brain Res. Bull. 59, 83–95 (2002)]. Indeed, previous research has not examined the lateralization of tone independent of segmental and lexical semantic information. The present study explores how syllable‐based tonal processing in Mandarin Chinese interacts with these different linguistic domains. Using dichotic listening, native Mandarin participants were presented with monosyllabic tonal stimuli constructed with the following different linguistic attributes: (1) real Mandarin words with tonal, segmental phonetic, and lexical semantic information; (2) Mandarin nonwords with tonal and segmental, but no semantic information; (3) nonwords with non‐Mandarin segments (i.e., no native segmental o...
Journal of the Acoustical Society of America | 2008
Yue Wang; Dawn M. Behne; Angela Cooper; Haisheng Jiang; Nina Leung; Jung‐Yueh Tu
This study examines the effects of auditory (A), visual (V), and audio‐visual (AV) training on nonnative speech perception. Native Mandarin Chinese speakers were trained to perceive English voiceless fricatives (in monosyllabic words and nonwords) at three visually distinct places of articulation: interdentals, which are nonexistent in Mandarin, and labiodentals and alveolars, which are common in both languages. Participants were randomly assigned to a control group or one of three 2‐week (six sessions, 40 minutes/session) training groups with a different input modality: A, V, or AV. In pre‐ and post‐tests, the fricatives were presented in four ways for an identification task: A‐only, V‐only, AV congruent (AVc), and AV incongruent (AVi). Additionally, three generalization posttests were administered, testing voiced fricatives, new real words, and a new speaker. Results show that, post‐training, the trainees reveal: (1) improvements corresponding to training type (e.g., the V‐training group improved most for the V‐only stimuli), (2) greater improvements for the familiar (but less visually distinct) alveolars than for the new interdentals, (3) decreased AV‐fusion for the AVi stimuli, and (4) consistent patterns in the generalization tests. Results are discussed in terms of the effects of speech input modality, experience, and L1 on L2 AV speech learning. [Research supported by SSHRC]