Michael McAuliffe
University of British Columbia
Publication
Featured research published by Michael McAuliffe.
Frontiers in Psychology | 2013
Molly Babel; Michael McAuliffe; Graham Haber
This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet we examine mergers using spontaneous phonetic imitation, a phenomenon in which perceptual input necessarily influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate that spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged.
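The Pillai-Bartlett trace used above as the acoustic measure of mergedness comes from a one-way MANOVA over vowel measurements (e.g., F1/F2) grouped by lexical set: it approaches 0 when the two categories fully overlap and, for two groups, approaches 1 when they are well separated. A minimal sketch with simulated formant values (the means, standard deviations, and token counts below are illustrative, not the study's data):

```python
import numpy as np

def pillai_score(X, labels):
    """Pillai-Bartlett trace from a one-way MANOVA.

    X: (n, d) array of acoustic measures (e.g., F1/F2 at the vowel midpoint).
    labels: length-n sequence of category labels (e.g., NEAR vs. SQUARE).
    Returns trace(H @ inv(H + E)), where H is the between-group and E the
    within-group sums-of-squares-and-cross-products (SSCP) matrix.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    grand_mean = X.mean(axis=0)
    d = X.shape[1]
    H = np.zeros((d, d))  # between-group SSCP
    E = np.zeros((d, d))  # within-group SSCP
    for g in np.unique(labels):
        Xg = X[labels == g]
        diff = (Xg.mean(axis=0) - grand_mean)[:, None]
        H += len(Xg) * diff @ diff.T
        centered = Xg - Xg.mean(axis=0)
        E += centered.T @ centered
    return float(np.trace(H @ np.linalg.inv(H + E)))

# Hypothetical F1/F2 tokens for two unmerged vowel categories.
rng = np.random.default_rng(1)
near = rng.normal([450, 2100], 40, size=(50, 2))
square = rng.normal([600, 1900], 40, size=(50, 2))
X = np.vstack([near, square])
labels = ["NEAR"] * 50 + ["SQUARE"] * 50
print(pillai_score(X, labels))  # close to 1: categories are distinct
```

Running the same function on tokens drawn from a single distribution yields a value near 0, which is what a decrease in mergedness moves away from.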
Journal of the Acoustical Society of America | 2011
Michael McAuliffe; Molly Babel
There is a clear link between the discourse status of a word and its degree of reduction. For instance, Gregory [Dissertation (2002)] provided evidence that hearer knowledge affected reduction in production for discourse-old items. Lexical information, such as word frequency, also plays a crucial role in the degree of reduction [Fosler-Lussier and Morgan, Speech Commun. (1999)]. These previous studies either looked at short discourses or at words isolated from context. The current study therefore investigates longer discourses, using the VIC Corpus [Pitt et al., Corpus (2007)]. The primary questions are to what degree repeated uses cause further reduction, and whether the reductions are syllabically and segmentally uniform across a word. Many studies on reduction rely on the intuition that reductions occur when information load is light, such as when the word was repeated recently or has a high probability of occurrence, so the prediction is that unstressed syllables would show more reduction than stressed syll...
conference of the international speech communication association | 2016
Michael McAuliffe; Molly Babel; Charlotte Vaughn
Perceptual learning of novel pronunciations is a seemingly robust and efficient process for adapting to unfamiliar speech patterns. In this study we compare perceptual learning of /s/ words where a medially occurring /s/ is substituted with /ʃ/, rendering, for example, castle as /kæʃl/ instead of /kæsl/. Exposure to the novel pronunciations is presented in the guise of a lexical decision task. Perceptual learning is assessed in a categorization task where listeners are presented with minimal pair continua (e.g., sock-shock). Given recent suggestions that perceptual learning may be more robust with natural as opposed to synthesized speech, we compare perceptual learning in groups that either receive natural /s/-to-/ʃ/ words or resynthesized /s/-to-/ʃ/ words. Despite low word endorsement rates in the lexical decision task, both groups of listeners show robust generalization in perceptual learning to the novel minimal pair continua, thereby indicating that at least with high quality resynthesis, perceptual learning in natural and synthesized speech is roughly equivalent.
Journal of the Acoustical Society of America | 2016
Irene de la Cruz-Pavía; Judit Gervain; Michael McAuliffe; Eric Vatikiotis-Bateson; Janet F. Werker
The acoustic realization of phrasal prominence correlates systematically with the order of Verbs and Objects in natural languages. Prominence is realized as a durational contrast in V(erb)-O(bject) languages (English: short-long, to [Ro]me), and as a pitch/intensity contrast in O(bject)-V(erb) languages (Japanese: high-low, [ˈTo]kyo ni). Seven-month-old infants can use phrasal prominence to segment unknown artificial languages. This information might thus allow prelexical infants to learn the basic word order of their native language(s). The present study investigates whether this differing realization of phrasal prominence is also found in I(nfant) D(irected) S(peech), a previously unexamined speech style. We recorded 15 adult native talkers of languages with opposite word orders producing target phrases (English: VO, e.g., behind furniture; Japanese: OV, e.g., jisho niwa) embedded in carrier sentences, both in A(dult) D(irected) S(peech) and IDS, and conducted acoustic analysis of the phrases (i.e., pitc...
Journal of the Acoustical Society of America | 2013
Michael McAuliffe
Listeners are able to rapidly shift their acoustic-phonetic categories in the face of speaker-specific variation. However, an open question is how top-down expectations can influence this perceptual adaptation. Under Adaptive Resonance Theory [Grossberg (2003)], if an acoustic token provides initial activation of a linguistic category, top-down expectations about that category can boost the resonance and lead to activation of the category even with an ambiguous token. Ambiguous tokens in positions with stronger expectations are hypothesized to lead to less perceptual adaptation than in positions with weaker expectations. To test this hypothesis, two groups of participants will be exposed to 10 ambiguous /s/-/ʃ/ sounds during a lexical decision exposure task. The groups differ in where in the word these sounds will occur, either in the onset or coda of CVC words. Lexical bias, as a form of top-down expectations, is stronger for ambiguous coda sounds [Pitt and Szostak (2012)]. I predict that in a /s/-/ʃ/ categorizat...
Journal of the Acoustical Society of America | 2012
Michael McAuliffe; Molly Babel
Previous research on vowel realizations within the formant space has found effects of lexical factors such as word frequency in both laboratory settings (Wright, 2004; Munson & Solomon, 2004; and others) and in spontaneous speech (Gahl, Yao, & Johnson, 2012). In addition to lexical factors, semantic context has also been found to influence vowel realizations in laboratory settings, such as emphatic/non-emphatic contexts (Fox & Jacewicz, 2009) and whether a word is predictable from the preceding words (Clopper & Pierrehumbert, 2008). The current project examines whether these laboratory effects of semantic context on vowel realization extend to spontaneous speech. As in Gahl, Yao, and Johnson (2012), the Buckeye Corpus (Pitt et al., 2007) will be used, with the same predictors plus a semantic predictability measure. Semantic predictability for a given word will be calculated based on the relatedness of that word to words occurring up to five seconds before it, where relat...
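One common way to operationalize a relatedness-based predictability measure like the one described above is the mean cosine similarity between a target word's vector and the vectors of the words in the preceding context window. The abstract is truncated before specifying the actual relatedness measure, so the sketch below is only an illustrative assumption, with toy two-dimensional vectors standing in for whatever distributional model supplies them:

```python
import numpy as np

def semantic_predictability(target_vec, context_vecs):
    """Mean cosine similarity between a target word's vector and the
    vectors of preceding context words (hypothetical operationalization;
    the vectors would come from some distributional semantic model).
    Returns 0.0 when there is no preceding context."""
    if len(context_vecs) == 0:
        return 0.0
    t = np.asarray(target_vec, dtype=float)
    t = t / np.linalg.norm(t)
    sims = []
    for c in context_vecs:
        c = np.asarray(c, dtype=float)
        sims.append(float(t @ (c / np.linalg.norm(c))))
    return sum(sims) / len(sims)

# Toy example: one context word identical to the target, one orthogonal.
target = [1.0, 0.0]
context = [[1.0, 0.0], [0.0, 1.0]]
print(semantic_predictability(target, context))  # → 0.5
```

A per-token score like this can then enter a regression over formant measurements alongside the lexical predictors.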
Journal of the Acoustical Society of America | 2011
Michael McAuliffe
There appears to be a systematic difference in the dialectal variation of phonemes within languages. In his treatment of English dialects, Wells [(1982)] devoted three volumes to detailing the vocalic differences among the dialects of English, but he remarked that these dialects “do not differ very greatly in their consonant systems” (1982:178). Conversely, Hualde [(2005)] detailed the dialectal variation of each Spanish consonant, but said of vocalic variation only that “vowel qualities are remarkably stable among Spanish dialects” (2005:128). These differences in variation suggest that English treats consonants as more integral to a word while Spanish treats vowels that way, as those are the more stable segments in the respective languages. To test this observation empirically, a lexical decision task [Goldinger (1996)] is employed with native speakers of both English and Spanish, using nonwords that differ from attested words in either a single consonant (such as ship versus shid in English) or a single vowel (such as ship versus shup). The predicted outcome is that languages that value consonants, such as English, will make the decision faster when the consonant is changed, and languages that value vowels, such as Spanish, will make the decision faster when the vowel is changed.
conference of the international speech communication association | 2017
Michael McAuliffe; Michaela Socolof; Sarah Mihuc; Michael Wagner; Morgan Sonderegger
Journal of the Acoustical Society of America | 2016
Michael McAuliffe; Molly Babel
conference of the international speech communication association | 2017
Michael Wagner; Michael McAuliffe