Kara Hawthorne
University of Arizona
Publications
Featured research published by Kara Hawthorne.
Journal of the Acoustical Society of America | 2018
Kara Hawthorne
Across languages, prosodic boundaries tend to align with syntactic boundaries, and both infant and adult language learners capitalize on these correlations to jump-start syntax acquisition. However, it is unclear which prosodic cues (pauses, final-syllable lengthening, and/or pitch resets across boundaries) are necessary for prosodic bootstrapping to occur. It is also unknown how syntax acquisition is impacted when listeners do not have access to the full range of prosodic or spectral information. These questions were addressed using 14-channel noise-vocoded (spectrally degraded) speech. While pre-boundary lengthening and pauses are well-transmitted through noise-vocoded speech, pitch is not; overall intelligibility is also decreased. In two artificial grammar experiments, adult native English speakers showed a similar ability to use English-like prosody to bootstrap unfamiliar syntactic structures from degraded speech and natural, unmanipulated speech. Contrary to previous findings that listeners may require pitch resets and final lengthening to co-occur if no pause cue is present, participants in the degraded speech conditions were able to detect prosodic boundaries from lengthening alone. Results suggest that pitch is not necessary for adult English speakers to perceive prosodic boundaries associated with syntactic structures, and that prosodic bootstrapping is robust to degraded spectral information.
Journal of Phonetics | 2018
Kara Hawthorne; Juhani Järvikivi; Benjamin V. Tucker
The majority of English nouns, verbs, and adjectives begin with a stressed syllable, and listeners exploit this tendency to help parse the continuous stream of speech into individual words. However, the acoustic manifestation of stress depends on the variety of English being spoken. In two visual world eye-tracking experiments, we tested if Indian English-accented speech causes Canadian English listeners to make stress-based segmentation errors. Participants heard Canadian- or Indian-accented trisyllabic sequences that could be segmented in two ways, depending on the perceived location of stress. For example, [hae.pi.tsə] could be segmented as happy/[tsə] if it is perceived to have stress on the first syllable or as [hae]/pizza if it is perceived to have stress on the second syllable. Results suggest that Indian English-accented speech impairs segmentation in Canadian listeners, and that both accented pitch and other features of the Indian English accent contribute to segmentation difficulties. Findings are interpreted with respect to models of how similarity between two languages impacts the listener's ability to segment words from the speech stream.
Journal of the Acoustical Society of America | 2017
Jessamyn Schertz; Kara Hawthorne
Speech perception requires integration of multiple sources of information, including bottom-up acoustic information and top-down contextual information, and listeners may adjust their reliance on a given source of information depending on the communicative context. This work tests the hypothesis that listeners increase reliance on contextual, relative to acoustic, information when listening to a talker with a foreign accent, under the assumption that the bottom-up information (non-native pronunciation) may be less reliable. Native English listeners categorized an utterance-final target word, where the initial consonant systematically varied in voice onset time (VOT), as either “goat” or “coat.” Target words were embedded in carrier sentences contextually biased towards one of the words (e.g., “The girl milked the [coat/goat]” vs. “The girl put on her [coat/goat]”). Stimuli were created from productions by two talkers: a native English talker and a native Mandarin/L2 English talker with a discernable forei...
Journal of the Acoustical Society of America | 2013
Kara Hawthorne; LouAnn Gerken
In spoken language, particularly of the infant-directed variety, clauses are marked with prosodic cues, such as final lengthening and pitch resets at boundaries (e.g., Soderstrom et al., 2008). Though prosody is specific to language, similar acoustic cues mark phrase boundaries in music, and infants are sensitive to the correlation of these cues at musical boundaries (Jusczyk and Krumhansl, 1993). In the present study, we ask whether younger (4-month-old) and older (16-month-old) infants can use prosody-like boundary cues to facilitate recognition of musical phrases. In a musical extension of Soderstrom et al. (2005), infants were familiarized with one of two brief melodies, then were tested using the head-turn preference procedure on their ability to recognize a phrase versus phrase-straddling excerpt from the familiarization melody when it is embedded in a new musical passage. Preliminary results suggest that younger infants show a familiarity preference for the test melody containing a phrase from the ...
Cognition | 2014
Kara Hawthorne; LouAnn Gerken
Journal of Memory and Language | 2015
Kara Hawthorne; Reiko Mazuka; LouAnn Gerken
Infancy | 2016
Kara Hawthorne; Lauren Rudat; Lou Ann Gerken
Journal of the Acoustical Society of America | 2018
Jessamyn Schertz; Kara Hawthorne
Cognitive Science | 2016
Kara Hawthorne; Anja Arnhold; Emily Sullivan; Juhani Järvikivi
LSA Annual Meeting Extended Abstracts | 2011
Kara Hawthorne