Tuuli H. Morrill
George Mason University
Publications
Featured research published by Tuuli H. Morrill.
Psychological Science | 2014
Melissa Baese-Berk; Christopher C. Heffner; Laura C. Dilley; Mark A. Pitt; Tuuli H. Morrill; J. Devin McAuley
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour.
Journal of the Acoustical Society of America | 2015
Melissa Baese-Berk; Tuuli H. Morrill
Non-native speech differs from native speech in multiple ways. Previous research has described segmental and suprasegmental differences between native and non-native speech in terms of group averages. For example, the average speaking rate for non-natives is slower than for natives. However, it is unknown whether non-native speech is also more variable than native speech. This study introduces a method of comparing rate change across utterances, demonstrating that non-native speaking rate is more variable than native speaking rate. These results suggest that future work examining non-native speech perception and production should investigate both mean differences and variability in the signal.
Cognition | 2014
Tuuli H. Morrill; Laura C. Dilley; J. Devin McAuley; Mark A. Pitt
Due to extensive variability in the phonetic realizations of words, there may be few or no proximal spectro-temporal cues that identify a word's onset or even its presence. Dilley and Pitt (2010) showed that the rate of context speech, distal from a to-be-recognized word, can have a sizeable effect on whether or not a word is perceived. This investigation considered whether there is a distinct role for distal rhythm in the disappearing word effect. Listeners heard sentences that had a grammatical interpretation with or without a critical function word (FW) and transcribed what they heard (e.g., "are" in "Jill got quite mad when she heard there are birds" can be removed, and "Jill got quite mad when she heard their birds" is still grammatical). Consistent with a perceptual grouping hypothesis, participants were more likely to report critical FWs when distal rhythm (repeating ternary or binary pitch patterns) matched the rhythm in the FW-containing region than when it did not. Notably, effects of distal rhythm and distal rate were additive. Results demonstrate a novel effect of distal rhythm on the amount of lexical material listeners hear, highlighting the importance of distal timing information and providing new constraints for models of spoken word recognition.
Journal of Phonetics | 2014
Tuuli H. Morrill; Laura C. Dilley; J. Devin McAuley
Prosodic structure is often perceived as exhibiting regularities in the patterning of tone sequences or stressed syllables. Recently, prosodic regularities in the distal (non-local) context have been shown to influence the perceived prosodic constituency of syllables. Three experiments tested the nature of distal prosodic patterns influencing perceptions of prosodic structure, using eight-syllable items ending in ambiguous lexical structures (e.g., tie murder bee, timer derby). For distinct combinations of distal fundamental frequency (f0) and/or timing cues, two patterns were resynthesized on the initial five syllables of experimental items; these were predicted to favor prosodic grouping of final syllables such that listeners would hear a final disyllabic or monosyllabic word, respectively. Results showed distal prosodic patterning affected perceived prosodic constituency when (1) patterns consisted of regularity in timing cues, f0 cues, or both (Experiments 1–2); (2) items ended with either a low–high (Experiment 1) or a high–low (Experiment 2) tonal pattern; and (3) tonal patterns consisted of alternating low- and high-pitched syllables with progressive f0 decrease, i.e., a ‘downtrend’ (Experiment 3). The results reveal that a variety of prosodic patterns in the distal context can influence perceived prosodic constituency and thus lexical processing, and provide a perceptually-motivated explanation for the organization of acoustic speech input into prosodic constituents.
Frontiers in Psychology | 2013
Laura C. Dilley; Tuuli H. Morrill; Elina Banzina
Recent findings [Dilley and Pitt, 2010. Psych. Science. 21, 1664–1670] have shown that manipulating context speech rate in English can cause entire syllables to disappear or appear perceptually. The current studies tested two rate-based explanations of this phenomenon while attempting to replicate and extend these findings to another language, Russian. In Experiment 1, native Russian speakers listened to Russian sentences which had been subjected to rate manipulations and performed a lexical report task. Experiment 2 investigated speech rate effects in cross-language speech perception; non-native speakers of Russian of both high and low proficiency were tested on the same Russian sentences as in Experiment 1. They decided between two lexical interpretations of a critical portion of the sentence, where one choice contained more phonological material than the other (e.g., /stərʌ′na/ “side” vs. /strʌ′na/ “country”). In both experiments, with native and non-native speakers of Russian, context speech rate and the relative duration of the critical sentence portion were found to influence the amount of phonological material perceived. The results support the generalized rate normalization hypothesis, according to which the content perceived in a spectrally ambiguous stretch of speech depends on the duration of that content relative to the surrounding speech, while showing that the findings of Dilley and Pitt (2010) extend to a variety of morphosyntactic contexts and a new language, Russian. Findings indicate that relative timing cues across an utterance can be critical to accurate lexical perception by both native and non-native speakers.
Psychonomic Bulletin & Review | 2015
Tuuli H. Morrill; Melissa M. Baese-Berk; Christopher C. Heffner; Laura C. Dilley
During lexical access, listeners use both signal-based and knowledge-based cues, and information from the linguistic context can affect the perception of acoustic speech information. Recent findings suggest that the various cues used in lexical access are implemented with flexibility and may be affected by information from the larger speech context. We conducted 2 experiments to examine effects of a signal-based cue (distal speech rate) and a knowledge-based cue (linguistic structure) on lexical perception. In Experiment 1, we manipulated distal speech rate in utterances where an acoustically ambiguous critical word was either obligatory for the utterance to be syntactically well formed (e.g., Conner knew that bread and butter (are) both in the pantry) or optional (e.g., Don must see the harbor (or) boats). In Experiment 2, we examined identical target utterances as in Experiment 1 but changed the distribution of linguistic structures in the fillers. The results of the 2 experiments demonstrate that speech rate and linguistic knowledge about critical word obligatoriness can both influence speech perception. In addition, it is possible to alter the strength of a signal-based cue by changing information in the speech environment. These results provide support for models of word segmentation that include flexible weighting of signal-based and knowledge-based cues.
Psychonomic Bulletin & Review | 2015
Tuuli H. Morrill; J. Devin McAuley; Laura C. Dilley; Patrycja A. Zdziarska; Katherine B. Jones; Lisa D. Sanders
The distal prosodic patterning established at the beginning of an utterance has been shown to influence downstream word segmentation and lexical access. In this study, we investigated whether distal prosody also affects word learning in a novel (artificial) language. Listeners were exposed to syllable sequences in which the embedded words were either congruent or incongruent with the distal prosody of a carrier phrase. Local segmentation cues, including the transitional probabilities between syllables, were held constant. During a test phase, listeners rated the items as either words or nonwords. Consistent with the perceptual grouping of syllables being predicted by distal prosody, congruent items were more likely to be judged as words than were incongruent items. The results provide the first evidence that perceptual grouping affects word learning in an unknown language, demonstrating that distal prosodic effects may be independent of lexical or other language-specific knowledge.
Journal of Experimental Psychology: General | 2015
Tuuli H. Morrill; J. Devin McAuley; Laura C. Dilley; David Z. Hambrick
Do the same mechanisms underlie processing of music and language? Recent investigations of this question have yielded inconsistent results. Likely factors contributing to discrepant findings are the use of small samples and failure to control for individual differences in cognitive ability. We investigated the relationship between music and speech prosody processing, while controlling for cognitive ability. Participants (n = 179) completed a battery of cognitive ability tests, the Montreal Battery of Evaluation of Amusia (MBEA) to assess music perception, and a prosody test of pitch peak timing discrimination (early, as in insight, vs. late, as in incite). Structural equation modeling revealed that only music perception was a significant predictor of prosody test performance. Music perception accounted for 34.5% of the variance in prosody test performance; cognitive abilities and music training added only about 8%. These results indicate that musical pitch and temporal processing are highly predictive of pitch discrimination in speech processing, even after controlling for other possible predictors of this aspect of language processing.
Linguistics Vanguard | 2018
Melissa Baese-Berk; Tuuli H. Morrill; Laura C. Dilley
Phonological knowledge is influenced by a variety of cues that reflect predictability (e.g. semantic predictability). Listeners utilize various aspects of predictability when determining what they have heard. In the present paper, we ask how aspects of the acoustic phonetic signal (e.g. speaking rate) interact with other knowledge reflecting predictability (e.g. lexical frequency and collocation strength) to influence how speech is perceived. Specifically, we examine perception of function words by native and non-native speakers. Our results suggest that both native and non-native speakers are sensitive to factors that influence the predictability of the signal, including speaking rate, frequency, and collocation strength, when listening to speech, and use these factors to predict the phonological structure of stretches of ambiguous speech. However, reliance on these cues differs as a function of their experience and proficiency with the target language. Non-native speakers are less sensitive to some aspects of the acoustic phonetic signal (e.g. speaking rate). However, they appear to be quite sensitive to other factors, including frequency. We discuss how these results inform our understanding of the interplay between predictability and speech perception by different listener populations and how use of features reflecting predictability interacts with recovery of phonological structure of spoken language.
Journal of the Acoustical Society of America | 2017
Melissa Baese-Berk; Tuuli H. Morrill
Substantial research has examined the factors making non-native speech more difficult to understand than native speech. Prior work has suggested that speaking rate is one such factor, with slower speech being perceived as less comprehensible and more accented. Further, non-native speech is produced with shorter utterances and more frequent pauses than native speech. Recent work has suggested that in addition to non-native speech being produced more slowly than native speech, it is produced with a more variable speaking rate. In the present study, we examine the relationship between variability in speaking rate, pausing and utterance length, and intelligibility and fluency ratings of non-native speech. We asked listeners to transcribe sentences produced by non-native speakers and to rate the fluency of read speech. Preliminary results suggest that rate variability does correlate with intelligibility of non-native speech, but not of native speech, and rate variability does not correlate as strongly with fluency.