Mara Breen
Mount Holyoke College
Publications
Featured research published by Mara Breen.
Language and Cognitive Processes | 2010
Mara Breen; Evelina Fedorenko; Michael Wagner; Edward Gibson
This paper reports three studies aimed at addressing three questions about the acoustic correlates of information structure in English: (1) do speakers mark information structure prosodically and, to the extent they do, (2) what are the acoustic features associated with different aspects of information structure, and (3) how well can listeners retrieve this information from the signal? The information structure of subject–verb–object sentences was manipulated via the questions preceding those sentences: elements in the target sentences were either focused (i.e., the answer to a wh-question) or given (i.e., mentioned in prior discourse); furthermore, focused elements had either an implicit or an explicit contrast set in the discourse; finally, either only the object was focused (narrow object focus) or the entire event was focused (wide focus). The results across all three experiments demonstrated that people reliably mark (1) focus location (subject, verb, or object) using greater intensity, longer duration, and higher mean and maximum F0, and (2) focus breadth, such that narrow object focus is marked with greater intensity, longer duration, and higher mean and maximum F0 on the object than wide focus. Furthermore, when participants are made aware of prosodic ambiguity present across different information structures, they reliably mark focus type, so that contrastively focused elements are produced with greater intensity, longer duration, and lower mean and maximum F0 than noncontrastively focused elements. In addition to having important theoretical consequences for accounts of semantics and prosody, these experiments demonstrate that linear residualisation successfully removes individual differences in people's productions, thereby revealing cross-speaker generalisations. Furthermore, discriminant modelling allows us to objectively determine the acoustic features that underlie meaning differences.
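The abstract describes linear residualisation only at a high level. A minimal sketch of the idea, assuming the simplest case in which speaker identity is the only covariate regressed out (equivalent to subtracting each speaker's mean), might look like this; the measurements and speaker labels below are invented for illustration:

```python
import numpy as np

def residualize_by_speaker(values, speakers):
    """Remove per-speaker baseline differences from an acoustic measure
    by subtracting each speaker's mean, i.e., regressing the measure on
    speaker identity and keeping the residuals."""
    values = np.asarray(values, dtype=float)
    residuals = np.empty_like(values)
    for s in set(speakers):
        mask = np.array([sp == s for sp in speakers])
        residuals[mask] = values[mask] - values[mask].mean()
    return residuals

# Hypothetical F0 measurements (Hz) from two speakers with very
# different baseline pitch; after residualisation, the within-speaker
# pattern (second token higher than first) is directly comparable.
f0 = [110.0, 130.0, 210.0, 230.0]
speakers = ["s1", "s1", "s2", "s2"]
print(residualize_by_speaker(f0, speakers))  # residuals: -10, 10, -10, 10
```

Once baseline differences are removed this way, residuals from different speakers can be pooled to test for the cross-speaker acoustic generalisations the abstract reports.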
Corpus Linguistics and Linguistic Theory | 2012
Mara Breen; Laura C. Dilley; John Kraemer; Edward Gibson
Abstract Speech researchers often rely on human annotation of prosody to generate data to test hypotheses and generate models. We present an overview of two prosodic annotation systems: ToBI (Tones and Break Indices) (Silverman et al., 1992), and RaP (Rhythm and Pitch) (Dilley & Brown, 2005), which was designed to address several limitations of ToBI. The paper reports two large-scale studies of inter-transcriber reliability for ToBI and RaP. Comparable reliability for both systems was obtained for a variety of prominence- and boundary-related agreement categories. These results help to establish RaP as an alternative to ToBI for research and technology applications.
Cognition | 2012
Roger Levy; Evelina Fedorenko; Mara Breen; Edward Gibson
In most languages, most of the syntactic dependency relations found in any given sentence are projective: the word-word dependencies in the sentence do not cross each other. Some syntactic dependency relations, however, are non-projective: some of their word-word dependencies cross each other. Non-projective dependencies are both rarer and more computationally complex than projective dependencies; hence, it is of natural interest to investigate whether there are any processing costs specific to non-projective dependencies, and whether factors known to influence processing of projective dependencies also affect non-projective dependency processing. We report three self-paced reading studies, together with corpus and sentence completion studies, investigating the comprehension difficulty associated with the non-projective dependencies created by the extraposition of relative clauses in English. We find that extraposition over either verbs or prepositional phrases creates comprehension difficulty, and that this difficulty is consistent with probabilistic syntactic expectations estimated from corpora. Furthermore, we find that manipulating the expectation that a given noun will have a postmodifying relative clause can modulate and even neutralize the difficulty associated with extraposition. Our experiments rule out accounts based purely on derivational complexity and/or dependency locality in terms of linear positioning. Our results demonstrate that comprehenders maintain probabilistic syntactic expectations that persist beyond projective-dependency structures, and suggest that it may be possible to explain observed patterns of comprehension difficulty associated with extraposition entirely through probabilistic expectations.
Language and Cognitive Processes | 2011
Mara Breen; Duane G. Watson; Edward Gibson
This paper evaluates two classes of hypotheses about how people prosodically segment utterances: (1) meaning-based proposals, with a focus on Watson and Gibson's (2004) proposal, according to which speakers tend to produce boundaries before and after long constituents; and (2) balancing proposals, according to which speakers tend to produce boundaries at evenly spaced intervals. In order to evaluate these proposals, we elicited naïve speakers’ productions of sentences systematically varying in the length of three postverbal constituents: a direct object, an indirect object (a prepositional phrase), and a verb phrase modifier, as in the sentence, The teacher assigned the chapter (on local history) to the students (of social science) yesterday/before the first midterm exam. Mixed-effects modelling was used to analyse the pattern of prosodic boundaries in these sentences, where boundaries were defined either in terms of acoustic measures (word duration and silence) or following the ToBI (Tones and Break Indices) prosodic annotation scheme. Watson and Gibson's (2004) meaning-based proposal, with the additional constraint that boundary predictions are evaluated with respect to local sentence context rather than the entire sentence, significantly outperformed the balancing alternatives.
Language and Linguistics Compass | 2014
Mara Breen
Recently, psycholinguistics has seen an increase in the number of empirical studies investigating the role of implicit (silent) prosodic representations in reading. The current paper reviews studies from the last several years conducted to investigate Fodor’s (2002) Implicit Prosody Hypothesis, which maintains that even during silent reading, readers generate representations of sentence intonation, phrasing, stress, and rhythm, and that these representations can affect readers’ interpretation of the text. We argue that the accumulated evidence suggests that implicit prosody can influence online sentence interpretation and explore the implications of these findings for models of sentence processing. For over one hundred years, researchers have wondered about the nature of the inner voice during silent reading. Huey (1908/1968) was one of the first to ponder this idea, concluding: ‘The simple fact is that the inner saying or hearing of what is read seems to be the core of ordinary reading, the “thing in itself,” so far as there is such a part of such a complex process’ (p. 122). This assumption, that the inner voice is part and parcel of any normal reading, has been maintained for the majority of the 20th century. Chafe (1988) recounts the writings of Eudora Welty and Russell C. Long on the topic, concluding: ‘I am not alone in believing that writers when they write, and readers when they read, experience auditory imagery of specific intonations, accents, pauses, rhythms, and voice qualities, even if the writing itself may show these features poorly, if at all. This “covert prosody” of written language is evidently something that is quite apparent to a reflective writer or reader’ (p. 397). An increasing interest in spoken language over the last 25 years has inspired psycholinguistic researchers to begin to critically consider the role of the inner voice during reading.
One of the main questions concerning researchers is whether the inner voice serves a purpose during reading. Is the producing, or hearing, of words and phrases during reading simply epiphenomenal—a by-product of the fact that language has been spoken far longer than it has been written (Gelb 1952)—or does it enhance the reader’s processing and understanding of the written word? The goal of the current paper is to review recent psycholinguistic investigations of silent reading to begin to answer this question. Studies of phonology’s role in reading have focused on two types of phonological representation: segmental phonology and suprasegmental phonology. Segmental phonology concerns the individual phonemes which make up words, whereas suprasegmental phonology deals with sound phenomena above the level of the word; that is, acoustic information that does not serve to distinguish one word from another, but rather conveys information about the semantic context of the word, or the attitude of the speaker. For example, the segmental features of the word ‘fire’ include the phonemes /f/ /aI/ and /r/, whereas suprasegmental features determine whether the word is produced as a statement (‘Fire.’), a question (‘Fire?’),
Quarterly Journal of Experimental Psychology | 2013
Mara Breen; Charles Clifton
Breen and Clifton (Stress matters: Effects of anticipated lexical stress on silent reading. Journal of Memory and Language, 2011, 64, 153–170) argued that readers’ eye movements during silent reading are influenced by the stress patterns of words. This claim was supported by the observation that syntactic reanalysis that required concurrent metrical reanalysis (e.g., a change from the noun form of abstract to the verb form) resulted in longer reading times than syntactic reanalysis that did not require metrical reanalysis (e.g., a change from the noun form of report to the verb form). However, the data contained a puzzle: The disruption appeared on the critical word (abstract, report) itself, although the material that forced the part of speech change did not appear until the next region. Breen and Clifton argued that parafoveal preview of the disambiguating material triggered the revision and that the eyes did not move on until a fully specified lexical representation of the critical word was achieved. The present experiment used a boundary change paradigm in which parafoveal preview of the disambiguating region was prevented. Once again, an interaction was observed: Syntactic reanalysis resulted in particularly long reading times when it also required metrical reanalysis. However, now the interaction did not appear on the critical word, but only following the disambiguating region. This pattern of results supports Breen and Clifton's claim that readers form an implicit metrical representation of text during silent reading.
Language, Cognition and Neuroscience | 2014
Mara Breen; Laura C. Dilley; J. Devin McAuley; Lisa D. Sanders
Prosodic context several syllables prior (i.e., distal) to an ambiguous word boundary influences speech segmentation. To assess whether distal prosody influences early perceptual processing or later lexical competition, EEG was recorded while subjects listened to eight-syllable sequences with ambiguous word boundaries for the last four syllables (e.g., tie murder bee vs. timer derby). Pitch and duration of the first five syllables were manipulated to induce sequence segmentation with either a monosyllabic or disyllabic final word. Behavioural results confirmed a successful manipulation. Moreover, penultimate syllables (e.g., der) elicited a larger anterior positivity 200–500 ms after the onset for prosodic contexts predicted to induce word-initial perception of these syllables. Final syllables (e.g. bee) elicited a similar anterior positivity in the context predicted to induce word-initial perception of these syllables. Additionally, these final syllables elicited a larger positive-to-negative deflection (P1-N1) 60–120 ms after onset, and a larger N400. The finding that prosodic characteristics of speech several syllables prior to ambiguous word boundaries modulate both early and late event-related potentials (ERPs) elicited by subsequent syllable onsets provides evidence that distal prosody influences early perceptual processing and later lexical competition.
Journal of Experimental Psychology: General | 2018
Adam Tierney; Aniruddh D. Patel; Mara Breen
In the “speech-to-song illusion,” certain spoken phrases are heard as highly song-like when isolated from context and repeated. This phenomenon occurs to a greater degree for some stimuli than for others, suggesting that particular cues prompt listeners to perceive a spoken phrase as song. Here we investigated the nature of these cues across four experiments. In Experiment 1, participants were asked to rate how song-like spoken phrases were after each of eight repetitions. Initial ratings were correlated with the consistency of an underlying beat and within-syllable pitch slope, while rating change was linked to beat consistency, within-syllable pitch slope, and melodic structure. In Experiment 2, the within-syllable pitch slope of the stimuli was manipulated, and this manipulation changed the extent to which participants heard certain stimuli as more musical than others. In Experiment 3, the extent to which the pitch sequences of a phrase fit a computational model of melodic structure was altered, but this manipulation did not have a significant effect on musicality ratings. In Experiment 4, the consistency of intersyllable timing was manipulated, but this manipulation did not have an effect on the change in perceived musicality after repetition. Our methods provide a new way of studying the causal role of specific acoustic features in the speech-to-song illusion via subtle acoustic manipulations of speech, and show that listeners can rapidly (and implicitly) assess the degree to which nonmusical stimuli contain musical structure.
Cognition | 2018
Mara Breen
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes.
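The duration model described in this abstract can be approximated in spirit with an ordinary least-squares regression of word duration on a few predictors. This is a simplified stand-in, not the paper's actual analysis (which includes more predictors, inter-onset intervals, and a richer model structure); every number below is invented for illustration:

```python
import numpy as np

# Hypothetical predictors for five one-syllable words:
# [number of phonemes, log word frequency, metric grid height]
X = np.array([
    [3, 4.2, 1],
    [2, 5.0, 2],
    [4, 3.1, 3],
    [3, 4.8, 1],
    [5, 2.5, 2],
], dtype=float)
durations = np.array([180.0, 150.0, 260.0, 170.0, 290.0])  # ms, made up

# Add an intercept column and fit ordinary least squares; the fitted
# slope on the third column plays the role of the metric-grid-height
# parameter whose inclusion improved model fit in the study.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, durations, rcond=None)
print(coef)  # intercept plus one slope per predictor
```

Comparing the fit of this model with and against a version lacking the grid-height column is the kind of model comparison the abstract reports, here reduced to its bare numerical skeleton.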
Archive | 2015
Mara Breen
Fodor’s introduction of the implicit prosody hypothesis (IPH; 2002) inspired a series of studies exploring how readers’ “inner voice” influences sentence comprehension. In this chapter, I review the history of the IPH and a variety of studies which have demonstrated that implicit phrasing, accentuation, and rhythm appear to play a role in syntactic parsing. I explore how work moving forward might address the question of the psychological reality of the “inner voice,” and how we can investigate the relative contribution of implicit prosody to sentence processing in consideration of other known information sources.