
Publication


Featured research published by Susan M. Garnsey.


Psychonomic Bulletin & Review | 2010

Driving impairs talking

Ensar Becic; Gary S. Dell; Kathryn Bock; Susan M. Garnsey; Tate Kubose; Arthur F. Kramer

It is well known that conversation (e.g., on a cell phone) impairs driving. We demonstrate that the reverse is also true: Language production and comprehension, and the encoding of the products of comprehension into memory, are less accurate when one is driving. Ninety-six pairs of drivers and conversation partners engaged in a story-retelling task in a driving simulator. Half of the pairs were older adults. Each pair completed one dual-task block (driving during the retelling task) and two single-task control blocks. The results showed a decline in the accuracy of the drivers’ storytelling and of their memory for stories that were told to them by their nondriving partners. Speech production suffered an additional cost when the difficulty of driving increased. Measures of driving performance suggested that the drivers gave priority to the driving task when they were conversing. As a result, their linguistic performance suffered.


Proceedings of the National Academy of Sciences of the United States of America | 2007

Imaging cortical dynamics of language processing with the event-related optical signal

Chun-Yu Tse; Chia-Lin Lee; Jason Sullivan; Susan M. Garnsey; Gary S. Dell; Monica Fabiani; Gabriele Gratton

Language processing involves the rapid interaction of multiple brain regions. The study of its neurophysiological bases would therefore benefit from neuroimaging techniques combining both good spatial and good temporal resolution. Here we use the event-related optical signal (EROS), a recently developed imaging method, to reveal rapid interactions between left superior/middle temporal cortices (S/MTC) and inferior frontal cortices (IFC) during the processing of semantically or syntactically anomalous sentences. Participants were presented with sentences of these types intermixed with nonanomalous control sentences and were required to judge their acceptability. ERPs were recorded simultaneously with EROS and showed the typical activities that are elicited when processing anomalous stimuli: the N400 and the P600 for semantic and syntactic anomalies, respectively. The EROS response to semantically anomalous words showed increased activity in the S/MTC (corresponding in time with the N400), followed by IFC activity. Syntactically anomalous words evoked a similar sequence, with a temporal-lobe EROS response (corresponding in time with the P600), followed by frontal activity. However, the S/MTC activity corresponding to a semantic anomaly was more ventral than that corresponding to a syntactic anomaly. These data suggest that activation related to anomaly processing in sentences proceeds from temporal to frontal brain regions for both semantic and syntactic anomalies. This first EROS study investigating language processing shows that EROS can be used to image rapid interactions across cortical areas.


Language and Cognitive Processes | 1993

Event-Related Brain Potentials in the Study of Language: An Introduction.

Susan M. Garnsey

This introduction explains several aspects of the measurement and analysis of event-related brain potentials, and also discusses some interpretation issues that are especially relevant for language studies. It is intended to provide background for language researchers who are not familiar with the methodology, so that they will be able to understand the accompanying experimental papers in this special issue.


Archive | 2010

Animacy and the Resolution of Temporary Ambiguity in Relative Clause Comprehension in Mandarin

Yowyu Lin; Susan M. Garnsey

Relative clause comprehension requires figuring out the role of the head noun in the relative clause. English speakers find it easier to understand relative clauses in which the head noun plays the subject role (e.g., The official who interrogated the councilman…) than those in which the head noun plays the object role (e.g., The official who the councilman interrogated…). A number of explanations have been proposed, some of which predict that the same asymmetry should hold in all languages and others of which predict that the direction of the asymmetry should vary across languages, depending on language properties. Mandarin Chinese provides an opportunity to pit different explanations against one another because it has head-final relative clauses, which helps deconfound some of the proposed explanations, but previous studies of Mandarin have produced rather mixed results. In two reading time studies in Mandarin, we find that 1) object relatives are easier to understand than subject relatives, which is the opposite of the pattern found in English and supports the accounts that predict cross-linguistic differences, 2) it is easier to understand a relative clause whose head noun is omitted, as is allowed in Mandarin, if the sentence contains animacy cues that help disambiguate the sentences, and 3) a semantic feature such as animacy contributes to similarity-based interference during sentence comprehension.


Bilingualism: Language and Cognition | 2013

L1 word order and sensitivity to verb bias in L2 processing

Eun-Kyung Lee; Dora Hsin-Yi Lu; Susan M. Garnsey

Using a self-paced reading task, this study examines whether second language (L2) learners are flexible enough to learn L2 parsing strategies that are not useful in their first language (L1). Native Korean-speaking learners of English were compared with native English speakers on resolving a temporary ambiguity about the relationship between a verb and the noun following it (e.g., The student read [that] the article . . .). Consistent with previous studies, native English reading times showed the usual interaction between the optional complementizer that and the particular verb’s bias about the structures that can follow it. Lower proficiency L1-Korean learners of L2-English did not show a similar interaction, but higher proficiency learners did. Thus, despite native language word order differences (English: SVO; Korean: SOV) that determine the availability of verbs early enough in sentences to generate predictions about upcoming sentence structure, higher proficiency L1-Korean learners were able to learn to optimally combine verb bias and complementizer cues on-line during sentence comprehension just as native English speakers did, while lower proficiency learners had not yet learned to do so. Optimal interactive cue combination during L2 sentence comprehension can probably be achieved only after sufficient experience with the target language.


Attention Perception & Psychophysics | 2012

Seeing facial motion affects auditory processing in noise

Jaimie L. Gilbert; Charissa R. Lansing; Susan M. Garnsey

Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker’s phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory–visual facilitation is distinct to properties of natural, dynamic speech gestures was partially supported.


Language, Cognition and Neuroscience | 2016

A sheet of coffee: an event-related brain potential study of the processing of classifier-noun sequences in English and Mandarin

Zhiying Qian; Susan M. Garnsey

Comprehension of classifier-noun sequences was examined in separate studies in English and Mandarin by comparing event-related brain potential (ERP) responses to classifier-noun matches (a sheet of paper) and mismatches (a sheet of coffee) embedded in sentences. One goal was to determine which ERP components are sensitive to such mismatches, as a clue about the nature of the underlying combinatorial processes. Another goal was to examine effects of classifier constraint strength (a piece of… vs. a sheet of…) on anticipation of a subsequent noun. Results were similar in the two languages, which is remarkable given substantial differences between them in classifier usage. In both languages, nouns evoked larger N400s in mismatching classifier-noun sequences, suggesting that combinatorial processing was primarily semantic, and general classifiers evoked a larger sustained frontal negativity than specific classifiers starting 200 milliseconds after classifier onset, reflecting effects of constraint strength on anticipation of the upcoming noun.


Journal of Cognitive Neuroscience | 2015

Read my lips: Brain dynamics associated with audiovisual integration and deviance detection

Chun-Yu Tse; Gabriele Gratton; Susan M. Garnsey; Michael A. Novak; Monica Fabiani

Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.


Journal of Neurolinguistics | 2015

An ERP study of plural attraction in attachment ambiguity resolution: Evidence for retrieval interference

Eun-Kyung Lee; Susan M. Garnsey

The memory retrieval processes involved in subject–verb agreement processing in sentence comprehension were examined by recording event-related brain potentials (ERPs) while people read temporarily ambiguous relative clause structures such as The reporter shocked the advisor(s) of the politician(s) who was/were at the meeting. When the relative clause verb was singular, forcing attachment to whichever of the nouns was also singular, the presence of a nearby plural attractor noun created greater interference in the process of retrieving the verb’s subject from the memory representation of the sentence so far, revealed by an increased frontal negativity when the attractor noun was plural compared to when it was singular. Crucially, attraction effects did not interact with attachment type. The data suggest that plural attraction effects may arise from retrieval interference in grammatical sentences, and that interference from plural attractor nouns in agreement processing is just another instance of the retrieval interference effects that arise whenever it becomes necessary to search the memory representation of the sentence so far to form dependency relationships among the words in the sentence.


Language, Cognition and Neuroscience | 2018

A comparison of online and offline measures of good-enough processing in garden-path sentences

Zhiying Qian; Susan M. Garnsey; Kiel Christianson

In two self-paced reading experiments and one ERP experiment, this study tested the good-enough processing account, which states that readers sometimes misinterpret sentences like While the man hunted the deer ran into the woods because they fail to fully revise the syntactic structure [Christianson, K., Hollingworth, A., Halliwell, J. F., & Ferreira, F. (2001). Thematic roles assigned along the garden path linger. Cognitive Psychology, 42, 368–407. doi:10.1006/cogp.2001.0752]. Such an account predicts more evidence of reanalysis at the disambiguation on correctly- than incorrectly-answered trials. Experiment 1, which asked Did the man hunt the deer?, and Experiment 2, which asked Did the sentence explicitly say that the man hunted the deer?, showed no difference in reading time between trials with correct and incorrect responses. Experiment 3 found that the amplitude of the P600 was unrelated to comprehension accuracy. These results converge to suggest that failure to reanalyse ambiguous sentences is not the primary reason for misinterpretation. Three norming studies revealed instead that response accuracy was influenced by the likelihood of the events described in the sentences and questions.

Collaboration


Dive into Susan M. Garnsey's collaborations.

Top Co-Authors

Susanne Gahl (University of California)

Chun-Yu Tse (The Chinese University of Hong Kong)

Greg Carlson (University of Rochester)

Jaimie L. Gilbert (University of Wisconsin-Madison)