Publications


Featured research published by Benjamin K. Bergen.


Cognitive Science | 2007

Spatial and Linguistic Aspects of Visual Imagery in Sentence Comprehension.

Benjamin K. Bergen; Shane Lindsay; Teenie Matlock; Srini Narayanan

There is mounting evidence that language comprehension involves the activation of mental imagery of the content of utterances (Barsalou, 1999; Bergen, Chang, & Narayan, 2004; Bergen, Narayan, & Feldman, 2003; Narayan, Bergen, & Weinberg, 2004; Richardson, Spivey, McRae, & Barsalou, 2003; Stanfield & Zwaan, 2001; Zwaan, Stanfield, & Yaxley, 2002). This imagery can have motor or perceptual content. Three main questions about the process remain under-explored, however. First, are lexical associations with perception or motion sufficient to yield mental simulation, or is the integration of lexical semantics into larger structures, like sentences, necessary? Second, what linguistic elements (e.g., verbs, nouns, etc.) trigger mental simulations? Third, how detailed are the visual simulations that are performed? A series of behavioral experiments address these questions, using a visual object categorization task to investigate whether up- or down-related language selectively interferes with visual processing in the same part of the visual field (following Richardson et al., 2003). The results demonstrate that either subject nouns or main verbs can trigger visual imagery, but only when used in literal sentences about real space (metaphorical language does not yield significant effects), which implies that it is the comprehension of the sentence as a whole, and not simply lexical associations, that yields imagery effects. These studies also show that the evoked imagery contains detail as to the part of the visual field where the described scene would take place.


Brain and Language | 2010

Grammatical aspect and mental simulation

Benjamin K. Bergen; Kathryn Wheeler

When processing sentences about perceptible scenes and performable actions, language understanders activate perceptual and motor systems to perform mental simulations of those events. But little is known about exactly what linguistic elements activate modality-specific systems during language processing. While it is known that content words, like nouns and verbs, influence the content of a mental simulation, the role of grammar is less well understood. We investigate the role of grammatical markers in mental simulation through two experiments in which we manipulate the meanings of sentences by modifying the grammatical aspect they use. Using the Action-sentence Compatibility Effect (ACE) methodology [Glenberg, A., Kaschak, M. (2002). Grounding language in action. Psychonomic Bulletin and Review, 9, 558-565], we show that progressive sentences about hand motion facilitate manual action in the same direction, while perfect sentences that are identical in every way except their aspect do not. The broader implication of this finding for language processing is that while content words tell understanders what to mentally simulate and what brain regions to use in performing these simulations, grammatical constructions such as aspect modulate how those simulations are performed.


Quarterly Journal of Experimental Psychology | 2014

Doing arithmetic by hand: Hand movements during exact arithmetic reveal systematic, dynamic spatial processing

Tyler Marghetis; Rafael Núñez; Benjamin K. Bergen

Mathematics requires precise inferences about abstract objects inaccessible to perception. How is this possible? One proposal is that mathematical reasoning, while concerned with entirely abstract objects, nevertheless relies on neural resources specialized for interacting with the world—in other words, mathematics may be grounded in spatial or sensorimotor systems. Mental arithmetic, for instance, could involve shifts in spatial attention along a mental “number-line”, the product of cultural artefacts and practices that systematically spatialize number and arithmetic. Here, we investigate this hypothesized spatial processing during exact, symbolic arithmetic (e.g., 4 + 3 = 7). Participants added and subtracted single-digit numbers and selected the exact solution from responses in the top corners of a computer monitor. While they made their selections using a computer mouse, we recorded the movement of their hand as indexed by the streaming x, y coordinates of the computer mouse cursor. As predicted, hand movements during addition and subtraction were systematically deflected toward the right and the left, respectively, as if calculation involved simultaneously simulating motion along a left-to-right mental number-line. This spatial–arithmetical bias, moreover, was distinct from—but correlated with—individuals’ spatial–numerical biases (i.e., spatial–numerical association of response codes, SNARC, effect). These results are the first evidence that exact, symbolic arithmetic prompts systematic spatial processing associated with mental calculation. We discuss the possibility that mathematical calculation relies, in part, on an integrated system of spatial processes.
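The trajectory analysis described above can be sketched in a few lines: for each recorded mouse path, measure how far the cursor deviates horizontally from the straight line connecting its start and end points, with positive values meaning rightward deflection. This is a minimal illustration of the measure, not the authors' actual analysis code; the function name and the sample trajectories are hypothetical.

```python
# Illustrative sketch (not the published analysis pipeline): the mean
# signed horizontal deflection of a mouse trajectory from the straight
# start-to-end line. Positive values indicate rightward deflection,
# as reported for addition; negative values, leftward, as for subtraction.

def mean_horizontal_deflection(trajectory):
    """Mean signed x-offset of (x, y) points from the straight line
    joining the first and last points of the trajectory."""
    (x0, y0), (xn, yn) = trajectory[0], trajectory[-1]
    deflections = []
    for x, y in trajectory:
        if yn == y0:
            x_line = x0  # degenerate horizontal path
        else:
            # Parameterize the straight line by vertical progress.
            t = (y - y0) / (yn - y0)
            x_line = x0 + t * (xn - x0)
        deflections.append(x - x_line)
    return sum(deflections) / len(deflections)

# Hypothetical trials: cursor starts at bottom centre, ends in a top corner.
addition_trial = [(400, 600), (490, 475), (590, 225), (600, 100)]
subtraction_trial = [(400, 600), (310, 475), (210, 225), (200, 100)]

print(mean_horizontal_deflection(addition_trial))     # 20.0 (rightward)
print(mean_horizontal_deflection(subtraction_trial))  # -20.0 (leftward)
```

Averaging this per-trial statistic across addition and subtraction trials separately would expose the left/right bias the study reports.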


Frontiers in Psychology | 2012

Writing Direction Affects How People Map Space Onto Time

Benjamin K. Bergen; Ting Ting Chan Lau

What determines which spatial axis people use to represent time? We investigate effects of writing direction. English, like Mandarin Chinese in mainland China, is written left to right and then top to bottom. But in Taiwan, characters are written predominantly top to bottom and then right to left. Because being a fluent reader–writer entails thousands of hours of experience with eye and hand movement in the direction dictated by one’s writing system, it could be that writing system direction affects the axis used to represent time in terms of space. In a behavioral experiment, we had native speakers of English, Mandarin Chinese from mainland China, and Mandarin Chinese from Taiwan place sets of cards in temporal order. These cards depicted stages of development of plants and animals, for instance: tadpole, froglet, frog. Results showed that English speakers always represented time as moving from left to right (LR). Mainland Chinese participants trended in the same direction, but a small portion laid the cards out from top to bottom. Taiwanese participants were just as likely to depict time as moving from LR as from top to bottom, with a large minority depicting it as moving from right to left. Native writing system affects how people represent time spatially.


Cognitive Linguistics | 2005

The convergent evolution of radial constructions: French and English deictics and existentials

Benjamin K. Bergen; Madelaine C. Plauché

English deictic and existential there-constructions have been analyzed as constituting a single radial category of form-meaning pairings, related through motivated links, such as metaphor (Lakoff 1987). By comparison, existentials and deictic demonstratives in French make use of two distinct radial categories. The current study analyzes the varied senses of French deictic demonstratives (voilà ‘there is’ and voici ‘here is’) and the existential (il y a ‘there is’). We argue that the syntactic behavior of each of their senses is best explained by the semantic and pragmatic function of that sense, in combination with constraints imposed by its relation to other senses. A cross-linguistic comparison of the deictic demonstrative and existential constructions in French and English supports this claim: despite the different historical origins of these forms in the two languages, they display a strikingly similar array of uses and formal constraints. The parallel evolution of deictics and existentials in these two languages is interpreted as a case of convergent evolution of linguistic items, much like convergent evolution in biological species.


Topics in Cognitive Science | 2016

How Language Programs the Mind

Gary Lupyan; Benjamin K. Bergen

Many animals can be trained to perform novel tasks. People, too, can be trained, but sometime in early childhood people transition from being trainable to something qualitatively more powerful: being programmable. We argue that such programmability constitutes a leap in the way that organisms learn, interact, and transmit knowledge, and that what facilitates or enables this programmability is the learning and use of language. We then examine how language programs the mind and argue that it does so through the manipulation of embodied, sensorimotor representations. The role language plays in controlling mental representations offers important insights for understanding its origin and evolution.


Proceedings of the 6th International Conference (EVOLANG6) | 2006

ON THE EMERGENCE OF COMPOSITIONALITY

Joachim De Beule; Benjamin K. Bergen

Compositionality is a hallmark of human language: words and morphemes can be factorially combined to produce a seemingly limitless number of viable strings. This contrasts with nonhuman communication systems, which for the most part are holistic, encoding a whole message through a single, gestalt form. Why does every human language adopt a compositional strategy? In this paper, we show that compositional language can arise automatically through grounded communication among populations of communicators. The proposed mechanism is the following: if a holistic and a compositional approach are in competition, and if both structured (compositional) and atomic meanings need to be communicated, the holistic strategy becomes less successful because it does not recruit already acquired bits of language. We demonstrate the viability of this explanation through computer simulations in which populations of artificial agents perform a communicative task, describing scenes that they have observed. Successful language strategies (that is, those yielding successful transmission of information about a scene) are reinforced while unsuccessful ones are demoted. The simulations show that this reinforcement on the basis of communicative success indeed leads to the dominance of compositional language as long as the fraction of unstructured meaning to be communicated is sufficiently high. Moreover, following Elman (1993), we then show that the same effect can be achieved by, instead of manipulating the world (the fraction of unstructured meaning presented to the agents), letting the agents themselves go through developmental stages. These simulations confirm that simple reinforcement mechanisms applied during communicative interactions can account for the emergence of linguistic compositionality.
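The competition mechanism described above can be illustrated with a toy simulation. This is a minimal sketch of my own, not De Beule and Bergen's actual model: the lexicon representations, the scoring rule, and running both strategies in parallel on the same meaning stream are all simplifying assumptions. The key property it does capture is that component forms acquired for atomic meanings are reused for free in structured meanings, while holistic forms are not.

```python
# Toy illustration (assumed model, not the published simulation):
# a holistic and a compositional strategy compete to express a stream
# of atomic and structured meanings; each scores a point when its
# lexicon already covers the meaning, and otherwise learns a new entry.
import random

random.seed(0)
COMPONENTS = list("abcdefgh")

def run(frac_atomic, rounds=2000):
    holistic_lexicon = set()   # one gestalt form per whole meaning
    comp_lexicon = set()       # one form per meaning component
    score = {"holistic": 0, "compositional": 0}
    for _ in range(rounds):
        if random.random() < frac_atomic:
            meaning = (random.choice(COMPONENTS),)         # atomic meaning
        else:
            meaning = tuple(random.sample(COMPONENTS, 2))  # structured meaning
        # Holistic strategy: must have seen this exact meaning before.
        if meaning in holistic_lexicon:
            score["holistic"] += 1
        else:
            holistic_lexicon.add(meaning)
        # Compositional strategy: succeeds if every component is known,
        # including components first learned from atomic meanings.
        if all(c in comp_lexicon for c in meaning):
            score["compositional"] += 1
        else:
            comp_lexicon.update(meaning)
    return score

print(run(frac_atomic=0.5))
```

Because the compositional lexicon saturates after a handful of components while the holistic lexicon must enumerate every distinct pairing, the compositional score pulls ahead, mirroring the qualitative outcome the paper reports.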


Journal of Experimental Psychology: General | 2013

The Crosstalk Hypothesis: Why Language Interferes with Driving.

Benjamin K. Bergen; Nathan Medeiros-Ward; Kathryn Wheeler; Frank A. Drews; David L. Strayer

Performing two cognitive tasks at the same time can degrade performance for either domain-general reasons (e.g., both tasks require attention) or domain-specific reasons (e.g., both tasks require visual working memory). We tested predictions of these two accounts of interference on the task of driving while using language, a naturally occurring dual task. Using language and driving a vehicle use different perceptual and motor skills. As a consequence, a domain-general explanation for interference in this dual task appears most plausible. However, recent evidence from the language processing literature suggests that when people use language with motor content (e.g., language about actions) or visual content (e.g., language about visible objects and events), they engage their motor and perceptual systems in ways specifically reflecting the actions and percepts that the language is about. This raises the possibility that language might interfere with driving for domain-specific reasons when the language has visual or motor content. To test this, we had participants drive a simulated vehicle while simultaneously answering true-false statements that had motor, visual, or abstract content. A domain-general explanation for interference would predict greater distraction in each of these three conditions compared with control, while a domain-specific explanation would predict greater interference in the motor and visual conditions. Both of these predictions were borne out but on different measures of distraction, suggesting that language-driven distraction during driving and dual tasks involving language in general may be the result not only of domain-general causes but also specific interference caused by linguistic content.


Memory & Cognition | 2010

Body part representations in verbal semantics

Benjamin K. Bergen; Ting-Ting Chan Lau; Shweta Narayan; Diana Stojanovic; Kathryn Wheeler

Embodied theories of language propose that word meaning is inextricably tied to—grounded in—mental representations of perceptual, motor, and affective experiences of the world. The four experiments described in this article demonstrate that accessing the meanings of action verbs like smile, punch, and kick requires language understanders to activate modality-specific cognitive representations responsible for performing and perceiving those same actions. The main task used is a word-image matching task, where participants see an action verb and an image depicting an action. Their task is to decide as quickly as possible whether the verb and the image depict the same action. Of critical interest is participants’ behavior when the verb and image do not match, in which case the two actions can use the same effector or different effectors. In Experiment 1, we found that participants took significantly longer to reject a verb-image pair when the actions depicted by the image and denoted by the verb used the same effector than when they used different effectors. Experiment 2 yielded the same result when the order of presentation was reversed, replicating the effect in Cantonese. Experiment 3 replicated the effect in English with a verb-verb near-synonym task, and in Experiment 4, we once again replicated the effect with learners of English as a second language. This robust interference effect, whereby a shared effector slows discrimination, shows that language understanders activate effector-specific neurocognitive representations during both picture perception and action word understanding.


Language and Cognition | 2012

Language comprehenders represent object distance both visually and auditorily

Bodo Winter; Benjamin K. Bergen

When they process sentences, language comprehenders activate perceptual and motor representations of described scenes. On the “immersed experiencer” account, comprehenders engage motor and perceptual systems to create experiences that someone participating in the described scene would have. We tested two predictions of this view. First, the distance of mentioned objects from the protagonist of a described scene should produce perceptual correlates in mental simulations. And second, mental simulation of perceptual features should be multimodal, like actual perception of such features. In Experiment 1, we found that language about objects at different distances modulated the size of visually simulated objects. In Experiment 2, we found a similar effect for volume in the auditory modality. These experiments lend support to the view that language-driven mental simulation encodes experiencer-specific spatial details. The fact that we obtained similar simulation effects for two different modalities—audition and vision—confirms the multimodal nature of mental simulations during language understanding.

Collaboration


Top co-authors of Benjamin K. Bergen:

Esther Walker, University of California
Rafael Núñez, University of California
Kathryn Wheeler, University of Hawaii at Manoa
Shweta Narayan, University of California
Nancy Chang, University of California
Rose Hendricks, University of California