Shirley-Ann Rueschemeyer
University of York
Publications
Featured research published by Shirley-Ann Rueschemeyer.
Journal of Cognitive Neuroscience | 2010
Shirley-Ann Rueschemeyer; Daan van Rooij; Oliver Lindemann; Roel M. Willems; Harold Bekkering
Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be more fine-grained distinctions between objects at the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (i.e., volumetrically manipulable objects such as bookend or clock) and (2) objects that must be picked up to use (i.e., functionally manipulable objects such as cup or pen). The results show that functionally manipulable words elicit greater levels of activation in fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that a distinction can indeed be made between different types of manipulable objects. Specifically, how an object is used functionally, rather than whether an object can be displaced by hand, is reflected in semantic representations in the brain.
NeuroImage | 2010
Henning Holle; Jonas Obleser; Shirley-Ann Rueschemeyer; Thomas C. Gunter
Iconic gestures are spontaneous hand movements that illustrate certain contents of speech and, as such, are an important part of face-to-face communication. This experiment targets the brain bases of how iconic gestures and speech are integrated during comprehension. Areas of integration were identified on the basis of two classic properties of multimodal integration, bimodal enhancement and inverse effectiveness (i.e., greater enhancement for unimodally least effective stimuli). Participants underwent fMRI while being presented with videos of gesture-supported sentences as well as their unimodal components, which allowed us to identify areas showing bimodal enhancement. Additionally, we manipulated the signal-to-noise ratio of speech (either moderate or good) to probe for integration areas exhibiting the inverse effectiveness property. Bimodal enhancement was found at the posterior end of the superior temporal sulcus and adjacent superior temporal gyrus (pSTS/STG) in both hemispheres, indicating that the integration of iconic gestures and speech takes place in these areas. Furthermore, we found that the left pSTS/STG specifically showed a pattern of inverse effectiveness, i.e., the neural enhancement for bimodal stimulation was greater under adverse listening conditions. This indicates that activity in this area is boosted when an iconic gesture accompanies an utterance that is otherwise difficult to comprehend. The neural response paralleled the observed behavioral data. The present data extend results from previous gesture-speech integration studies by showing that pSTS/STG plays a key role in the facilitation of speech comprehension through simultaneous gestural input.
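For readers unfamiliar with these two criteria, the sketch below shows how bimodal enhancement and inverse effectiveness could be checked against condition means for a single region; the function name and response values are hypothetical and are not the study's data.

```python
# Minimal sketch of the two integration criteria described above, using
# hypothetical mean BOLD responses for one region of interest; values are
# illustrative, not the study's data.

def enhancement(bimodal: float, speech_only: float, gesture_only: float) -> float:
    """Bimodal gain relative to the most effective unimodal response."""
    return bimodal - max(speech_only, gesture_only)

# Hypothetical condition means under good vs. moderate speech signal-to-noise.
good_snr = {"bimodal": 1.2, "speech_only": 1.0, "gesture_only": 0.6}
moderate_snr = {"bimodal": 1.1, "speech_only": 0.7, "gesture_only": 0.6}

gain_good = enhancement(**good_snr)          # bimodal enhancement if > 0
gain_moderate = enhancement(**moderate_snr)

print(f"gain (good SNR): {gain_good:.2f}")
print(f"gain (moderate SNR): {gain_moderate:.2f}")

# Inverse effectiveness: the bimodal gain is larger when the unimodal
# speech signal is least effective (here, under moderate SNR).
print("inverse effectiveness:", gain_moderate > gain_good)
```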
Human Brain Mapping | 2012
Wessel O. van Dam; Margriet van Dijk; Harold Bekkering; Shirley-Ann Rueschemeyer
According to an embodied view of language comprehension, language concepts are grounded in our perceptual systems. Evidence for the idea that concepts are grounded in areas involved in action and perception comes from both behavioral and neuroimaging studies (Glenberg [1997]: Behav Brain Sci 20:1-55; Barsalou [1999]: Behav Brain Sci 22:577-660; Pulvermueller [1999]: Behav Brain Sci 22:253-336; Barsalou et al. [2003]: Trends Cogn Sci 7:84-91). However, the results from several studies indicate that the activation of information in perception and action areas is not a purely automatic process (Raposo et al. [2009]: Neuropsychologia 47:388-396; Rueschemeyer et al. [2007]: J Cogn Neurosci 19:855-865). These findings suggest that embodied representations are flexible. In these studies, flexibility is characterized by the relative presence or absence of activation in our perceptual systems. However, even if the context in which a word is presented does not undermine a motor interpretation, it is possible that the degree to which a modality-specific region contributes to a representation depends on the context in which conceptual features are retrieved. In the present study, we investigated this issue by presenting word stimuli for which both motor and visual properties (e.g., tennis ball, boxing glove) were important in constituting the concept. Consistent with the idea that language representations are flexible and context dependent, we demonstrate that the degree to which a modality-specific region contributes to a representation changes considerably as a function of context. Hum Brain Mapp 33:2322-2333, 2012.
NeuroImage | 2010
Wessel O. van Dam; Shirley-Ann Rueschemeyer; Harold Bekkering
Embodied accounts of language processing suggest that sensorimotor areas, generally dedicated to perception and action, are also involved in the processing and representation of word meaning. Support for such accounts comes from studies showing that language about actions selectively modulates the execution of congruent and incongruent motor responses (e.g., Glenberg & Kaschak, 2002), and from functional neuroimaging studies showing that understanding action-related language recruits sensorimotor brain areas (e.g., Hauk, Johnsrude, & Pulvermueller, 2004). In the current experiment we explored the basis of the neural motor system's involvement in representing words denoting actions. Specifically, we investigated whether the motor system's involvement is modulated by the specificity of the kinematics associated with a word. Previous research in the visual domain indicates that words denoting basic-level category members lacking a specific form (e.g., bird) are less richly encoded within visual areas than words denoting subordinate-level members (e.g., pelican), for which the visual form is better specified (Gauthier, Anderson, Tarr, Skudlarski, & Gore, 1997). In the present study we extend these findings to the motor domain. Modulation of the BOLD response elicited by verbs denoting a general motor program (e.g., to clean) was compared to modulation elicited by verbs denoting a more specific motor program (e.g., to wipe). Consistent with our hypothesis, a region within the bilateral inferior parietal lobule, typically serving the representation of action plans and goals, was sensitive to the specificity of the motor programs associated with the action verbs. These findings contribute to the growing body of research on embodied language representations by showing that the concreteness of an action-semantic feature is reflected in the neural response to action verbs.
Experimental Psychology | 2010
Shirley-Ann Rueschemeyer; Oliver Lindemann; Daan van Rooij; Wessel O. van Dam; Harold Bekkering
Embodied theories of language processing suggest that motor simulation is an automatic and necessary component of meaning representation. If this is the case, then language and action systems should be mutually dependent (i.e., motor activity should selectively modulate the processing of words with an action-semantic component). In this paper, we investigate in two experiments whether evidence for such mutual dependence can be found using a motor priming paradigm. Specifically, participants performed either an intentional or a passive motor task while processing words denoting manipulable and nonmanipulable objects. The performance rates (Experiment 1) and response latencies (Experiment 2) in a lexical-decision task reveal that participants performing an intentional action were positively affected in the processing of words denoting manipulable objects as compared to nonmanipulable objects. This was not the case if participants performed a secondary passive motor action (Experiment 1) or did not perform a secondary motor task (Experiment 2). The results go beyond previous research showing that language processes involve motor systems by demonstrating that the execution of motor actions has a selective effect on the semantic processing of words. We suggest that intentional actions activate specific parts of the neural motor system that are also engaged during lexical-semantic processing of action-related words, and we discuss the beneficial versus inhibitory nature of this relationship. The results provide new insights into the embodiment of language and the bidirectionality of effects between language and action processing.
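To make the predicted interaction concrete, the sketch below computes the lexical-decision advantage for manipulable-object words separately under an intentional and a passive motor task; all latencies, condition labels, and the `advantage` helper are hypothetical, not the reported results.

```python
import numpy as np

# Hypothetical per-condition lexical-decision latencies (ms); a sketch of how a
# selective motor-priming effect could be summarized, not the paper's data.
rt = {
    ("intentional", "manipulable"): np.array([620, 605, 598, 640]),
    ("intentional", "nonmanipulable"): np.array([655, 660, 642, 671]),
    ("passive", "manipulable"): np.array([650, 662, 639, 668]),
    ("passive", "nonmanipulable"): np.array([648, 659, 644, 665]),
}

def advantage(task: str) -> float:
    """Facilitation for manipulable-object words within one motor task (ms)."""
    return float(rt[(task, "nonmanipulable")].mean() - rt[(task, "manipulable")].mean())

adv_intentional = advantage("intentional")
adv_passive = advantage("passive")

# Mutual dependence predicts an interaction: facilitation under an intentional
# motor task, but little or none under a passive one.
print(f"manipulable-word advantage, intentional task: {adv_intentional:.1f} ms")
print(f"manipulable-word advantage, passive task: {adv_passive:.1f} ms")
print(f"interaction (selective effect): {adv_intentional - adv_passive:.1f} ms")
```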
Journal of Cognitive Neuroscience | 2012
Markus J. van Ackeren; Daniel Casasanto; Harold Bekkering; Peter Hagoort; Shirley-Ann Rueschemeyer
Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and that reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about the mental states of others. The implications of these findings for embodied theories of language are discussed.
Frontiers in Psychology | 2010
Wessel O. van Dam; Shirley-Ann Rueschemeyer; Oliver Lindemann; Harold Bekkering
The embodied view of language comprehension proposes that the meaning of words is grounded in perception and action rather than represented in abstract amodal symbols. Support for embodied theories of language processing comes from behavioral studies showing that understanding a sentence about an action can modulate congruent and incongruent physical responses, suggesting motor involvement during comprehension of sentences referring to bodily movement. Additionally, several neuroimaging studies have provided evidence that comprehending single words denoting manipulable objects elicits specific responses in the neural motor system. An interesting question that remains is whether action-semantic knowledge is directly activated as motor simulations in the brain, or rather modulated by the semantic context in which action words are encountered. In the current paper we investigated the nature of conceptual representations using a go/no-go lexical decision task. Specifically, target words were presented in a semantic context that emphasized either dominant action features (features related to the functional use of an object) or non-dominant action features. The response latencies in the lexical decision task reveal that participants were faster to respond to words denoting objects for which the functional use was congruent with the prepared movement. This facilitation effect, however, was only apparent when the semantic context emphasized corresponding motor properties. These findings suggest that conceptual processing is a context-dependent process that incorporates motor-related knowledge in a flexible manner.
Neuropsychologia | 2010
Shirley-Ann Rueschemeyer; Christian Pfeiffer; Harold Bekkering
Words denoting manipulable objects activate sensorimotor brain areas, likely reflecting action experience with the denoted objects. In particular, these sensorimotor lexical representations have been found to reflect the way in which an object is used. In the current paper we present data from two experiments (one behavioral and one neuroimaging) in which we investigate whether body schema information, putatively necessary for interacting with functional objects, is also recruited during lexical processing. To this end, we presented participants with words denoting objects that are typically brought towards or away from the body (e.g., cup or key, respectively). We hypothesized that objects typically brought to a location on the body (e.g., cup) are relatively more reliant on body schema representations, since the final goal location of the cup (i.e., the mouth) is represented primarily through posture and body co-ordinates. In contrast, objects typically brought to a location away from the body (e.g., key) are relatively more dependent on visuo-spatial representations, since the final goal location of the key (i.e., a keyhole) is perceived visually. The behavioral study showed that prior planning of a movement along an axis towards or away from the body facilitates processing of words with a congruent action-semantic feature (e.g., preparation of a movement towards the body facilitates processing of cup). In an fMRI study we showed that words denoting objects brought towards the body engage the resources of brain areas involved in processing information about human bodies (i.e., the extrastriate body area, middle occipital gyrus, and inferior parietal lobe) relatively more than words denoting objects typically brought away from the body. The results provide converging evidence that body schema information is implicitly activated in processing lexical information.
Neuropsychologia | 2014
Sophie De Grauwe; Roel M. Willems; Shirley-Ann Rueschemeyer; Kristin Lemhöfer; Herbert Schriefers
The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
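As a rough illustration of the kind of region-of-interest comparison described here, the sketch below tests a motor vs. non-motor verb effect per group, assuming per-subject mean parameter estimates have already been extracted from a motor/somatosensory ROI; all arrays, group sizes, and values are hypothetical.

```python
import numpy as np
from scipy import stats

# Sketch of a region-of-interest comparison like the one described above,
# assuming per-subject mean parameter estimates (betas) have already been
# extracted from a motor/somatosensory ROI. All values are simulated.
rng = np.random.default_rng(0)

def motor_effect(betas_motor: np.ndarray, betas_nonmotor: np.ndarray):
    """Paired test of motor vs. non-motor verb activation within one group."""
    mean_diff = float((betas_motor - betas_nonmotor).mean())
    result = stats.ttest_rel(betas_motor, betas_nonmotor)
    return mean_diff, result.statistic, result.pvalue

# Hypothetical betas for 20 L2 (German learners of Dutch) and 20 L1 subjects.
groups = {
    "L2": (rng.normal(0.55, 0.3, 20), rng.normal(0.35, 0.3, 20)),
    "L1": (rng.normal(0.60, 0.3, 20), rng.normal(0.35, 0.3, 20)),
}

for group, (motor, nonmotor) in groups.items():
    mean_diff, t, p = motor_effect(motor, nonmotor)
    print(f"{group}: motor - non-motor = {mean_diff:.2f}, t = {t:.2f}, p = {p:.3f}")
```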
Brain and Language | 2015
James Davey; Shirley-Ann Rueschemeyer; Alison Costigan; Nik Murphy; Katya Krieger-Redwood; Glyn Hallam; Elizabeth Jefferies
Highlights
• Overlap between semantic control and action understanding revealed with fMRI.
• Overlap found in left inferior frontal and posterior middle temporal cortex.
• Peaks for action and difficulty were spatially identical in LIFG.
• Peaks for action and difficulty were distinct in occipital–temporal cortex.
• Difficult trials recruited additional ventral occipital–temporal areas.