
Publication


Featured research published by Richard Ramsey.


The Journal of Neuroscience | 2011

The Control of Mimicry by Eye Contact Is Mediated by Medial Prefrontal Cortex

Yin Wang; Richard Ramsey; Antonia F. de C. Hamilton

Spontaneous mimicry of other people's actions serves an important social function, enhancing affiliation and social interaction. This mimicry can be subtly modulated by different social contexts. We recently found behavioral evidence that direct eye gaze rapidly and specifically enhances mimicry of intransitive hand movements (Wang et al., 2011). Based on past findings linking medial prefrontal cortex (mPFC) to both eye contact and the control of mimicry, we hypothesized that mPFC might be the neural origin of this behavioral effect. The present study aimed to test this hypothesis. During functional magnetic resonance imaging (fMRI) scanning, 20 human participants performed a simple mimicry or no-mimicry task, as previously described (Wang et al., 2011), with direct gaze present on half of the trials. As predicted, fMRI results showed that performing the task activated mirror systems, while direct gaze and inhibition of the natural tendency to mimic both engaged mPFC. Critically, we found an interaction between mimicry and eye contact in mPFC, superior temporal sulcus (STS) and inferior frontal gyrus. We then used dynamic causal modeling to contrast 12 possible models of information processing in this network. Results supported a model in which eye contact controls mimicry by modulating the connection strength from mPFC to STS. This suggests that mPFC is the originator of the gaze–mimicry interaction and that it modulates sensory input to the mirror system. Thus, our results demonstrate how different components of the social brain work together to control mimicry online according to the social context.


Journal of Cognitive Neuroscience | 2013

Seeing it my way or your way: Frontoparietal brain areas sustain viewpoint-independent perspective selection processes

Richard Ramsey; Peter C. Hansen; Ian A. Apperly; Dana Samson

A hallmark of human social interaction is the ability to consider other people's mental states, such as what they see, believe, or desire. Prior neuroimaging research has predominantly investigated the neural mechanisms involved in computing one's own or another person's perspective and largely ignored the question of perspective selection. That is, which brain regions are engaged in the process of selecting between self and other perspectives? To address this question, the current fMRI study used a behavioral paradigm that required participants to select between competing visual perspectives. We provide two main extensions to current knowledge. First, we demonstrate that brain regions within dorsolateral prefrontal and parietal cortices respond in a viewpoint-independent manner during the selection of task-relevant over task-irrelevant perspectives. More specifically, following the computation of two competing visual perspectives, common regions of frontoparietal cortex are engaged to select one's own viewpoint over another's as well as to select another's viewpoint over one's own. Second, in the absence of conflict between the content of competing perspectives, we show a reduced engagement of frontoparietal cortex when judging another's visual perspective relative to one's own. This latter finding provides the first brain-based evidence for the hypothesis that, in some situations, another person's perspective is automatically and effortlessly computed, and thus, less cognitive control is required to select it over one's own perspective. In doing so, we provide stronger evidence for the claim that we not only automatically compute what other people see but also, in some cases, we compute this even before we are explicitly aware of our own perspective.


NeuroImage | 2010

Understanding actors and object-goals in the human brain.

Richard Ramsey; Antonia F. de C. Hamilton

When another person takes 10 pounds from your hand, it matters if they are a shopkeeper or a robber. That is, the meaning of a simple, goal-directed action can vary depending on the identity of the actors involved. Research examining action understanding has identified an action observation network (AON) that encodes action features such as goals and kinematics. However, it is not yet known how or where the brain links actor identity to action goal. In the present paper, we used a repetition suppression paradigm during functional magnetic resonance imaging (fMRI) to examine the neural representation of actor identity within the context of object-directed actions. Participants watched video clips of two different actors with two different object-goals. Repeated presentation of the same actor suppressed the blood oxygen level-dependent (BOLD) response in fusiform gyrus and occipitotemporal cortex. In contrast, repeated presentation of an action with the same object-goal suppressed the BOLD response throughout the AON. Our data reveal an extended brain network for understanding other people and their everyday actions that goes beyond the traditional action observation network.


International Journal of Sport and Exercise Psychology | 2008

Exploring a modified conceptualization of imagery direction and golf putting performance

Richard Ramsey; Jennifer Cumming; Martin Edwards

This study investigated a modified conceptualization of imagery direction and its subsequent effects on golf putting performance. A progression in the directional imagery literature was made by eliminating the need for participants to intentionally create persuasively harmful images, as they rarely occur, if at all, in the sporting domain. Thus, we explored a more ecologically valid conceptualization of debilitative imagery and measured the effects on sports performance (golf putting). Seventy-five participants were randomly allocated to one of three conditions: (a) facilitative imagery, (b) suppressive imagery (debilitative), or (c) no-imagery control. After performing imagery, the facilitative imagery group successfully putted significantly more golf balls than the suppressive imagery group. This finding suggests that a non-persuasive conceptualization of debilitative imagery can result in disparate effects on performance compared to facilitative imagery. In doing so, this adds ecological strength to the imagery direction literature by suggesting debilitative imagery need not be persuasive to influence motor skill performance.


Brain and Cognition | 2010

Incongruent imagery interferes with action initiation

Richard Ramsey; Jennifer Cumming; Daniel Eastough; Martin Edwards

It has been suggested that representing an action through observation and imagery share neural processes with action execution. In support of this view, motor-priming research has shown that observing an action can influence action initiation. However, there is little motor-priming research showing that imagining an action can modulate action initiation. The current study examined whether action imagery could prime subsequent execution of a reach and grasp action. Across two motion analysis tracking experiments, 40 participants grasped an object following congruent or incongruent action imagery. In Experiment 1, movement initiation was faster following congruent compared to incongruent imagery, demonstrating that imagery can prime the initiation of grasping. In Experiment 2, incongruent imagery resulted in slower movement initiation compared to a no-imagery control. These data show that imagining a different action to that which is performed can interfere with action production. We propose that the most likely neural correlates of this interference effect are brain regions that code imagined and executed actions. Further, we outline a plausible mechanistic account of how priming in these brain regions through imagery could play a role in action cognition.


Journal of Cognitive Neuroscience | 2014

The control of automatic imitation based on bottom-up and top-down cues to animacy: Insights from brain and behavior

André Klapper; Richard Ramsey; Daniël H. J. Wigboldus; Emily S. Cross

Humans automatically imitate other people's actions during social interactions, building rapport and social closeness in the process. Although the behavioral consequences and neural correlates of imitation have been studied extensively, little is known about the neural mechanisms that control imitative tendencies. For example, the degree to which an agent is perceived as human-like influences automatic imitation, but it is not known how perception of animacy influences brain circuits that control imitation. In the current fMRI study, we examined how the perception and belief of animacy influence the control of automatic imitation. Using an imitation–inhibition paradigm that involves suppressing the tendency to imitate an observed action, we manipulated both bottom-up (visual input) and top-down (belief) cues to animacy. Results show divergent patterns of behavioral and neural responses. Behavioral analyses show that automatic imitation is equivalent when one or both cues to animacy are present but is reduced when both are absent. By contrast, right TPJ showed sensitivity to the presence of both animacy cues. Thus, we demonstrate that right TPJ is biologically tuned to control imitative tendencies when the observed agent both looks like and is believed to be human. The results suggest that right TPJ may be involved in a specialized capacity to control automatic imitation of human agents, rather than a universal process of conflict management, which would be more consistent with generalist theories of imitative control. Evidence for specialized neural circuitry that "controls" imitation offers new insight into developmental disorders that involve atypical processing of social information, such as autism spectrum disorders.


Social Neuroscience | 2013

Brain systems for visual perspective taking and action perception

Elisabetta Mazzarella; Richard Ramsey; Massimiliano Conson; Antonia F. de C. Hamilton

Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left–right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table, and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene, and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.


Philosophical Transactions of the Royal Society B | 2016

The shaping of social perception by stimulus and knowledge cues to human animacy.

Emily S. Cross; Richard Ramsey; Roman Liepelt; Wolfgang Prinz; Antonia F. de C. Hamilton

Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature demonstrating that cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self–other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design.


Psychological Research / Psychologische Forschung | 2012

Predicting others' actions via grasp and gaze: evidence for distinct brain networks.

Richard Ramsey; Emily S. Cross; Antonia F. de C. Hamilton

During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people’s actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze- and grasp-perception rely on common or distinct brain networks. Using a ‘peeping window’ protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person’s eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.


Journal of Cognitive Neuroscience | 2011

Eye can see what you want: Posterior intraparietal sulcus encodes the object of an actor's gaze

Richard Ramsey; Emily S. Cross; Antonia F. de C. Hamilton

In a social setting, seeing Sally look at a clock means something different to seeing her gaze longingly at a slice of chocolate cake. In both cases, her eyes and face might be turned rightward, but the information conveyed is markedly different, depending on the object of her gaze. Numerous studies have examined brain systems underlying the perception of gaze direction, but less is known about the neural basis of perceiving gaze shifts to specific objects. During fMRI, participants observed an actor look toward one of two objects, each occupying a distinct location. Video stimuli were sequenced to obtain repetition suppression (RS) for object identity, independent of spatial location. In a control condition, a spotlight highlighted one of the objects, but no actor was present. Observation of the human actor's gaze compared with the spotlight engaged frontal, parietal, and temporal cortices, consistent with a broad action observation network. RS for the gazed-at object in the human condition was found in posterior intraparietal sulcus (pIPS). RS for the highlighted object in the spotlight condition was found in middle occipital, inferior temporal, medial fusiform gyri, and superior parietal lobule. These results suggest that human pIPS is specifically sensitive to the type of object that an observed actor looks at (tool vs. food), irrespective of the observed actor's gaze location (left vs. right). A general attention or lower-level object feature processing mechanism cannot account for the findings, because a very different response pattern was seen in the spotlight control condition. Our results suggest that, in addition to spatial orienting, human pIPS has an important role in object-centered social orienting.

Collaboration


Dive into Richard Ramsey's collaborations.


Jennifer Cumming

University of Western Ontario


Martin Edwards

Université catholique de Louvain
