
Publications


Featured research published by Gowrishankar Ganesh.


Current Biology | 2016

Dissociable Learning Processes Underlie Human Pain Conditioning

Suyi Zhang; Hiroaki Mano; Gowrishankar Ganesh; Trevor W. Robbins; Benjamin John Seymour

Pavlovian conditioning underlies many aspects of pain behavior, including fear and threat detection [1], escape and avoidance learning [2], and endogenous analgesia [3]. Although a central role for the amygdala is well established [4], both human and animal studies implicate other brain regions in learning, notably the ventral striatum and cerebellum [5]. It remains unclear whether these regions make different contributions to a single aversive learning process or represent independent learning mechanisms that interact to generate the expression of pain-related behavior. We designed a human parallel aversive conditioning paradigm in which different Pavlovian visual cues probabilistically predicted thermal pain primarily to either the left or right arm, and studied the acquisition of conditioned Pavlovian responses using combined physiological recordings and fMRI. Using computational modeling based on reinforcement learning theory, we found that conditioning involves two distinct types of learning process. First, a non-specific “preparatory” system learns aversive facial expressions and autonomic responses such as skin conductance. The associated learning signals (the learned associability and prediction error) were correlated with fMRI brain responses in amygdala-striatal regions, corresponding to the classic aversive (fear) learning circuit. Second, a specific lateralized system learns “consummatory” limb-withdrawal responses, detectable with electromyography of the arm to which pain is predicted. Its related learned associability was correlated with responses in ipsilateral cerebellar cortex, suggesting a novel computational role for the cerebellum in pain. In conclusion, our results show that the overall phenotype of conditioned pain behavior depends on two dissociable reinforcement learning circuits.
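The abstract names learned associability and prediction error as the model's trial-by-trial learning signals; this pairing is characteristic of Pearce-Hall-style hybrid reinforcement learning models. As a rough illustration only (the paper's actual equations are not reproduced here, so the function name and parameters eta, kappa, and alpha0 below are assumptions, not the authors' code):

```python
import numpy as np

def pearce_hall(outcomes, eta=0.3, kappa=0.5, alpha0=1.0):
    """Hypothetical Pearce-Hall-style learner: the cue value V is
    updated by a prediction error delta, with a learning rate gated
    by the associability alpha, which in turn tracks the recent
    magnitude of |delta| (i.e., how surprising outcomes have been)."""
    n = len(outcomes)
    V = np.zeros(n + 1)        # learned value of the cue
    alpha = np.zeros(n + 1)    # associability (dynamic learning rate)
    alpha[0] = alpha0
    delta = np.zeros(n)        # prediction errors
    for t in range(n):
        delta[t] = outcomes[t] - V[t]                  # prediction error
        V[t + 1] = V[t] + kappa * alpha[t] * delta[t]  # value update
        alpha[t + 1] = (1 - eta) * alpha[t] + eta * abs(delta[t])
    return V[:n], alpha[:n], delta

# Example: a visual cue followed by thermal pain on 75% of trials.
rng = np.random.default_rng(0)
outcomes = (rng.random(100) < 0.75).astype(float)
V, alpha, delta = pearce_hall(outcomes)
```

In model-based fMRI analyses of this kind, the trial-by-trial alpha and delta series would be entered as regressors against brain responses; the sketch above only shows how the two signals are generated.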


Scientific Reports | 2015

Watching novice action degrades expert motor performance: Causation between action production and outcome prediction of observed actions by humans

Tsuyoshi Ikegami; Gowrishankar Ganesh

Our social skills are critically determined by our ability to understand and appropriately respond to actions performed by others. However, despite its obvious importance, the mechanisms enabling action understanding in humans have remained largely unclear. A popular but controversial belief is that parts of the motor system contribute to our ability to understand observed actions. Here, using a novel behavioral paradigm, we investigated this belief by examining a causal relation between action production and a component of action understanding: outcome prediction, the ability of a person to predict the outcome of observed actions. We asked dart experts to watch novice dart throwers and predict the outcome of their throws. By modulating the feedback provided to the experts, we caused a specific improvement in their ability to predict the watched actions while controlling for other experimental factors, and showed that this change (improvement) in their outcome prediction ability results in a progressive and proportional deterioration of the experts' own darts performance. This causal relationship supports the involvement of the motor system in humans' outcome prediction of actions observed in others.


Neuroscience Research | 2016

The role of functionality in the body model for self-attribution.

Laura Aymerich-Franch; Gowrishankar Ganesh

Bodily self-attribution, the feeling that a body (or parts of it) is owned by me, is a fundamental component of one's self. Previous studies have suggested that, in addition to the necessary multi-sensory stimulation, the sense of body ownership is determined by the body model, a representation of our body in the brain. It is, however, unclear what features constitute the body representation. To examine this issue, we first briefly review results on embodiment of artificial limbs, whole bodies, and virtual avatars to understand the apparent anatomical, volumetric, and spatial constraints associated with the sense of ownership toward external entities. We then discuss how considering limb functionality in the body model can provide an integrated explanation for most of the varied embodiment results in the literature. We propose that the self-attribution of an entity may be determined not just by its physical features, but by whether the entity can afford the actions that the brain has associated with the limb it replaces.


Advanced Robotics and its Social Impacts | 2015

Embodiment of a humanoid robot is preserved during partial and delayed control

Laura Aymerich-Franch; Damien Petit; Gowrishankar Ganesh; Abderrahmane Kheddar

Humanoid robot surrogates promise a plethora of new applications in disaster management and human-robot interaction. However, whole-body embodiment for teleoperation or telepresence with mobile robot avatars is yet to be fully explored and understood. In this study we investigated whether partial and delayed control, necessitated by the large number of degrees of freedom of a humanoid system, affects embodiment of a walking humanoid robot surrogate. For this, we asked participants to embody a walking humanoid robot in two conditions: one in which they had no control of its movement, and another in which they could control its direction of walking, but with delays. We utilized an embodiment questionnaire to evaluate the embodiment of the humanoid in each condition. Our results show that first-person visual feedback and congruent visuo-audio feedback are sufficient for embodiment of the moving robot. Interestingly, participants reported a sense of agency even when they did not control the robot, and critically, the sense of agency and embodiment were not affected by the partial and delayed control typical of humanoid robots.


eNeuro | 2017

Shared Mechanisms in the Estimation of Self-Generated Actions and the Prediction of Other’s Actions by Humans

Tsuyoshi Ikegami; Gowrishankar Ganesh

The question of how humans predict the outcomes of observed motor actions by others is a fundamental problem in cognitive and social neuroscience. Previous theoretical studies have suggested that the brain uses parts of the forward model (used to estimate sensory outcomes of self-generated actions) to predict outcomes of observed actions. However, this hypothesis has remained controversial due to the lack of direct experimental evidence. To address this issue, we analyzed the behavior of darts experts in an action-understanding learning paradigm and utilized computational modeling to examine how outcome prediction of observed actions affected the participants' ability to estimate their own actions. We recruited darts experts because sports experts are known to have accurate outcome estimation of their own actions as well as prediction of actions observed in others. We first show that learning to predict the outcomes of observed dart throws deteriorates an expert's ability both to produce his own darts actions and to estimate the outcome of his own throws (self-estimation). Next, we introduce a state-space model to explain the trial-by-trial changes in darts performance and self-estimation over the course of our experiment. The model-based analysis reveals that the change in an expert's self-estimation is explained only by considering a change in the individual's forward model, showing that an improvement in an expert's ability to predict outcomes of observed actions affects the individual's forward model. These results suggest that parts of the same forward model are utilized in humans both to estimate outcomes of self-generated actions and to predict outcomes of observed actions.
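The abstract names a state-space model for trial-by-trial changes but does not spell out its form. For orientation only, a generic trial-by-trial state-space model of the kind common in motor-learning studies looks like the sketch below; the function name and parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def simulate_trial_by_trial(n_trials=60, a=0.95, b=0.3,
                            sigma_x=0.02, sigma_y=0.05, seed=0):
    """Hidden state x (e.g., a forward-model parameter) decays with
    retention factor a and is corrected by the observed trial error;
    the measured behavior y is a noisy readout of x."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials + 1)   # hidden internal state
    y = np.zeros(n_trials)       # observed performance
    x[0] = 1.0                   # initial bias that learning should wash out
    target = 0.0                 # arbitrary target value
    for t in range(n_trials):
        y[t] = x[t] + sigma_y * rng.standard_normal()   # observed behavior
        error = target - y[t]                           # trial error
        x[t + 1] = a * x[t] + b * error + sigma_x * rng.standard_normal()
    return x[:n_trials], y
```

Fitting such a model to the darts data would mean estimating parameters like a and b per participant and asking which hidden-state change best explains the measured performance and self-estimation curves.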


Journal of Computer-Mediated Communication | 2017

Object Touch by a Humanoid Robot Avatar Induces Haptic Sensation in the Real Hand.

Laura Aymerich-Franch; Damien Petit; Gowrishankar Ganesh; Abderrahmane Kheddar

Humanoid robot embodiment is a recently developed form of mediated embodiment. In two studies, we report and quantify a new haptic (touch) illusion experienced during embodiment of a humanoid robot. Around 60% of the users in our studies reported haptic sensations in their real hand when they observed their robot avatar touching a curtain with its hand. Critically, our study shows for the first time that users can experience haptic sensations from a non-anthropomorphic embodied limb/agent with visual feedback alone (i.e., no haptic feedback was provided). The results have important implications for the understanding of the cognitive processes governing mediated embodiment and for the design of avatar scenarios.


International Journal of Social Robotics | 2017

Non-human Looking Robot Arms Induce Illusion of Embodiment

Laura Aymerich-Franch; Damien Petit; Gowrishankar Ganesh; Abderrahmane Kheddar

We examine whether non-human looking humanoid robot arms can be perceived as part of one's own body. In two subsequent experiments, participants experienced high levels of embodiment of a robotic arm that had a blue end effector with no fingers (Experiment 1) and of a robotic arm that ended with a gripper (Experiment 2) when it was stroked synchronously with the real arm. Levels of embodiment were significantly higher than in the corresponding asynchronous condition and similar to those reported for a human-looking arm. Additionally, we found that visuo-movement synchronization also induced embodiment of the robot arm and that embodiment was even partially maintained when the robot hand was covered with a blue plastic cover. We conclude that humans are able to experience a strong sense of embodiment towards non-human looking robot arms. The results have important implications for domains related to robotic embodiment.


Human-Robot Interaction | 2018

Humans Can Predict Where Their Partner Would Make a Handover

Saki Kato; Natsuki Yamanobe; Gentiane Venture; Gowrishankar Ganesh

A good understanding of handovers between humans is critical for the development of robots in the service industry. Here we investigated the extent to which humans estimate their partner's behavior during handovers. We show that, even in the absence of visual feedback, humans modulate their handover location for partners they have just met, and according to their distance from the partner, such that the resulting handover errors are consistently small. Our results suggest that humans can predict each other's preferred handover location.


Science Advances | 2018

Utilizing sensory prediction errors for movement intention decoding: A new methodology

Gowrishankar Ganesh; Keigo Nakamura; Supat Saetia; Alejandra Mejia Tobar; Eiichi Yoshida; Hideyuki Ando; Natsue Yoshimura; Yasuharu Koike

A new high-accuracy movement intention decoder using <100 ms of EEG and requiring no training or cognitive loading of users.

We propose a new methodology for decoding the movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict the sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user's intended movement and decode the user's movement intention from his electroencephalography (EEG) by decoding for prediction errors: whether the sensory prediction corresponding to the user's intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair-turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator, and show that decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users, because the stimulation was subliminal.
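The abstract does not describe the classifier itself, so the following is only a shape-of-the-problem sketch: single-trial binary classification of short (~96 ms) EEG epochs, here with random placeholder data and a shrinkage-regularized LDA, a common baseline for single-trial EEG decoding. Channel counts, sampling rate, and all variable names are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 200 trials, 32 channels, 96 ms at 1 kHz = 96 samples.
n_trials, n_channels, n_samples = 200, 32, 96
rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # EEG epochs
labels = rng.integers(0, 2, n_trials)  # 0 = stimulation matches intention,
                                       # 1 = mismatch (prediction error)

# Flatten each epoch into a feature vector and cross-validate the decoder.
X = epochs.reshape(n_trials, -1)
clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on noise
```

With real epochs time-locked to the subliminal stimulation, the chance-level accuracy on this placeholder data would be replaced by whatever the prediction-error signal supports; the paper reports an 87.2% median single-trial accuracy.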


International Conference on Robotics and Automation | 2018

Towards Emergence of Tool Use in Robots: Automatic Tool Recognition and Use Without Prior Tool Learning

Keng Peng Tee; Jun Li; Lawrence Tai Pang Chen; Kong Wah Wan; Gowrishankar Ganesh

Collaboration


Dive into Gowrishankar Ganesh's collaborations.

Top Co-Authors

Laura Aymerich-Franch
National Institute of Advanced Industrial Science and Technology

Abderrahmane Kheddar
National Institute of Advanced Industrial Science and Technology

Damien Petit
National Institute of Advanced Industrial Science and Technology

Tsuyoshi Ikegami
National Institute of Information and Communications Technology

Alejandra Mejia Tobar
Tokyo Institute of Technology

Eiichi Yoshida
National Institute of Advanced Industrial Science and Technology

Gentiane Venture
Tokyo University of Agriculture and Technology

Hiroaki Mano
National Institute of Information and Communications Technology

Keigo Nakamura
National Institute of Advanced Industrial Science and Technology