

Publication


Featured research published by Christian Klaes.


Science | 2015

Decoding Motor Imagery from the Posterior Parietal Cortex of a Tetraplegic Human

Tyson Aflalo; Spencer Kellis; Christian Klaes; Brian Lee; Ying Shi; Kelsie Pejsa; Kathleen Shanfield; Stephanie Hayes-Jackson; Mindy Aisen; Christi N. Heck; Charles Y. Liu; Richard A. Andersen

Brain imagination to control external devices: Studies in monkeys have implicated the brain's posterior parietal cortex in high-level coding of planned and imagined actions. Aflalo et al. implanted two microelectrode arrays in the posterior parietal cortex of a tetraplegic patient (see the Perspective by Pruszynski and Diedrichsen). They asked the patient to imagine various types of limb or eye movements. As predicted, motor imagery involved the same types of neural population activity involved in actual movements, which could potentially be exploited in prosthetic limb control. Science, this issue p. 906; see also p. 860.

Neurons in the human posterior parietal cortex encode high-level aspects of imagined movements. [Also see Perspective by Pruszynski and Diedrichsen] Nonhuman primate and human studies have suggested that populations of neurons in the posterior parietal cortex (PPC) may represent high-level aspects of action planning that can be used to control external devices as part of a brain-machine interface. However, there is no direct neuron-recording evidence that human PPC is involved in action planning, and the suitability of these signals for neuroprosthetic control has not been tested. We recorded neural population activity with arrays of microelectrodes implanted in the PPC of a tetraplegic subject. Motor imagery could be decoded from these neural populations, including imagined goals, trajectories, and types of movement. These findings indicate that the PPC of humans represents high-level, cognitive aspects of action and that the PPC can be a rich source for cognitive control signals for neural prosthetics that assist paralyzed patients.
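The core idea of decoding movement type from neural population activity can be sketched with a toy nearest-centroid classifier on synthetic firing rates. Everything here (neuron count, trial counts, the simulated tuning profiles) is an illustrative assumption, not data or methods from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 imagined movement types, 50 recorded neurons.
# Each movement type evokes a characteristic mean firing-rate pattern
# across the population (synthetic, for illustration only).
n_classes, n_neurons, n_trials = 3, 50, 60
tuning = rng.uniform(2.0, 20.0, size=(n_classes, n_neurons))  # mean rates, Hz

def simulate_trial(cls):
    # Poisson-like trial-to-trial variability around the class profile
    return rng.poisson(tuning[cls]).astype(float)

X = np.array([simulate_trial(c) for c in range(n_classes) for _ in range(n_trials)])
y = np.repeat(np.arange(n_classes), n_trials)

# Split each class's trials in half: fit centroids on the first half,
# decode the second half.
train = np.tile(np.r_[np.ones(30, dtype=bool), np.zeros(30, dtype=bool)], n_classes)
centroids = np.array([X[train & (y == c)].mean(axis=0) for c in range(n_classes)])

def decode(x):
    # Assign the trial to the class whose mean population pattern is closest
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([decode(x) for x in X[~train]])
accuracy = (preds == y[~train]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated tuning profiles, even this minimal decoder performs far above the 1/3 chance level; real population decoding typically uses richer models, but the principle (trial vector compared against class-conditional population statistics) is the same.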


The Journal of Neuroscience | 2009

Implementation of spatial transformation rules for goal-directed reaching via gain modulation in monkey parietal and premotor cortex.

Alexander Gail; Christian Klaes; Stephanie Westendorff

Planning goal-directed movements requires the combination of visuospatial with abstract contextual information. Our sensory environment constrains possible movements to a certain extent. However, contextual information guides proper choice of action in a given situation and allows flexible mapping of sensory instruction cues onto different motor actions. We used anti-reach tasks to test the hypothesis that spatial motor-goal representations in cortical sensorimotor areas are gain modulated by the behavioral context to achieve flexible remapping of spatial cue information onto arbitrary motor goals. We found that gain modulation of neuronal reach goal representations is commonly induced by the behavioral context in individual neurons of both the parietal reach region (PRR) and the dorsal premotor cortex (PMd). In addition, PRR showed stronger directional selectivity during the planning of a reach toward a directly cued goal (pro-reach) compared with an inferred target (anti-reach). PMd, however, showed stronger overall activity during reaches toward inferred targets compared with directly cued targets. Based on our experimental evidence, we suggest that gain modulation is the computational mechanism underlying the integration of spatial and contextual information for flexible, rule-driven stimulus–response mapping, and thereby forms an important basis of goal-directed behavior. Complementary contextual effects in PRR versus PMd are consistent with the idea that posterior parietal cortex preferentially represents sensory-driven, "automatic" motor goals, whereas frontal sensorimotor areas are more strongly engaged in the representation of rule-based, "inferred" motor goals.


Current Biology | 2014

Toward More Versatile and Intuitive Cortical Brain–Machine Interfaces

Richard A. Andersen; Spencer Kellis; Christian Klaes; Tyson Aflalo

Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics.


The Journal of Neuroscience | 2015

Hand Shape Representations in the Human Posterior Parietal Cortex

Christian Klaes; Spencer Kellis; Tyson Aflalo; Brian Lee; Kelsie Pejsa; Kathleen Shanfield; Stephanie Hayes-Jackson; Mindy Aisen; Christi N. Heck; Charles Y. Liu; Richard A. Andersen

Humans shape their hands to grasp, manipulate objects, and to communicate. From nonhuman primate studies, we know that visual and motor properties for grasps can be derived from cells in the posterior parietal cortex (PPC). Are non-grasp-related hand shapes in humans represented similarly? Here we show for the first time how single neurons in the PPC of humans are selective for particular imagined hand shapes independent of graspable objects. We find that motor imagery to shape the hand can be successfully decoded from the PPC by implementing a version of the popular Rock-Paper-Scissors game and its extension Rock-Paper-Scissors-Lizard-Spock. By simultaneous presentation of visual and auditory cues, we can discriminate motor imagery from visual information and show differences in auditory and visual information processing in the PPC. These results also demonstrate that neural signals from human PPC can be used to drive a dexterous cortical neuroprosthesis.

Significance Statement: This study shows for the first time hand-shape decoding from human PPC. Unlike nonhuman primate studies in which the visual stimuli are the objects to be grasped, the visually cued hand shapes that we use are independent of the stimuli. Furthermore, we can show that distinct neuronal populations are activated for the visual cue and the imagined hand shape. Additionally, we found that auditory and visual stimuli that cue the same hand shape are processed differently in PPC. Early on in a trial, only the visual stimuli and not the auditory stimuli can be decoded. During the later stages of a trial, the motor imagery for a particular hand shape can be decoded for both modalities.


PLOS Computational Biology | 2012

Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making

Christian Klaes; Sebastian Schneegans; Gregor Schöner; Alexander Gail

According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations.
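The reward-driven Hebbian principle described in this abstract can be sketched in a few lines. This is a deliberately minimal caricature, not the paper's dynamic neural field model: the cue-action mapping, reward values, learning rate, and softmax selection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reward-driven Hebbian learner: acquire an arbitrary cue -> action
# mapping purely from reward feedback (hypothetical parameters throughout).
n_cues, n_actions = 4, 4
W = np.zeros((n_cues, n_actions))   # cue-to-action association weights
target = np.array([2, 0, 3, 1])     # the rewarded mapping to be learned
eta, temperature = 0.5, 0.5

def choose(cue):
    # Soft, competitive action selection over the weighted inputs
    logits = W[cue] / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

for _ in range(2000):
    cue = rng.integers(n_cues)
    action = choose(cue)
    reward = 1.0 if action == target[cue] else -0.25
    # Hebbian update gated by reward: strengthen (cue, action) pairs
    # that paid off, weaken those that did not.
    W[cue, action] += eta * reward

learned = W.argmax(axis=1)
print(learned)  # should match `target` after training
```

Because the update is gated by reward rather than hard-wired, changing the reward contingencies mid-run (e.g., permuting `target`) lets the same rule relearn the new mapping, which is the adaptive behavior the model is built to capture.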


Systems, Man and Cybernetics | 2014

A collaborative BCI approach to autonomous control of a prosthetic limb system

Kapil D. Katyal; Matthew S. Johannes; Spencer Kellis; Tyson Aflalo; Christian Klaes; Timothy G. McGee; Matthew P. Para; Ying Shi; Brian Lee; Kelsie Pejsa; Charles Y. Liu; Brock A. Wester; Francesco Tenore; James D. Beaty; Alan D. Ravitz; Richard A. Andersen; Michael P. McLoughlin

Existing brain-computer interface (BCI) control of highly dexterous robotic manipulators and prosthetic devices typically relies solely on neural decode algorithms to determine the user's intended motion. Although these approaches have made significant progress in the ability to control high-degree-of-freedom (DOF) manipulators, the ability to perform activities of daily living (ADL) is still an ongoing research endeavor. In this paper, we describe a hybrid system that combines elements of autonomous robotic manipulation with neural decode algorithms to maneuver a highly dexterous robotic manipulator for a reach-and-grasp task. This system was demonstrated using a human patient with cortical micro-electrode arrays, allowing the user to manipulate an object on a table and place it at a desired location. The preliminary results for this system are promising in that it demonstrates the potential to blend robotic control of lower-level manipulation tasks with neural control that allows the user to focus on higher-level tasks, thereby reducing the cognitive load and increasing the success rate of performing ADL-type activities.


Frontiers in Systems Neuroscience | 2018

Engineering Artificial Somatosensation Through Cortical Stimulation in Humans

Brian Lee; Daniel R. Kramer; Michelle Armenta Salas; Spencer Kellis; David R. Brown; Tatyana Dobreva; Christian Klaes; Christi N. Heck; Charles Y. Liu; Richard A. Andersen

Sensory feedback is a critical aspect of motor control rehabilitation following paralysis or amputation. Current human studies have demonstrated the ability to deliver some of this sensory information via brain-machine interfaces, although further testing is needed to understand the effects of stimulation parameters on sensation. Here, we report a systematic evaluation of somatosensory restoration in humans, using cortical stimulation with subdural mini-electrocorticography (mini-ECoG) grids. Nine epilepsy patients undergoing implantation of cortical electrodes for seizure localization were also implanted with a subdural 64-channel mini-ECoG grid over the hand area of the primary somatosensory cortex (S1). We mapped the somatotopic location and size of receptive fields evoked by stimulation of individual channels of the mini-ECoG grid. We determined the effects on perception by varying the stimulus parameters of pulse width, current amplitude, and frequency. Finally, a target localization task was used to demonstrate the use of artificial sensation in a behavioral task. We found a replicable somatotopic representation of the hand on the mini-ECoG grid across most subjects during electrical stimulation. The stimulus-evoked sensations were usually of artificial quality, but in some cases were more natural and of a cutaneous or proprioceptive nature. Increases in pulse width, current strength, and frequency generally produced similar-quality sensations at the same somatotopic location, but with a perception of increased intensity. The subjects produced near-perfect performance when using the evoked sensory information in target acquisition tasks. These findings indicate that electrical stimulation of somatosensory cortex through mini-ECoG grids has considerable potential for restoring useful sensation to patients with paralysis and amputation.


BMC Neuroscience | 2011

A neural field model of decision making in the posterior parietal cortex

Christian Klaes; Sebastian Schneegans; Gregor Schöner; Alexander Gail

The process of decision making often involves incomplete information about the outcome of the decision. In order to plan goal-directed reaching, it is necessary to combine sensory information about goal positions with information about the current behavioral context to select an appropriate action. A central role in this process is attributed to the posterior parietal cortex (PPC), which has been associated with value-based selection of action and perceptual decision making. As an underlying mechanism, it has been proposed that the selection and specification of possible actions are not two distinct, sequential operations, but that instead the decision for an action emerges from the competition between different movement plans [1]. Here, we present a neural field model [2] to describe the dynamics of action selection in the PPC, developed in parallel with an electrophysiological study in monkeys [3]. The task required rule-based spatial remapping of a motor goal, which was indicated by a spatial cue, depending on a contextual cue. The model can learn the context-dependent remapping task via an implemented Hebbian-style learning rule. It is trained from a prestructured initial state (with default cue-response mapping behavior), using a training procedure that emulates the training procedure of the monkeys. The trained model developed activity patterns and neuronal tunings consistent with the empirical data. We then examined how actions are planned in the absence of an explicit rule, i.e., with no contextual cue. In this case the model showed a decision bias towards one goal (Fig. 1A) or an equal representation of both potential goals (Fig. 1B), depending on the input statistics during training. The model remained susceptible to later experience and changes of the reward schedule. This matches the observations in monkeys performing the same task and it provides an account for the


Neuron | 2011

Choosing goals, not rules: deciding among rule-based action plans.

Christian Klaes; Stephanie Westendorff; Shubhodeep Chakrabarti; Alexander Gail


The Journal of Neuroscience | 2010

The Cortical Timeline for Deciding on Reach Motor Goals

Stephanie Westendorff; Christian Klaes; Alexander Gail

Collaboration


Dive into Christian Klaes's collaborations.

Top Co-Authors:

Spencer Kellis - California Institute of Technology
Richard A. Andersen - California Institute of Technology
Brian Lee - University of Southern California
Tyson Aflalo - California Institute of Technology
Christi N. Heck - University of Southern California
Kelsie Pejsa - California Institute of Technology
Ying Shi - California Institute of Technology