Publications


Featured research published by Eun Jung Hwang.


Annual Review of Psychology | 2010

Cognitive neural prosthetics.

Richard A. Andersen; Eun Jung Hwang; Grant H. Mulliken

The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions.


PLOS Biology | 2003

A Gain-Field Encoding of Limb Position and Velocity in the Internal Model of Arm Dynamics

Eun Jung Hwang; Opher Donchin; Maurice A. Smith; Reza Shadmehr

Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands. Theoretical consideration shows that the encoding properties of neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another. To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e., the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally.
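The gain-field encoding described above can be sketched numerically. The following is a minimal illustration, not the paper's fitted model: the function name, cosine tuning form, and all coefficients are assumptions chosen only to show how a multiplicative, monotonic position gain on directional velocity tuning produces growing extrapolation (hypergeneralization) outside the trained workspace.

```python
import numpy as np

def gain_field_basis(vel, pos, pref_dir, pos_slope=0.5, baseline=1.0):
    """Activation of one hypothetical basis element: cosine (directional)
    tuning to limb velocity, multiplicatively and monotonically modulated
    by limb position (the 'gain field'). All parameters are illustrative."""
    speed = np.linalg.norm(vel)
    direction = np.arctan2(vel[1], vel[0])
    directional = speed * np.cos(direction - pref_dir)  # velocity tuning
    gain = baseline + pos_slope * pos[0]                # monotonic in position
    return gain * directional

# Because the gain keeps growing monotonically with position, a force
# pattern learned at one location extrapolates (and grows) beyond the
# trained workspace -- the hypergeneralization prediction.
a = gain_field_basis(np.array([0.3, 0.0]), np.array([0.0, 0.0]), pref_dir=0.0)
b = gain_field_basis(np.array([0.3, 0.0]), np.array([0.4, 0.0]), pref_dir=0.0)
```

The same construction also suggests why nonmonotonic force patterns are harder to learn: no monotonic position gain can reproduce them.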


Journal of Neural Engineering | 2005

Internal Models of Limb Dynamics and the Encoding of Limb State

Eun Jung Hwang; Reza Shadmehr

Studies of reaching suggest that humans adapt to novel arm dynamics by building internal models that transform planned sensory states of the limb, e.g., desired limb position and its derivatives, into motor commands, e.g., joint torques. Earlier work modeled this computation via a population of basis elements and used system identification techniques to estimate the tuning properties of the bases from the patterns of generalization. Here we hypothesized that the neural representation of planned sensory states in the internal model might resemble the signals from the peripheral sensors. These sensors normally encode the limb's actual sensory state, in which movement errors occurred. We developed a set of equations based on properties of muscle spindles that estimated spindle discharge as a function of the limb's state during reaching and drawing of circles. We then implemented a simulation of a two-link arm that learned to move in various force fields using these spindle-like bases. The system produced a pattern of adaptation and generalization that accounted for a wide range of previously reported behavioral results. In particular, the bases showed gain-field interactions between encoding of limb position and velocity, very similar to the gain fields inferred from behavioral studies. The poor sensitivity of the bases to limb acceleration predicted behavioral results that were confirmed by experiment. We suggest that the internal model of limb dynamics is computed by the brain with neurons that encode the state of the limb in a manner similar to that expected of muscle spindle afferents.
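A spindle-like basis of the kind described above can be sketched as follows. This is an illustrative approximation in the spirit of classic spindle models (roughly linear in muscle length, sublinear power-law in stretch velocity); the function name, coefficients, and exponent are assumptions, not the paper's fitted equations.

```python
import numpy as np

def spindle_rate(length, velocity, k_l=2.0, k_v=4.3, exponent=0.6, bias=10.0):
    """Hypothetical spindle-afferent discharge rate: roughly linear in
    muscle length, sublinear (power-law) in stretch velocity, rectified
    at zero. All coefficients are illustrative."""
    vel_term = k_v * np.sign(velocity) * np.abs(velocity) ** exponent
    return max(0.0, bias + k_l * length + vel_term)

# Sublinear velocity scaling and the absence of an acceleration term are
# the two properties that matter here: doubling stretch velocity raises
# the rate by much less than a factor of two, and acceleration is only
# weakly reflected in the discharge.
r1 = spindle_rate(length=1.0, velocity=1.0)
r2 = spindle_rate(length=1.0, velocity=2.0)
```

Bases with this shape naturally exhibit gain-field-like interactions between position and velocity, consistent with the behavioral generalization patterns the abstract describes.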


The Journal of Neuroscience | 2009

Brain Control of Movement Execution Onset Using Local Field Potentials in Posterior Parietal Cortex

Eun Jung Hwang; Richard A. Andersen

The precise control of movement execution onset is essential for safe and autonomous cortical motor prosthetics. A recent study from the parietal reach region (PRR) suggested that the local field potentials (LFPs) in this area might be useful for decoding execution time information because of the striking difference in the LFP spectrum between the plan and execution states (Scherberger et al., 2005). More specifically, the LFP power in the 0–10 Hz band sharply rises while the power in the 20–40 Hz band falls as the state transitions from plan to execution. However, a change of visual stimulus immediately preceded reach onset, raising the possibility that the observed spectral change reflected the visual event instead of the reach onset. Here, we tested this possibility and found that the LFP spectrum change was still time locked to the movement onset in the absence of a visual event in self-paced reaches. Furthermore, we successfully trained the macaque subjects to use the LFP spectrum change as a “go” signal in a closed-loop brain-control task in which the animals only modulated the LFP and did not execute a reach. The execution onset was signaled by the change in the LFP spectrum while the target position of the cursor was controlled by the spike firing rates recorded from the same site. The results corroborate that the LFP spectrum change in PRR is a robust indicator for the movement onset and can be used for control of execution onset in a cortical prosthesis.
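The plan-to-execution signature described above, rising 0-10 Hz power and falling 20-40 Hz power, can be sketched as a simple band-power comparison. This is a minimal illustration under assumed parameters (sampling rate, FFT-based power estimate, a unit threshold); an actual prosthetic decoder would calibrate the threshold per recording site and use a more robust spectral estimator.

```python
import numpy as np

def band_power(lfp, fs, lo, hi):
    """Mean FFT power of `lfp` (1-D array sampled at `fs` Hz) in [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(lfp)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def is_go_state(lfp, fs=1000.0, ratio_threshold=1.0):
    """Flag the plan->execution transition when low-band (0-10 Hz) power
    exceeds high-band (20-40 Hz) power. Threshold is illustrative."""
    return band_power(lfp, fs, 0, 10) > ratio_threshold * band_power(lfp, fs, 20, 40)

# Synthetic check: a 30 Hz oscillation stands in for the plan state,
# a 5 Hz oscillation for the execution (go) state.
t = np.arange(0, 1, 1 / 1000.0)
plan = np.sin(2 * np.pi * 30 * t)
go = np.sin(2 * np.pi * 5 * t)
```

In the closed-loop task of the abstract, a detector of this general form would gate execution onset while spike firing rates from the same site drive the cursor's target position.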


Experimental Brain Research | 2006

Dissociable effects of the implicit and explicit memory systems on learning control of reaching

Eun Jung Hwang; Maurice A. Smith; Reza Shadmehr

Adaptive control of reaching depends on internal models that associate states in which the limb experienced a force perturbation with motor commands that can compensate for it. Limb state can be sensed via both vision and proprioception. However, adaptation of reaching in novel dynamics results in generalization in the intrinsic coordinates of the limb, suggesting that the proprioceptive states in which the limb was perturbed dominate the representation of limb state. To test this hypothesis, we considered a task in which the position of the hand during a reach was correlated with patterns of force perturbation. This correlation could be sensed via vision, proprioception, or both. As predicted, when the correlations could be sensed only via proprioception, learning was significantly better than when they could be sensed only through vision. Learning with visual correlations produced subjects who could verbally describe the patterns of perturbations, but this awareness was never observed in subjects who learned the task with only proprioceptive correlations. We manipulated the relative values of the visual and proprioceptive parameters and found that the probability of becoming aware depended strongly on the correlations that subjects could visually observe. In all conditions, aware subjects demonstrated a small but significant advantage in their ability to adapt their motor commands. Proprioceptive correlations produced an internal model that strongly influenced reaching performance yet did not lead to awareness. Visual correlations strongly increased the probability of becoming aware, yet had a much smaller but still significant effect on reaching performance. Therefore, practice resulted in acquisition of both implicit and explicit internal models.


Experimental Brain Research | 2006

Adaptation and generalization in acceleration-dependent force fields

Eun Jung Hwang; Maurice A. Smith; Reza Shadmehr

Any passive rigid inertial object that we hold in our hand, e.g., a tennis racquet, imposes a field of forces on the arm that depends on limb position, velocity, and acceleration. A fundamental characteristic of this field is that the forces due to acceleration and velocity are linearly separable in the intrinsic coordinates of the limb. In order to learn such dynamics with a collection of basis elements, a control system would generalize correctly and therefore perform optimally if the basis elements that were sensitive to limb velocity were not sensitive to acceleration, and vice versa. However, in the mammalian nervous system proprioceptive sensors like muscle spindles encode a nonlinear combination of all components of limb state, with sensitivity to velocity dominating sensitivity to acceleration. Therefore, limb state in the space of proprioception is not linearly separable despite the fact that this separation is a desirable property of control systems that form models of inertial objects. In building internal models of limb dynamics, does the brain use a representation that is optimal for control of inertial objects, or a representation that is closely tied to how peripheral sensors measure limb state? Here we show that in humans, patterns of generalization of reaching movements in acceleration-dependent fields are strongly inconsistent with basis elements that are optimized for control of inertial objects. Unlike a robot controller that models the dynamics of the natural world and represents velocity and acceleration independently, internal models of dynamics that people learn appear to be rooted in the properties of proprioception, nonlinearly responding to the pattern of muscle activation and representing velocity more strongly than acceleration.


Neuron | 2012

Inactivation of the Parietal Reach Region Causes Optic Ataxia, Impairing Reaches but Not Saccades

Eun Jung Hwang; Markus Hauschild; Melanie Wilke; Richard A. Andersen

Lesions in human posterior parietal cortex can cause optic ataxia (OA), in which reaches but not saccades to visual objects are impaired, suggesting separate visuomotor pathways for the two effectors. In monkeys, one potentially crucial area for reach control is the parietal reach region (PRR), in which neurons respond preferentially during reach planning as compared to saccade planning. However, direct causal evidence linking the monkey PRR to the deficits observed in OA is missing. We thus inactivated part of the macaque PRR, in the medial wall of the intraparietal sulcus, and produced the hallmarks of OA, misreaching for peripheral targets but unimpaired saccades. Furthermore, reach errors were larger for the targets preferred by the neural population local to the injection site. These results demonstrate that PRR is causally involved in reach-specific visuomotor pathways, and reach goal disruption in PRR can be a neural basis of OA.


Neuron | 2014

Optic Ataxia: From Balint’s Syndrome to the Parietal Reach Region

Richard A. Andersen; Kristen N. Andersen; Eun Jung Hwang; Markus Hauschild

Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome, which also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia result from damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects of damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp.


Journal of Neural Engineering | 2013

The utility of multichannel local field potentials for brain-machine interfaces.

Eun Jung Hwang; Richard A. Andersen

OBJECTIVE: Local field potentials (LFPs) that carry information about the subject's motor intention have the potential to serve as a complement or alternative to spike signals for brain-machine interfaces (BMIs). The goal of this study is to assess the utility of LFPs for BMIs by characterizing the largely unknown information coding properties of multichannel LFPs.

APPROACH: Two monkeys were each implanted with a 16-channel electrode array in the parietal reach region, where both LFPs and spikes are known to encode the subject's intended reach target. We examined how multichannel LFPs recorded during a reach task jointly carry reach target information, and compared the LFP performance to simultaneously recorded multichannel spikes.

MAIN RESULTS: LFPs yielded a higher number of channels that were informative about reach targets than spikes. Single-channel LFPs provided more accurate target information than single-channel spikes. However, LFPs showed significantly larger signal and noise correlations across channels than spikes. Reach target decoders performed worse when using multichannel LFPs than multichannel spikes. The underperformance of multichannel LFPs was mostly due to their larger noise correlation, because noise-decorrelated multichannel LFPs produced a decoding accuracy comparable to multichannel spikes. Despite the high noise correlation, decoders using LFPs in addition to spikes outperformed decoders using only spikes.

SIGNIFICANCE: These results demonstrate that multichannel LFPs could effectively complement spikes for BMI applications by yielding more informative channels. The utility of multichannel LFPs may be further augmented if their high noise correlation can be taken into account by decoders.
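Why shared (correlated) noise hurts a multichannel decoder can be shown with a toy simulation. This is illustrative only, not the paper's data or decoder: a two-target task, identical signal on every channel, and a simple mean-across-channels readout, with the noise correlation level as the only variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(noise_corr, n_trials=2000, n_ch=16, signal=0.5):
    """Two-target decoding accuracy with a mean-across-channels readout.
    Each channel carries the same target signal (+/- `signal`) plus
    Gaussian noise with pairwise correlation `noise_corr`."""
    targets = rng.integers(0, 2, n_trials) * 2 - 1  # -1 or +1 per trial
    cov = np.full((n_ch, n_ch), noise_corr) + (1 - noise_corr) * np.eye(n_ch)
    noise = rng.multivariate_normal(np.zeros(n_ch), cov, n_trials)
    x = signal * targets[:, None] + noise
    decoded = np.sign(x.mean(axis=1))
    return (decoded == targets).mean()

# Shared noise does not average away across channels, so adding channels
# helps far less when the noise is correlated (LFP-like) than when it is
# independent (spike-like).
acc_correlated = simulate(noise_corr=0.8)
acc_independent = simulate(noise_corr=0.0)
```

This is the intuition behind the abstract's finding that noise-decorrelated multichannel LFPs recover a decoding accuracy comparable to multichannel spikes.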


Current Biology | 2013

Volitional Control of Neural Activity Relies on the Natural Motor Repertoire

Eun Jung Hwang; Paul M. Bailey; Richard A. Andersen

BACKGROUND: The results from recent brain-machine interface (BMI) studies suggest that it may be more efficient to use simple arbitrary relationships between individual neuron activity and BMI movements than the complex relationship observed between neuron activity and natural movements. This idea is based on the assumption that individual neurons can be conditioned independently, regardless of their natural movement association.

RESULTS: We tested this assumption in the parietal reach region (PRR), an important candidate area for BMIs in which neurons encode the target location for reaching movements. Monkeys could learn to elicit arbitrarily assigned activity patterns, but the seemingly arbitrary patterns always belonged to the response set for natural reaching movements. Moreover, neurons that were free from conditioning showed responses correlated with the conditioned neurons, as if they encoded common reach targets. Thus, learning was accomplished by finding reach targets (the intrinsic variable of PRR neurons) for which the natural response of reach planning could approximate the arbitrary patterns.

CONCLUSIONS: Our results suggest that animals learn to volitionally control single-neuron activity in PRR by preferentially exploring and exploiting their natural movement repertoire. Thus, for optimal performance, BMIs utilizing neural signals in PRR should harness, not disregard, the activity patterns in the natural sensorimotor repertoire.

Collaboration


Top co-authors of Eun Jung Hwang:

Richard A. Andersen, California Institute of Technology
Markus Hauschild, University of Southern California
Reza Shadmehr, Johns Hopkins University
Melanie Wilke, University of Göttingen
Grant H. Mulliken, McGovern Institute for Brain Research
Hiroshi Makino, University of California