
Publication


Featured research published by Adam G. Rouse.


Journal of Neural Engineering | 2011

A chronic generalized bi-directional brain–machine interface

Adam G. Rouse; Scott R. Stanslaski; Peng Cong; Randy M. Jensen; Pedram Afshar; D. Ullestad; Rahul Gupta; Gregory F. Molnar; Daniel W. Moran; Timothy J. Denison

A bi-directional neural interface (NI) system was designed and prototyped by incorporating a novel neural recording and processing subsystem into a commercial neural stimulator architecture. The NI system prototype leverages the system infrastructure from an existing neurostimulator to ensure reliable operation in a chronic implantation environment. In addition to providing predicate therapy capabilities, the device adds key elements to facilitate chronic research, such as four channels of electrocorticogram/local field potential amplification and spectral analysis, a three-axis accelerometer, algorithm processing, event-based data logging, and wireless telemetry for data uploads and algorithm/configuration updates. The custom-integrated micropower sensor and interface circuits facilitate extended operation in a power-limited device. The prototype underwent significant verification testing to ensure reliability, and meets the requirements for a class CF instrument per IEC-60601 protocols. The ability of the device system to process and aid in classifying brain states was preclinically validated using an in vivo non-human primate model for brain control of a computer cursor (i.e. brain-machine interface or BMI). The primate BMI model was chosen for its ability to quantitatively measure signal decoding performance from brain activity that is similar in both amplitude and spectral content to other biomarkers used to detect disease states (e.g. Parkinson's disease). A key goal of this research prototype is to help broaden the clinical scope and acceptance of NI techniques, particularly real-time brain state detection. These techniques have the potential to be generalized beyond motor prostheses, and are being explored for unmet needs in other neurological conditions such as movement disorders, stroke and epilepsy.


Neurosurgical Focus | 2009

Evolution of brain-computer interfaces: going beyond classic motor physiology.

Eric C. Leuthardt; Jarod L. Roland; Adam G. Rouse; Daniel W. Moran

The notion that a computer can decode brain signals to infer the intentions of a human and then enact those intentions directly through a machine is becoming a realistic technical possibility. These types of devices are known as brain-computer interfaces (BCIs). The evolution of these neuroprosthetic technologies could have significant implications for patients with motor disabilities by enhancing their ability to interact and communicate with their environment. The cortical physiology most investigated and used for device control has been brain signals from the primary motor cortex. To date, this classic motor physiology has been an effective substrate for demonstrating the potential efficacy of BCI-based control. However, emerging research now stands to further enhance our understanding of the cortical physiology underpinning human intent and provide further signals for more complex brain-derived control. In this review, the authors report the current status of BCIs and detail the emerging research trends that stand to augment clinical applications in the future.


The Journal of Neuroscience | 2013

Cortical Adaptation to a Chronic Micro-Electrocorticographic Brain Computer Interface

Adam G. Rouse; Jordan J. Williams; Jesse J. Wheeler; Daniel W. Moran

Brain–computer interface (BCI) technology decodes neural signals in real time to control external devices. In this study, chronic epidural micro-electrocorticographic recordings were performed over primary motor (M1) and dorsal premotor (PMd) cortex of three macaque monkeys. The differential gamma-band amplitude (75–105 Hz) from two arbitrarily chosen 300 μm electrodes (one located over each cortical area) was used for closed-loop control of a one-dimensional BCI device. Each monkey rapidly learned over a period of days to successfully control the velocity of a computer cursor. While both cortical areas contributed to success on the BCI task, the control signals from M1 were consistently modulated more strongly than those from PMd. Additionally, we observe that gamma-band power during active BCI control is always above resting brain activity. This suggests that purposeful gamma-band modulation is an active process that is obtained through increased cortical activation.
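The control scheme in the abstract above (differential 75–105 Hz amplitude from two electrodes driving one-dimensional cursor velocity) can be sketched as follows. This is a simplified illustration, not the authors' implementation; the FFT-based band-power estimator, the gain, and the function and channel names are all assumptions:

```python
import numpy as np

def band_power(x, fs, lo=75.0, hi=105.0):
    """Mean power of signal x in the [lo, hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def cursor_velocity(m1_chan, pmd_chan, fs, gain=1.0):
    """Differential gamma-band control signal: M1 band power minus
    PMd band power, scaled by a gain, drives 1D cursor velocity."""
    return gain * (band_power(m1_chan, fs) - band_power(pmd_chan, fs))
```

Stronger gamma modulation on the M1 channel drives the cursor one way, stronger modulation on the PMd channel the other.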


International Conference of the IEEE Engineering in Medicine and Biology Society | 2009

Neural adaptation of epidural electrocorticographic (EECoG) signals during closed-loop brain computer interface (BCI) tasks

Adam G. Rouse; Daniel W. Moran

Invasive BCI studies have classically relied on actual or imagined movements to train their neural decoding algorithms. In this study, non-human primates were required to perform a 2D BCI task using epidural microECoG recordings. The decoding weights and cortical locations of the electrodes used for control were randomly chosen and fixed for a series of daily recording sessions over five days. Over a period of one week, the subjects learned to accurately control a 2D computer cursor through neural adaptation of microECoG signals over “cortical control columns” having diameters on the order of a few mm. These results suggest that the spatial resolution of microECoG recordings can be increased via neural plasticity.


Journal of Neural Engineering | 2013

Differentiating closed-loop cortical intention from rest: building an asynchronous electrocorticographic BCI

Jordan J. Williams; Adam G. Rouse; Sanitta Thongpang; Justin C. Williams; Daniel W. Moran

OBJECTIVE: Recent experiments have shown that electrocorticography (ECoG) can provide robust control signals for a brain-computer interface (BCI). Strategies that attempt to adapt a BCI control algorithm by learning from past trials often assume that the subject is attending to each training trial. Likewise, automatic disabling of movement control would be desirable during resting periods when random brain fluctuations might cause unintended movements of a device. To this end, our goal was to identify ECoG differences that arise between periods of active BCI use and rest.

APPROACH: We examined spectral differences in multi-channel, epidural micro-ECoG signals recorded from non-human primates when rest periods were interleaved between blocks of an active BCI control task.

MAIN RESULTS: Post-hoc analyses demonstrated that these states can be decoded accurately on both a trial-by-trial and real-time basis, and this discriminability remains robust over a period of weeks. In addition, high gamma frequencies showed greater modulation with desired movement direction, while lower frequency components demonstrated greater amplitude differences between task and rest periods, suggesting possible specialized BCI roles for these frequencies.

SIGNIFICANCE: The results presented here provide valuable insight into the neurophysiology of BCI control as well as important considerations toward the design of an asynchronous BCI system.


Frontiers in Systems Neuroscience | 2015

Advancing brain-machine interfaces: moving beyond linear state space models

Adam G. Rouse; Marc H. Schieber

Advances in recent years have dramatically improved output control by Brain-Machine Interfaces (BMIs). Such devices nevertheless remain robotic and limited in their movements compared to normal human motor performance. Most current BMIs rely on transforming recorded neural activity to a linear state space composed of a set number of fixed degrees of freedom. Here we consider a variety of ways in which BMI design might be advanced further by applying non-linear dynamics observed in normal motor behavior. We consider (i) the dynamic range and precision of natural movements, (ii) differences between cortical activity and actual body movement, (iii) kinematic and muscular synergies, and (iv) the implications of large neuronal populations. We advance the hypothesis that a given population of recorded neurons may transmit more useful information than can be captured by a single, linear model across all movement phases and contexts. We argue that incorporating these various non-linear characteristics will be an important next step in advancing BMIs to more closely match natural motor performance.


International Conference on Robotics and Automation | 2016

High Precision Neural Decoding of Complex Movement Trajectories Using Recursive Bayesian Estimation With Dynamic Movement Primitives

Guy Hotson; Ryan J. Smith; Adam G. Rouse; Marc H. Schieber; Nitish V. Thakor; Brock A. Wester

Brain-machine interfaces (BMIs) are a rapidly progressing technology with the potential to restore function to victims of severe paralysis via neural control of robotic systems. Great strides have been made in directly mapping a user's cortical activity to control of the individual degrees of freedom of robotic end-effectors. While BMIs have yet to achieve the level of reliability desired for widespread clinical use, environmental sensors (e.g., RGB-D cameras for object detection) and prior knowledge of common movement trajectories hold great potential for improving system performance. Here, we present a novel sensor fusion paradigm for BMIs that capitalizes on information that can be extracted from the environment to greatly improve the performance of control. This was accomplished by using dynamic movement primitives to model the 3-D endpoint trajectories of manipulating various objects. We then used a switching unscented Kalman filter to continuously arbitrate between the 3-D endpoint kinematics predicted by the dynamic movement primitives and control derived from neural signals. We experimentally validated our system by decoding 3-D endpoint trajectories executed by a nonhuman primate manipulating four different objects at various locations. Performance using our system showed a dramatic improvement over using neural signals alone, with median distance between actual and decoded trajectories decreasing from 31.1 to 9.9 cm, and mean correlation increasing from 0.80 to 0.98. Our results indicate that our sensor fusion framework can dramatically increase the fidelity of neural prosthetic trajectory decoding.
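The paper arbitrates between trajectory-model predictions and neural control with a switching unscented Kalman filter. As a much-simplified, hypothetical illustration of the underlying idea, a scalar Kalman update can blend a movement-primitive prediction (the prior) with a noisy neural decode (the observation), weighting each by its variance; the function and variable names below are assumptions, not the paper's API:

```python
def fuse_step(prior_pos, prior_var, neural_obs, obs_var):
    """One Kalman-style update: blend a trajectory-model prediction
    (prior) with a noisy neural-decoded position (observation).
    Lower observation variance pulls the estimate toward the decode;
    lower prior variance pulls it toward the movement primitive."""
    gain = prior_var / (prior_var + obs_var)          # Kalman gain
    pos = prior_pos + gain * (neural_obs - prior_pos)  # fused estimate
    var = (1.0 - gain) * prior_var                     # reduced uncertainty
    return pos, var
```

With equal variances the estimate lands midway between model and decode; as decode noise grows, the trajectory model dominates, which is the intuition behind the reported accuracy gains.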


The Journal of Neuroscience | 2016

Spatiotemporal Distribution of Location and Object Effects in Primary Motor Cortex Neurons during Reach-to-Grasp

Adam G. Rouse; Marc H. Schieber

Reaching and grasping typically are considered to be spatially separate processes that proceed concurrently in the arm and the hand, respectively. The proximal representation in the primary motor cortex (M1) controls the arm for reaching, while the distal representation controls the hand for grasping. Many studies of M1 activity therefore have focused either on reaching to various locations without grasping different objects, or else on grasping different objects all at the same location. Here, we recorded M1 neurons in the anterior bank and lip of the central sulcus as monkeys performed more naturalistic movements, reaching toward, grasping, and manipulating four different objects in up to eight different locations. We quantified the extent to which variation in firing rates depended on location, on object, and on their interaction—all as a function of time. Activity proceeded largely in two sequential phases: the first related predominantly to the location to which the upper extremity reached, and the second related to the object about to be grasped. Both phases involved activity distributed widely throughout the sampled territory, spanning both the proximal and the distal upper extremity representation in caudal M1. Our findings indicate that naturalistic reaching and grasping, rather than being spatially segregated processes that proceed concurrently, each are spatially distributed processes controlled by caudal M1 in large part sequentially. Rather than neuromuscular processes separated in space but not time, reaching and grasping are separated more in time than in space.

SIGNIFICANCE STATEMENT: Reaching and grasping typically are viewed as processes that proceed concurrently in the arm and hand, respectively. The arm region in the primary motor cortex (M1) is assumed to control reaching, while the hand region controls grasping. During naturalistic reach–grasp–manipulate movements, we found, however, that neuron activity proceeds largely in two sequential phases, each spanning both arm and hand representations in M1. The first phase is related predominantly to the reach location, and the second is related to the object about to be grasped. Our findings indicate that reaching and grasping are successive aspects of a single movement. Initially the arm and the hand both are projected toward the object's location, and later both are shaped to grasp and manipulate.


International IEEE/EMBS Conference on Neural Engineering | 2011

Validation of chronic implantable neural sensing technology using electrocorticographic (ECoG) based brain machine interfaces

Pedram Afshar; Daniel W. Moran; Adam G. Rouse; Xuan Wei; Tim Denison

This paper describes the use of an electrocorticographic (ECoG) based brain machine interface (BMI) as a validation tool for a chronic, embedded neural sensing device. This device is designed for basic science and clinical research in neurological diseases. Using the device in a BMI application allows the comparison of quantifiable validation metrics against off-the-shelf sensing methods, and its signals represent the types of signals expected in the clinical disease state.


Journal of Neurophysiology | 2015

Spatiotemporal distribution of location and object effects in reach-to-grasp kinematics

Adam G. Rouse; Marc H. Schieber

In reaching to grasp an object, the arm transports the hand to the intended location as the hand shapes to grasp the object. Prior studies that tracked arm endpoint and grip aperture have shown that reaching and grasping, while proceeding in parallel, are interdependent to some degree. Other studies of reaching and grasping that have examined the joint angles of all five digits as the hand shapes to grasp various objects have not tracked the joint angles of the arm as well. We, therefore, examined 22 joint angles from the shoulder to the five digits as monkeys reached, grasped, and manipulated in a task that dissociated location and object. We quantified the extent to which each angle varied depending on location, on object, and on their interaction, all as a function of time. Although joint angles varied depending on both location and object beginning early in the movement, an early phase of location effects in joint angles from the shoulder to the digits was followed by a later phase in which object effects predominated at all joint angles distal to the shoulder. Interaction effects were relatively small throughout the reach-to-grasp. Whereas reach trajectory was influenced substantially by the object, grasp shape was comparatively invariant to location. Our observations suggest that neural control of reach-to-grasp may occur largely in two sequential phases: the first determining the location to which the arm transports the hand, and the second shaping the entire upper extremity to grasp and manipulate the object.
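Quantifying how much each joint angle depends on location, on object, and on their interaction amounts to a two-way variance partition at each time point. A minimal sketch for a balanced design follows; the function name, array layout, and the omission of the paper's time dimension are all simplifying assumptions:

```python
import numpy as np

def two_way_ss(data):
    """Partition a balanced (location x object x trial) array into
    location, object, and interaction sums of squares, as in a
    two-way ANOVA without the residual term."""
    n_loc, n_obj, n_rep = data.shape
    grand = data.mean()
    loc_means = data.mean(axis=(1, 2))   # marginal mean per location
    obj_means = data.mean(axis=(0, 2))   # marginal mean per object
    cell_means = data.mean(axis=2)       # mean per (location, object) cell
    ss_loc = n_obj * n_rep * ((loc_means - grand) ** 2).sum()
    ss_obj = n_loc * n_rep * ((obj_means - grand) ** 2).sum()
    ss_int = n_rep * ((cell_means
                       - loc_means[:, None]
                       - obj_means[None, :]
                       + grand) ** 2).sum()
    return ss_loc, ss_obj, ss_int
```

Running this at each time sample would trace out the early location-dominated and later object-dominated phases the abstract describes.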

Collaboration


Dive into Adam G. Rouse's collaborations.

Top Co-Authors

Daniel W. Moran, Washington University in St. Louis
Jordan J. Williams, University of Wisconsin-Madison
Jesse J. Wheeler, Washington University in St. Louis
Ryan J. Smith, Johns Hopkins University
Nitish V. Thakor, National University of Singapore
Eric C. Leuthardt, Washington University in St. Louis
Jarod L. Roland, Washington University in St. Louis
Justin C. Williams, University of Wisconsin-Madison
Rahul Gupta, West Virginia University