Publication


Featured research published by Marcus Johnson.


Automatica | 2013

A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems

Shubhendu Bhasin; Rushikesh Kamalapurkar; Marcus Johnson; Kyriakos G. Vamvoudakis; Frank L. Lewis; Warren E. Dixon

An online adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem for continuous-time uncertain nonlinear systems. A novel actor-critic-identifier (ACI) architecture is proposed to approximate the Hamilton-Jacobi-Bellman equation using three neural network (NN) structures: the actor and critic NNs approximate the optimal control and the optimal value function, respectively, and a robust dynamic neural network identifier asymptotically approximates the uncertain system dynamics. An advantage of using the ACI architecture is that learning by the actor, critic, and identifier is continuous and simultaneous, without requiring knowledge of system drift dynamics. Convergence of the algorithm is analyzed using Lyapunov-based adaptive control methods. A persistence of excitation condition is required to guarantee exponential convergence to a bounded region in the neighborhood of the optimal control and uniformly ultimately bounded (UUB) stability of the closed-loop system. Simulation results demonstrate the performance of the actor-critic-identifier method for approximate optimal control.
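As a rough illustration of the ACI pattern described above (not the paper's continuous-time NN design), the following toy scalar example runs identifier, critic, and actor updates simultaneously. The plant, gains, probing signal, and quadratic value parameterization are all invented for this sketch.

```python
import math

# Toy scalar analogue of the actor-critic-identifier (ACI) pattern.
# Plant: x_dot = a*x + u, with drift a unknown to the learner.
# Identifier adapts a_hat; critic adapts w_c in V(x) ~ w_c*x^2 for the
# running cost r = x^2 + u^2; actor uses u = -0.5*dV/dx = -w_c*x.
a_true = 1.0          # unknown drift (used only to simulate the plant)
a_hat, w_c = 0.0, 1.5  # identifier and critic estimates
x, dt, t = 1.0, 0.001, 0.0
k_id, k_c = 5.0, 0.5   # adaptation gains (chosen for illustration)

for _ in range(20000):
    probe = 0.2 * math.sin(3.0 * t)       # excitation signal (plays the
                                          # role of the PE condition)
    u = -w_c * x + probe                  # actor policy plus probing
    x_dot = a_true * x + u                # "measured" plant derivative
    a_hat += dt * k_id * (x_dot - (a_hat * x + u)) * x   # identifier update
    bellman = (x * x + u * u) + 2.0 * w_c * x * x_dot    # Bellman residual
    w_c -= dt * k_c * bellman * (2.0 * x * x_dot)        # critic gradient step
    x += dt * x_dot
    t += dt
```

All three estimates adapt in the same loop iteration, mirroring the "continuous and simultaneous" learning the abstract highlights; the probing term keeps the regressors excited so the identifier converges.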


Automatica | 2010

Brief paper: Composite adaptive control for Euler-Lagrange systems with additive disturbances

Parag M. Patre; William MacKunis; Marcus Johnson; Warren E. Dixon

In a typical adaptive update law, the rate of adaptation is generally a function of the state feedback error. Ideally, the adaptive update law would also include some feedback of the parameter estimation error. The desire to include some measurable form of the parameter estimation error in the adaptation law resulted in the development of composite adaptive update laws that are functions of a prediction error and the state feedback. In all previous composite adaptive controllers, the formulation of the prediction error is predicated on the critical assumption that the system uncertainty is linear in the uncertain parameters (LP uncertainty). The presence of additive disturbances that are not LP would destroy the prediction error formulation and stability analysis arguments in previous results. In this paper, a new prediction error formulation is constructed through the use of a recently developed Robust Integral of the Sign of the Error (RISE) technique. The contribution of this design and associated stability analysis is that the prediction error can be developed even with disturbances that do not satisfy the LP assumption (e.g., additive bounded disturbances). A composite adaptive controller is developed for a general MIMO Euler-Lagrange system with mixed structured (i.e., LP) and unstructured uncertainties. A Lyapunov-based stability analysis is used to derive sufficient gain conditions under which the proposed controller yields semi-global asymptotic tracking. Experimental results are presented to illustrate the approach.
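The composite idea above can be sketched in a toy scalar example: the parameter estimate is driven by both the tracking error e and a prediction error eps. For illustration only, x_dot is assumed directly measurable so the prediction error can be formed trivially; the paper's RISE-based construction exists precisely to avoid assumptions like this, and all gains here are made up.

```python
import math

# Sketch of a composite adaptive update law on the scalar LP plant
# x_dot = theta*x + u, with theta unknown to the controller.
theta = 2.0                  # unknown parameter (plant side only)
theta_hat = 0.0
x, dt, t = 0.0, 0.001, 0.0
k, g_e, g_p = 5.0, 5.0, 5.0  # feedback gain and two adaptation gains

for _ in range(20000):
    xd, xd_dot = math.sin(t), math.cos(t)        # desired trajectory
    e = x - xd                                   # tracking error
    u = xd_dot - theta_hat * x - k * e           # adaptive feedback control
    x_dot = theta * x + u                        # plant response
    eps = x_dot - (theta_hat * x + u)            # prediction error
    theta_hat += dt * (g_e * x * e + g_p * x * eps)  # composite update:
                                                 # tracking + prediction terms
    x += dt * x_dot
    t += dt
```

The prediction-error term makes the estimate converge even when the tracking error is already small, which is the practical benefit composite adaptation is designed to deliver.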


IEEE Transactions on Control Systems Technology | 2012

Closed-Loop Neural Network-Based NMES Control for Human Limb Tracking

Nitin Sharma; Chris M. Gregory; Marcus Johnson; Warren E. Dixon

Closed-loop control of skeletal muscle is complicated by the nonlinear muscle force to length and velocity relationships and the inherent unstructured and time-varying uncertainties in available models. Some pure feedback methods have been developed with some success, but the most promising and popular control methods for neuromuscular electrical stimulation (NMES) are neural network (NN)-based methods. Efforts in this paper focus on the use of an NN feedforward controller that is augmented with a continuous robust feedback term to yield an asymptotic result (in lieu of typical uniformly ultimately bounded stability). Specifically, an NN-based controller and Lyapunov-based stability analysis are provided to enable semi-global asymptotic tracking of a desired time-varying limb trajectory (i.e., non-isometric contractions). The developed controller is applied as an amplitude modulated voltage to external electrodes attached to the distal-medial and proximal-lateral portion of the quadriceps femoris muscle group in non-impaired volunteers. The added value of incorporating an NN feedforward term is illustrated through experiments that compare the developed controller with and without the NN feedforward component.
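The control structure described above, an adaptive feedforward term plus a continuous robust feedback term, can be sketched on a toy scalar plant. This is not the paper's controller: the "NN" is a single adaptive neuron, tanh is used as a smooth stand-in for the discontinuous sign function, and the plant, disturbance, and gains are invented.

```python
import math

# One-neuron feedforward with online weight adaptation, plus a
# continuous robust feedback term, on the toy plant
# x_dot = a*tanh(x) + u + d, where a is unknown and d is a bounded
# additive disturbance.
a_true = 1.0                     # unknown coefficient (plant side only)
w_hat = 0.0                      # adaptive feedforward weight
x, dt, t = 0.0, 0.001, 0.0
k, beta, gamma = 5.0, 0.3, 10.0  # feedback, robust, and adaptation gains

for _ in range(20000):
    xd, xd_dot = math.sin(t), math.cos(t)        # desired trajectory
    e = x - xd                                   # tracking error
    d = 0.2 * math.sin(5.0 * t)                  # bounded disturbance
    # control = trajectory feedforward - adaptive term - linear feedback
    #           - smooth robust term (beta chosen >= the disturbance bound)
    u = xd_dot - w_hat * math.tanh(x) - k * e - beta * math.tanh(e / 0.01)
    x_dot = a_true * math.tanh(x) + u + d        # plant response
    w_hat += dt * gamma * math.tanh(x) * e       # weight adaptation law
    x += dt * x_dot
    t += dt
```

Removing the adaptive feedforward term (setting gamma to zero) leaves the linear and robust feedback to absorb the full model mismatch, which is the comparison the abstract's experiments perform.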


Conference on Decision and Control | 2011

Nonlinear two-player zero-sum game approximate solution using a Policy Iteration algorithm

Marcus Johnson; Shubhendu Bhasin; Warren E. Dixon

An approximate online solution is developed for a two-player zero-sum game subject to continuous-time nonlinear uncertain dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier (ACI) structure is used to implement the Policy Iteration (PI) algorithm, wherein a robust dynamic neural network (DNN) is used to asymptotically identify the uncertain system, and a critic NN is used to approximate the value function. The weight update laws for the critic NN are generated using a gradient-descent method based on a modified temporal difference error, which is independent of the system dynamics. This method finds approximations of the optimal value function and the saddle-point feedback control policies. These policies are computed using the critic NN and the identifier DNN and guarantee uniformly ultimately bounded (UUB) stability of the closed-loop system. The actor, critic and identifier structures are implemented in real time, continuously and simultaneously.
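For background, the saddle-point policies mentioned above take the standard Hamilton-Jacobi-Isaacs (HJI) form for dynamics $\dot{x} = f(x) + g(x)u + k(x)d$ with the usual quadratic zero-sum cost; this is textbook game-theoretic material, not quoted from the paper.

```latex
% HJI equation for the zero-sum game with cost
% \int_0^\infty \left( x^\top Q x + u^\top R u - \gamma^2 d^\top d \right) dt:
0 = x^\top Q x + V_x^\top f(x)
    - \tfrac{1}{4} V_x^\top g(x) R^{-1} g(x)^\top V_x
    + \tfrac{1}{4\gamma^2} V_x^\top k(x) k(x)^\top V_x
% Saddle-point policies (minimizing player u, maximizing player d):
u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top V_x, \qquad
d^*(x) = \tfrac{1}{2\gamma^2} k(x)^\top V_x
```

In the ACI scheme, the critic NN approximates $V$ (and hence $V_x$), and both policies are then computed from that single critic approximation together with the identifier DNN.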


Conference on Decision and Control | 2010

A model-free robust policy iteration algorithm for optimal control of nonlinear systems

Shubhendu Bhasin; Marcus Johnson; Warren E. Dixon

An online model-free solution is developed for the infinite-horizon optimal control problem for continuous-time nonlinear systems. A novel actor-critic-identifier (ACI) structure is used to implement the Policy Iteration algorithm, wherein two neural network structures are used: a robust dynamic neural network (DNN) to asymptotically identify the uncertain system with additive disturbances, and a critic NN to approximate the value function. The weight update laws for the critic NN are generated using a gradient-descent method based on a modified temporal difference error, which is independent of the system dynamics. The optimal control law (or the actor) is computed using the critic NN and the identifier DNN. Uniformly ultimately bounded (UUB) stability of the closed-loop system is guaranteed. The actor, critic and identifier structures are implemented in real time, continuously and simultaneously.


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Adaptive Inverse Optimal Neuromuscular Electrical Stimulation

Qiang Wang; Nitin Sharma; Marcus Johnson; Chris M. Gregory; Warren E. Dixon

Neuromuscular electrical stimulation (NMES) is a prescribed treatment for various neuromuscular disorders, where an electrical stimulus is provided to elicit a muscle contraction. Barriers to the development of NMES controllers exist because the muscle response to an electrical stimulation is nonlinear and the muscle model is uncertain. Efforts in this paper focus on the development of an adaptive inverse optimal NMES controller. The controller yields desired limb trajectory tracking while simultaneously minimizing a cost functional that is positive in the error states and stimulation input. The development of this framework allows tradeoffs to be made between tracking performance and control effort by putting different penalties on error states and control input, depending on the clinical goal or functional task. The controller is examined through a Lyapunov-based analysis. Experiments on able-bodied individuals are provided to demonstrate the performance of the developed controller.
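As general background on the inverse optimal approach (standard inverse optimal control theory, not this paper's specific functional): rather than prescribing a cost and solving the HJB equation for the control, inverse optimal design constructs a stabilizing feedback first and then exhibits a cost that the feedback minimizes, of the form

```latex
% Generic inverse-optimal cost: positive in the error state e and the
% (stimulation) input u, minimized by the designed feedback.
J(e, u) = \int_0^\infty \left( q\big(e(\tau)\big)
          + u(\tau)^\top R\big(e(\tau)\big)\, u(\tau) \right) d\tau,
\qquad q(e) > 0, \quad R(e) = R(e)^\top > 0
```

Weighting $q$ more heavily penalizes tracking error at the price of larger stimulation, and weighting $R$ more heavily does the reverse, which is the performance-versus-effort tradeoff the abstract describes.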


Conference on Decision and Control | 2010

Asymptotic Stackelberg optimal control design for an uncertain Euler-Lagrange system

Marcus Johnson; Takashi Hiramatsu; Warren E. Dixon

Game theory methods have advanced various disciplines, from the social sciences, notably economics, to biology and engineering. Game theory establishes an optimal strategy for multiple players in either a cooperative or noncooperative manner, where the objective is to reach an equilibrium state among the players. A Stackelberg game strategy involves a leader and a follower in a hierarchical relationship in which the leader enforces its strategy on the follower. In this paper, a general framework is developed for feedback control of an Euler-Lagrange system using an open-loop Stackelberg differential game. A Robust Integral of the Sign of the Error (RISE) controller is used to cancel uncertain nonlinearities in the system, and a Stackelberg optimal controller is used for stabilization in the presence of uncertainty. A Lyapunov analysis is provided to examine the stability of the developed controller.
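The RISE feedback term mentioned above has a standard structure in this literature, reproduced here as general background rather than this paper's specific development, with $e$ a tracking error and $k_s$, $\alpha$, $\beta$ positive gains:

```latex
% Standard RISE (Robust Integral of the Sign of the Error) feedback:
% the control signal itself is continuous, while its time derivative
% contains sgn(e), allowing asymptotic rejection of sufficiently
% smooth bounded disturbances.
u(t) = (k_s + 1)\, e(t) - (k_s + 1)\, e(0)
       + \int_0^t \left[ (k_s + 1)\,\alpha\, e(\sigma)
       + \beta\, \mathrm{sgn}\big(e(\sigma)\big) \right] d\sigma
```

Because the sign function appears only under the integral, the implemented control stays continuous, in contrast to classical sliding-mode laws that apply $\mathrm{sgn}(e)$ directly.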


IEEE Transactions on Neural Networks and Learning Systems | 2015

Approximate N-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System

Marcus Johnson; Rushikesh Kamalapurkar; Shubhendu Bhasin; Warren E. Dixon

An approximate online equilibrium solution is developed for an N-player nonzero-sum game subject to continuous-time nonlinear unknown dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier structure is used, wherein a robust dynamic neural network is used to asymptotically identify the uncertain system with additive disturbances, and a set of critic and actor NNs are used to approximate the value functions and equilibrium policies, respectively. The weight update laws for the actor neural networks (NNs) are generated using a gradient-descent method, and the critic NNs are generated by least-squares regression, which are both based on the modified Bellman error that is independent of the system dynamics. A Lyapunov-based stability analysis shows that uniformly ultimately bounded tracking is achieved, and a convergence analysis demonstrates that the approximate control policies converge to a neighborhood of the optimal solutions. The actor, critic, and identifier structures are implemented in real time, continuously and simultaneously. Simulations on two- and three-player games illustrate the performance of the developed method.


International Symposium on Intelligent Control | 2008

N

Nitin Sharma; Chris M. Gregory; Marcus Johnson; Warren E. Dixon

Closed-loop control of skeletal muscle is complicated by the nonlinear muscle force to length relationship and the inherent unstructured and time-varying uncertainties in available models. Some pure feedback methods have been developed with some success, but the most promising and popular control methods for neuromuscular electrical stimulation (NMES) are neural network-based methods. Neural networks provide a function approximation of the muscle model; however, a function reconstruction error limits the steady-state response of typical controllers (i.e., previous controllers are only uniformly ultimately bounded). Motivated by the desire to obtain improved steady-state performance, efforts in this paper focus on the use of a neural network feedforward controller that is augmented with a continuous robust feedback term to yield an asymptotic result. Specifically, a Lyapunov-based controller and stability analysis are provided to demonstrate semi-global asymptotic tracking of a desired time-varying trajectory (i.e., non-isometric contractions). Experimental results are provided to demonstrate the performance of the developed controller, where NMES is applied through external electrodes attached to the distal-medial and proximal-lateral portion of the human quadriceps femoris muscle group.


Conference on Decision and Control | 2012

-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System

Kyriakos G. Vamvoudakis; Frank L. Lewis; Marcus Johnson; Warren E. Dixon

This paper presents an online adaptive optimal control algorithm based on policy iteration reinforcement learning techniques to solve continuous-time Stackelberg games with infinite horizon for linear systems. This adaptive optimal control method finds, in real time, approximations of the optimal value and the Stackelberg-equilibrium solution, while also guaranteeing closed-loop stability. The optimal-adaptive algorithm is implemented as a separate actor/critic parametric network approximator structure for every player, and involves simultaneous continuous-time adaptation of the actor/critic networks. Novel tuning algorithms are given for the actor/critic networks. The convergence to the closed-loop Stackelberg equilibrium is proven and stability of the system is also guaranteed. A simulation example shows the effectiveness of the new online algorithm.

Collaboration


Dive into Marcus Johnson's collaborations.

Top Co-Authors

Nitin Sharma
University of Pittsburgh

Shubhendu Bhasin
Indian Institute of Technology Delhi

Frank L. Lewis
University of Texas at Arlington