Michael Jason Knap
Tennessee State University
Publications
Featured research published by Michael Jason Knap.
Applied Mathematics and Computation | 2012
S. Sathananthan; Michael Jason Knap; A. Strong; Lee H. Keel
A problem of robust state feedback stability and stabilization of nonlinear discrete-time stochastic processes is considered. The linear rate vector of a discrete-time system is perturbed by a nonlinear function that satisfies a quadratic constraint. Our objective is to show how linear constant feedback laws can be formulated to stabilize this class of nonlinear discrete-time systems and, at the same time, to maximize the bound on the nonlinear perturbing function that the system can tolerate without becoming unstable. The state-dependent diffusion is modeled by a sequence of independent and identically distributed normal random variables. The new formulation provides a suitable setting for robust stabilization of nonlinear discrete-time systems whose underlying deterministic systems satisfy the generalized matching conditions. Our method, which is based on linear matrix inequalities (LMIs), is distinct from existing robust control and absolute stability techniques. Examples are given to demonstrate the obtained results.
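For orientation, the nominal (perturbation-free, noise-free) part of such a design reduces to a standard LMI feasibility problem. The CVXPY sketch below computes a stabilizing constant gain for a hypothetical pair (A, B); it is not the paper's LMI, which additionally accounts for the quadratically constrained nonlinearity and the diffusion term.

```python
import cvxpy as cp
import numpy as np

# Hypothetical plant data (not from the paper): x_{k+1} = A x_k + B u_k + ...
A = np.array([[1.1, 0.3], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
n, m = B.shape

Q = cp.Variable((n, n), symmetric=True)   # Q = P^{-1}
Y = cp.Variable((m, n))                   # Y = K Q

# Schur-complement form of the discrete-time Lyapunov inequality for A + B K:
# feasibility of this LMI with Q > 0 yields a stabilizing gain K = Y Q^{-1}.
lmi = cp.bmat([[Q, (A @ Q + B @ Y).T],
               [A @ Q + B @ Y, Q]])
constraints = [Q >> 1e-6 * np.eye(n), lmi >> 1e-6 * np.eye(2 * n)]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(Q.value)
print("Feedback gain K =", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```

The change of variables Q = P^{-1}, Y = KQ is what keeps the synthesis condition linear in the decision variables.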
Stochastic Analysis and Applications | 2013
S. Sathananthan; Michael Jason Knap; Lee H. Keel
A problem of robust guaranteed cost control of stochastic discrete-time systems with parametric uncertainties under Markovian switching is considered. The control is simultaneously applied to both the random and the deterministic components of the system. The noise (the random) term depends on both the states and the control input. The jump Markovian switching is modeled by a discrete-time Markov chain and the noise or stochastic environmental disturbance is modeled by a sequence of identically independently normally distributed random variables. Using linear matrix inequalities (LMIs) approach, the robust quadratic stochastic stability is obtained. The proposed control law for this quadratic stochastic stabilization result depended on the mode of the system. This control law is developed such that the closed-loop system with a cost function has an upper bound under all admissible parameter uncertainties. The upper bound for the cost function is obtained as a minimization problem. Two numerical examples are given to demonstrate the potential of the proposed techniques and obtained results.
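For context, a guaranteed cost objective in this literature typically takes the quadratic form shown below; the weights Q, R and the mode-dependent matrix P(r0) are illustrative notation, not the paper's exact statement.

```latex
% Illustrative form of a guaranteed cost bound with mode-dependent gains
% u_k = K(r_k) x_k (generic notation, not the paper's exact statement):
\[
  J \;=\; \mathbb{E}\!\left[\sum_{k=0}^{\infty}
        \bigl(x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k\bigr)\right]
  \;\le\; x_0^{\top} P(r_0)\, x_0 ,
  \qquad P(i) \succ 0 \ \text{for every mode } i .
\]
```

The LMI conditions guarantee such a bound for all admissible uncertainties, and the minimization problem mentioned in the abstract then tightens it, for example by minimizing the right-hand side over the feasible P(i).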
Advances in Computing and Communications | 2010
S. Sathananthan; Michael Jason Knap; Lee H. Keel
A problem of state feedback stabilization of discrete-time stochastic processes under Markovian switching and random diffusion (noise) is considered. The jump Markovian switching is modeled by a discrete-time Markov chain. The control input is applied simultaneously to both the rate vector and the diffusion term. Sufficient conditions for stochastic stability, based on linear matrix inequalities (LMIs), are obtained. The robustness of this stability concept against all admissible uncertainties is also investigated. An example is given to demonstrate the obtained results.
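The mode-coupled LMI test underlying results of this kind can be sketched as follows for the unforced closed-loop case. The two modes and the transition matrix are made up, and this is the standard mean-square stability test for discrete-time Markov jump linear systems rather than the paper's specific synthesis conditions.

```python
import cvxpy as cp
import numpy as np

# Illustrative two-mode system (made-up data, not from the paper):
# x_{k+1} = A[r_k] x_k, with r_k a Markov chain with transition matrix Pi.
A = [np.array([[0.8, 0.4], [0.0, 0.7]]),
     np.array([[0.5, -0.3], [0.2, 0.9]])]
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])
N = len(A)
n = A[0].shape[0]

# Coupled LMIs: A_i' (sum_j Pi[i,j] P_j) A_i - P_i < 0 and P_i > 0 for every
# mode i; feasibility is the standard test for mean-square stability of a
# discrete-time Markov jump linear system.
P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
cons = []
for i in range(N):
    Pbar = sum(Pi[i, j] * P[j] for j in range(N))
    cons += [P[i] >> 1e-6 * np.eye(n),
             A[i].T @ Pbar @ A[i] - P[i] << -1e-6 * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("mean-square stable?", prob.status == cp.OPTIMAL)
```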
Stochastic Analysis and Applications | 2013
S. Sathananthan; Michael Jason Knap; Lee H. Keel
This article presents a new approach to robust quadratic stabilization of nonlinear stochastic systems. The linear rate vector of a stochastic system is perturbed by a nonlinear function that satisfies a quadratic constraint. Our objective is to show how linear constant feedback laws can be formulated to stabilize this class of stochastic systems and, at the same time, to maximize the bound on the nonlinear perturbing function that the system can tolerate without becoming unstable. The new formulation provides a suitable setting for robust stabilization of nonlinear stochastic systems whose underlying deterministic systems satisfy the generalized matching conditions. Our sufficient conditions are written in matrix form and are verified by solving linear matrix inequalities (LMIs), which gives a significant computational advantage over existing techniques. Examples are given to demonstrate the results.
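For reference, the quadratic constraint referred to in the abstract is usually stated as below; the symbols alpha and H are illustrative and not necessarily the paper's notation.

```latex
% Generic quadratic constraint on the nonlinear perturbation f(x)
% (illustrative symbols, not the paper's exact notation):
\[
  f(x)^{\top} f(x) \;\le\; \alpha^{2}\, x^{\top} H^{\top} H\, x .
\]
% Maximizing the tolerable bound \alpha is commonly recast as minimizing
% \gamma = 1/\alpha^{2} subject to an LMI in the Lyapunov and gain
% variables, which keeps the problem convex.
```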
IEEE Transactions on Automatic Control | 2013
Michael Jason Knap; Lee H. Keel; Shankar P. Bhattacharyya
In this paper, we demonstrate how the classical Gauss-Lucas theorem can aid in the determination of stabilizing controllers for feedback systems. Moreover, we show how to apply the Gauss-Lucas theorem in the construction of new necessary conditions for Schur stability; the application is not limited to Schur stability, however.
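The Gauss-Lucas theorem states that the roots of p' lie in the convex hull of the roots of p. Since the open unit disk is convex, Schur stability of p forces Schur stability of p', which yields a cheap necessary test. A minimal numerical illustration, not taken from the paper:

```python
import numpy as np

def schur_necessary_via_gauss_lucas(coeffs):
    """Necessary condition from Gauss-Lucas: if p is Schur stable, then p'
    (whose roots lie in the convex hull of p's roots) is Schur stable too.
    Returning False certifies that p is NOT Schur stable; returning True is
    inconclusive on its own, since the condition is only necessary."""
    dcoeffs = np.polyder(coeffs)
    return bool(np.max(np.abs(np.roots(dcoeffs))) < 1.0)

# p(z) = z^3 - 1.2 z^2 + 0.5 z - 0.3 (made-up example)
p = [1.0, -1.2, 0.5, -0.3]
print("derivative Schur?", schur_necessary_via_gauss_lucas(p))
print("roots of p:", np.roots(p))
```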
IFAC Proceedings Volumes | 2011
S. Sathananthan; Michael Jason Knap; Lee H. Keel
A problem of H∞ disturbance attenuation for uncertain discrete-time stochastic processes under Markovian switching and random diffusion (noise), subject to norm-bounded parameter uncertainties, is considered. The jump Markovian switching is modeled by a discrete-time Markov chain. The control input is applied to the rate vector, and the state-dependent diffusion term is modeled by a sequence of independent and identically distributed normal random variables. The objective is to design a switched state feedback control law that guarantees the asymptotic stability of the closed-loop system with disturbance attenuation level γ. The control law is found as a solution of a set of linear matrix inequalities (LMIs).
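The single-mode, nominal building block behind such designs is the discrete-time bounded real lemma. The CVXPY sketch below computes the smallest attenuation level γ for a hypothetical system (A, B, C, D); the paper's conditions additionally couple the Markov modes and account for the uncertainty and diffusion terms.

```python
import cvxpy as cp
import numpy as np

# Discrete-time bounded real lemma, nominal single-mode sketch (hypothetical
# data): for x_{k+1} = A x_k + B w_k, z_k = C x_k + D w_k,
# ||T_zw||_inf < gamma  iff  there is P > 0 with
#   [ A'PA - P + C'C      A'PB + C'D          ]
#   [ B'PA + D'C          B'PB + D'D - g^2 I  ]  < 0 .
A = np.array([[0.5, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, q = B.shape

P = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable(nonneg=True)            # gamma^2
M = cp.bmat([[A.T @ P @ A - P + C.T @ C, A.T @ P @ B + C.T @ D],
             [B.T @ P @ A + D.T @ C,     B.T @ P @ B + D.T @ D - g2 * np.eye(q)]])
prob = cp.Problem(cp.Minimize(g2),
                  [P >> 1e-6 * np.eye(n), M << -1e-9 * np.eye(n + q)])
prob.solve(solver=cp.SCS)
print("attenuation level gamma ~", np.sqrt(g2.value))
```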
Advances in Computing and Communications | 2012
Michael Jason Knap; Lee H. Keel; Shankar P. Bhattacharyya
In this technical note, we demonstrate how the classical Gauss-Lucas theorem can aid in the determination of stabilizing controllers for feedback systems. Moreover, we show how to apply the Gauss-Lucas theorem in the construction of new necessary conditions for Schur and Hurwitz stability. The results are used to derive new bounds on the stability margins and performance of control systems.
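The same convex-hull argument works for any convex stability region, for example the open left half-plane, so it also gives a quick rejection test for Hurwitz stability. A minimal sketch with a made-up polynomial:

```python
import numpy as np

def hurwitz_necessary_via_gauss_lucas(coeffs):
    """Quick rejection test from Gauss-Lucas: the open left half-plane is
    convex, so if p is Hurwitz then every root of p' lies there as well.
    A root of p' with nonnegative real part certifies that p is NOT Hurwitz;
    passing the test is only necessary, not sufficient."""
    return bool(np.max(np.roots(np.polyder(coeffs)).real) < 0.0)

# Made-up example: p(s) = s^3 + 2 s^2 + 3 s + 10 passes the derivative test
# (this prints True) yet is not Hurwitz, since the Routh condition 2*3 > 10
# fails -- which is exactly why the condition is only necessary.
print(hurwitz_necessary_via_gauss_lucas([1.0, 2.0, 3.0, 10.0]))
```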
Advances in Computing and Communications | 2010
Michael Jason Knap; Lee H. Keel; Shankar P. Bhattacharyya
Recently, a computationally efficient method was developed to analyze the stability of a control system whose characteristic polynomial has coefficients that depend polynomially on the parameters of interest. In this paper, we extend this result to families of complex polynomials, which in turn solves several important performance attainment problems, because problems such as the phase margin, the H∞ margin, and the SPR problem can be cast as robust stability problems for complex polynomials. The results are illustrated by examples.
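To make the reduction concrete: for a loop transfer function L(s) = n(s)/d(s), the closed loop tolerates a phase perturbation θ exactly when d(s) + e^{jθ} n(s) is Hurwitz, so the phase margin question becomes robust stability of a one-parameter family of complex polynomials. The brute-force grid check below illustrates this for a hypothetical plant L(s) = 5/(s(s+1)(s+2)); it is not the computationally efficient method of the paper.

```python
import numpy as np

# Hypothetical loop: n(s) = 5, d(s) = s^3 + 3 s^2 + 2 s.
n_poly = np.array([5.0])
d_poly = np.array([1.0, 3.0, 2.0, 0.0])

def is_hurwitz(coeffs, tol=1e-9):
    return np.max(np.roots(coeffs).real) < -tol

def char_poly(theta):
    # pad n(s) to the length of d(s), then form d(s) + e^{j*theta} n(s)
    n_pad = np.concatenate([np.zeros(len(d_poly) - len(n_poly)), n_poly])
    return d_poly.astype(complex) + np.exp(1j * theta) * n_pad

# For real n(s), d(s) the -theta case is symmetric, so sweeping theta >= 0 suffices.
thetas = np.radians(np.arange(0.0, 180.0, 0.25))
stable = [is_hurwitz(char_poly(t)) for t in thetas]
first_unstable = next((i for i, ok in enumerate(stable) if not ok), len(stable))
print("estimated phase margin ~ %.2f deg" % np.degrees(thetas[first_unstable - 1]))
```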
Stochastic Analysis and Applications | 2013
S. Sathananthan; Michael Jason Knap; Lee H. Keel
A problem of state feedback stabilization of discrete-time stochastic processes under Markovian switching and random diffusion (noise) is considered. The jump Markovian switching is modeled by a discrete-time Markov chain. The control input is applied simultaneously to both the rate vector and the diffusion term. Sufficient conditions for stochastic stability, based on linear matrix inequalities (LMIs), are obtained. The robustness of this stability concept against all admissible uncertainties is also investigated. An example is given to demonstrate the obtained results.
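As a complement to the LMI conditions, mean-square stability of a closed-loop jump system can also be examined empirically by Monte Carlo simulation. The sketch below uses made-up modes, transition probabilities, and noise level, and is purely illustrative rather than anything taken from the paper.

```python
import numpy as np

# Simulate x_{k+1} = A[r_k] x_k + w_k with a two-state Markov chain r_k and
# i.i.d. Gaussian noise w_k, and track the averaged squared norm. Under
# mean-square stability this average settles to a bounded level driven by
# the noise rather than growing.
rng = np.random.default_rng(0)
A = [np.array([[0.8, 0.4], [0.0, 0.7]]),
     np.array([[0.5, -0.3], [0.2, 0.9]])]
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])
n_runs, horizon, noise_std = 2000, 60, 0.05

sq_norm = np.zeros(horizon)
for _ in range(n_runs):
    x = np.array([1.0, -1.0])
    mode = 0
    for k in range(horizon):
        sq_norm[k] += x @ x
        x = A[mode] @ x + noise_std * rng.standard_normal(2)
        mode = rng.choice(2, p=Pi[mode])
print("E||x_k||^2 at k = 0, 20, 40, 59:",
      np.round(sq_norm[[0, 20, 40, 59]] / n_runs, 4))
```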
Conference on Decision and Control | 2010
S. Sathananthan; Michael Jason Knap; Lee H. Keel
This paper presents a new approach to robust quadratic stabilization of nonlinear stochastic systems. The linear rate vector of a stochastic system is perturbed by a nonlinear function that satisfies a quadratic constraint. Our objective is to show how linear constant feedback laws can be formulated to stabilize this class of stochastic systems and, at the same time, to maximize the bound on the nonlinear perturbing function that the system can tolerate without becoming unstable. The control input is applied simultaneously to both the rate vector and the diffusion term. The new formulation provides a suitable setting for robust stabilization of nonlinear stochastic systems whose underlying deterministic systems satisfy the generalized matching conditions. Examples are given to demonstrate the results.
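Because the control here enters both the rate vector and the diffusion, even the analysis step differs from the deterministic case. The CVXPY sketch below checks mean-square (quadratic) stability of a closed loop dx = (A+BK)x dt + (C+DK)x dw for a candidate gain K, with made-up data; it is an analysis check, not the paper's synthesis conditions.

```python
import cvxpy as cp
import numpy as np

# For the Ito system dx = (A + B K) x dt + (C + D K) x dw, mean-square
# (quadratic) stability holds iff there is P > 0 with
#   (A+BK)'P + P(A+BK) + (C+DK)'P(C+DK) < 0,
# which shows how a gain acting on both the drift and the diffusion enters
# the analysis LMI.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = 0.3 * np.eye(2)
D = np.array([[0.0], [0.2]])
K = np.array([[-1.0, -1.5]])          # candidate feedback gain (made up)

Acl = A + B @ K
Ccl = C + D @ K
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
lmi = Acl.T @ P + P @ Acl + Ccl.T @ P @ Ccl
prob = cp.Problem(cp.Minimize(0),
                  [P >> np.eye(n), lmi << -1e-6 * np.eye(n)])
prob.solve(solver=cp.SCS)
print("mean-square stable with this K?", prob.status == cp.OPTIMAL)
```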