I.M. Ross
Naval Postgraduate School
Publications
Featured research published by I.M. Ross.
IEEE Transactions on Automatic Control | 2006
Qi Gong; Wei Kang; I.M. Ross
We consider the optimal control of feedback linearizable dynamical systems subject to mixed state and control constraints. In general, a linearizing feedback control does not minimize the cost function. Such problems arise frequently in astronautical applications where stringent performance requirements demand optimality over feedback linearizing controls. In this paper, we consider a pseudospectral (PS) method to compute optimal controls. We prove that a sequence of solutions to the PS-discretized constrained problem converges to the optimal solution of the continuous-time optimal control problem under mild and numerically verifiable conditions. The spectral coefficients of the state trajectories provide a practical method to verify the convergence of the computed solution. The proposed ideas are illustrated by several numerical examples.
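The abstract's practical convergence check — watching the spectral coefficients of the computed state trajectories decay — can be illustrated with a toy computation. The sketch below is my own illustration, not the paper's software: it uses Chebyshev–Lobatto points as a stand-in for the Legendre-type grids of PS methods and computes discrete Chebyshev coefficients by plain cosine sums. For a smooth "trajectory" such as exp, the coefficients fall to round-off level, which is the signature of spectral convergence.

```python
import math

def cheb_coeffs(f, N):
    """Discrete Chebyshev coefficients of f on [-1, 1] from samples at the
    N+1 Chebyshev-Lobatto points x_j = cos(pi*j/N) (trapezoid end weights)."""
    xs = [math.cos(math.pi * j / N) for j in range(N + 1)]
    fs = [f(x) for x in xs]
    a = []
    for k in range(N + 1):
        s = 0.0
        for j in range(N + 1):
            w = 0.5 if j in (0, N) else 1.0   # halve the two endpoint terms
            s += w * fs[j] * math.cos(math.pi * k * j / N)
        a.append(2.0 * s / N)
    return a  # interpolant: f(x) ~ a[0]/2 + sum_{k>=1} a[k] T_k(x)

# Smooth trajectory -> coefficients decay to round-off ("converged" grid).
a = cheb_coeffs(math.exp, 16)
```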
Journal of Guidance, Control, and Dynamics | 2008
I.M. Ross; Pooya Sekhavat; Andrew J. Fleming; Qi Gong
Typical optimal feedback controls are nonsmooth functions. Nonsmooth controls raise fundamental theoretical problems on the existence and uniqueness of state trajectories. Many of these problems are frequently addressed in control applications through the concept of a Filippov solution. In recent years, the simpler concept of a π solution has emerged as a practical and powerful means to address these theoretical issues. In this paper, we advance the notion of Carathéodory-π solutions that stem from the equivalence between closed-loop and feedback trajectories. In recognizing that feedback controls are not necessarily closed-form expressions, we develop a sampling theorem that indicates that the Lipschitz constant of the dynamics is a fundamental sampling frequency. These ideas lead to a new set of foundations for achieving feedback wherein optimality principles are interwoven to achieve stability and system performance, while the computation of optimal controls remains at the level of first principles. We demonstrate these principles by way of pseudospectral methods because these techniques can generate Carathéodory-π solutions at a sufficiently fast sampling rate even when implemented in a MATLAB® environment running on legacy computer hardware. To facilitate an exposition of the proposed ideas to a wide audience, we introduce the core principles only and relegate the intricate details to numerous recent references. These principles are then applied to generate pseudospectral feedback controls for the slew maneuvering of NPSAT1, a spacecraft conceived, designed, and built at the Naval Postgraduate School and scheduled to be launched in fall 2007.
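The role of the Lipschitz constant as a fundamental sampling frequency can be seen on a scalar toy system. The sketch below is my own construction, not the paper's: zero-order-hold sampled feedback u = -L·x(t_k) is applied to ẋ = u, and the sampled closed loop contracts only when the sampling period h stays below 2/L, i.e., when the sampling frequency exceeds L/2.

```python
def sampled_response(L, h, steps=200, x0=1.0):
    """Zero-order-hold feedback u = -L*x(t_k) applied to xdot = u.
    Exact inter-sample integration gives x_{k+1} = (1 - L*h) * x_k,
    which contracts iff |1 - L*h| < 1, i.e., iff h < 2/L."""
    x = x0
    for _ in range(steps):
        u = -L * x          # control recomputed at each sample, then held
        x = x + h * u       # exact solution of xdot = u over one interval
    return abs(x)

L = 4.0                     # Lipschitz constant of the toy closed-loop map
slow_enough = sampled_response(L, 0.1)   # h < 2/L: decays
too_slow = sampled_response(L, 0.6)      # h > 2/L: diverges
```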
American Control Conference | 2007
P. Sekhavat; Qi Gong; I.M. Ross
NPSAT1 is a small satellite being built at the Naval Postgraduate School and scheduled to launch in 2007. It primarily employs magnetic sensing and actuation for attitude control. The nature of the in-house fabrication and assembly of the spacecraft requires reliable computational estimation of the difficult-to-measure parameters of the end product. The inherent nonlinear dynamics of the system makes the observer design a challenging problem. This paper presents the successful implementation of the unscented Kalman filter (UKF) for spacecraft parameter estimation. Since a three-axis magnetometer is the only sensor onboard, the UKF algorithm also estimates the system orientation and angular velocity. The unit quaternion constraint is enforced by treating the norm of the quaternions as a dummy measurement. Simulations and ground-test experimental results show the superior performance of the UKF in spacecraft dual state-parameter estimation.
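The dummy-measurement trick can be sketched in a few lines. The function names and magnetometer values below are hypothetical stand-ins, not from the paper: the measurement model is augmented with the quaternion norm, and the corresponding "observed" value is fixed at 1, so each UKF update pulls the quaternion estimate toward the unit sphere.

```python
import math

def augmented_measurement(q, b_body):
    """Hypothetical measurement model: the predicted magnetometer reading in
    the body frame, augmented with the quaternion norm as a dummy output."""
    return b_body + [math.sqrt(sum(qi * qi for qi in q))]

def augmented_observation(b_meas):
    """Matching observation vector: the dummy 'measured' norm is exactly 1,
    which softly enforces the unit-quaternion constraint in the UKF update."""
    return b_meas + [1.0]

q = [0.5, 0.5, 0.5, 0.5]                         # example unit quaternion
z_pred = augmented_measurement(q, [0.2, -0.1, 0.4])
z_meas = augmented_observation([0.19, -0.11, 0.41])
residual = [zm - zp for zm, zp in zip(z_meas, z_pred)]
```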
Conference on Decision and Control | 2006
Qi Gong; I.M. Ross; Wei Kang; Fariba Fahroo
In recent years, a large number of nonlinear optimal control problems have been solved by pseudospectral (PS) methods. In an effort to better understand the PS approach to solving control problems, we present convergence results for problems with mixed state and control constraints. A set of sufficient conditions is proved under which the solution of the discretized optimal control problem converges to the continuous solution. Conditions for the convergence of the duals are described and illustrated. This leads to a clarification of the covector mapping theorem and its connections to constraint qualifications.
International Conference on Advanced Intelligent Mechatronics | 2005
P. Sekhavat; Andrew J. Fleming; I.M. Ross
NPSAT1 is a small satellite being built at the Naval Postgraduate School and due to launch in January 2006. It uses magnetic actuators and a pitch momentum wheel for attitude control. In this paper, a novel time-optimal sampled-data feedback control algorithm is introduced for closed-loop control of NPSAT1 in the presence of disturbances. The feedback law is not analytically explicit; rather, it is obtained by a rapid re-computation of the open-loop time-optimal control at each update instant. The implementation of the proposed controller is based on a shrinking-horizon approach and does not require any advance knowledge of the computation time. Pre-ground-test simulations show that the proposed control scheme performs well in the presence of parameter uncertainties and external disturbance torques.
Journal of Guidance, Control, and Dynamics | 2012
Mark Karpenko; Sagar Bhatt; Nazareth Bedrossian; Andrew J. Fleming; I.M. Ross
This paper describes the design and flight implementation of time-optimal attitude maneuvers performed onboard NASA’s Transition Region and Coronal Explorer spacecraft. Minimum-time reorientation maneuvers have obvious applications for improving the agility of spacecraft systems, yet this type of capability has never before been demonstrated in flight due to the lack of reliable algorithms for generating practical optimal control solutions suitable for flight implementation. Constrained time-optimal maneuvering of a rigid body is studied first, in order to demonstrate the potential for enhancing the performance of the Transition Region and Coronal Explorer spacecraft. Issues related to the experimental flight implementation of time-optimal maneuvers onboard Transition Region and Coronal Explorer are discussed. A description of an optimal control problem that includes practical constraints such as the nonlinear reaction wheel torque-momentum envelope and rate gyro saturation limits is given. The problem is solved using the pseudospectral optimal control theory implemented in the MATLAB® software DIDO. Flight results, presented for a typical large-angle time-optimal reorientation maneuver, show that the maneuvers can be implemented without any modification of the existing spacecraft attitude control system. A clear improvement in spacecraft maneuver performance as compared with conventional eigenaxis maneuvering is demonstrated.
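The nonlinear torque-momentum envelope can be pictured as a pointwise feasibility test along a candidate maneuver. The sketch below is an illustrative stand-in, not the actual TRACE wheel model: the numeric limits, the linear torque derating near momentum saturation, and all names are hypothetical.

```python
def in_wheel_envelope(tau, h, tau_max=0.2, h_max=20.0, alpha=0.01):
    """Hypothetical reaction-wheel envelope check: each wheel's stored
    momentum h_i [N*m*s] must stay below h_max, and the commanded torque
    tau_i [N*m] must stay inside a momentum-dependent bound (available
    torque shrinks as the wheel approaches momentum saturation)."""
    for tau_i, h_i in zip(tau, h):
        if abs(h_i) > h_max:
            return False
        # linearly derated torque limit near momentum saturation
        limit = tau_max * max(0.0, 1.0 - alpha * abs(h_i))
        if abs(tau_i) > limit:
            return False
    return True
```

In an optimal control formulation this test becomes a path constraint enforced at every discretization node, rather than a post-hoc check.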
American Control Conference | 2006
I.M. Ross; Qi Gong; Fariba Fahroo; Wei Kang
Infinite-horizon, nonlinear, optimal feedback control is one of the fundamental problems in control theory. In this paper we propose a solution for this problem based on recent progress in real-time optimal control. The basic idea is to perform feedback implementations through a domain transformation technique and a Radau-based pseudospectral method. Two algorithms are considered: free sampling frequency and fixed sampling frequency. For both algorithms, a theoretical analysis of the stability of the closed-loop system is provided. Numerical simulations with random initial conditions demonstrate the techniques for a flexible robot arm and a benchmark inverted pendulum problem.
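The two sampling schemes can be caricatured as follows. A trivial stabilizing law stands in for the real-time pseudospectral solve; solve_ocp and all constants are illustrative, not from the paper. The fixed-rate loop recomputes the control on a fixed clock; the free-rate loop samples whenever the solver finishes.

```python
import time

def solve_ocp(x):
    """Stand-in for the real-time optimal-control solve; here simply the
    stabilizing law u = -2x for xdot = u, purely for illustration."""
    return -2.0 * x

def fixed_rate_loop(x, h, steps):
    """Fixed sampling frequency: the control is recomputed every h seconds
    and held (zero-order hold) between samples."""
    for _ in range(steps):
        u = solve_ocp(x)
        x = x + h * u        # exact integration of xdot = u over [t, t+h]
    return x

def free_rate_loop(x, tol=1e-6, h_floor=0.1, max_iter=1000):
    """Free sampling frequency: the next sample occurs whenever the solver
    finishes; the solve time is modeled by the wall-clock cost (with a
    floor so the toy example behaves deterministically)."""
    for _ in range(max_iter):
        t0 = time.perf_counter()
        u = solve_ocp(x)
        h = max(time.perf_counter() - t0, h_floor)  # solver-limited period
        x = x + h * u
        if abs(x) < tol:
            break
    return x
```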
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2006
I.M. Ross; Pooya Sekhavat; Andrew J. Fleming; Qi Gong; Wei Kang
Suppose optimal open-loop controls could be computed in real time. This implies optimal feedback control. These controls are, typically, nonsmooth. Nonsmooth controls raise fundamental theoretical problems on the existence and uniqueness of feedback solutions. A simple, yet powerful, approach to address these theoretical issues is the concept of a π-solution that is closely linked to the practical implementation of zero-order-hold control sampling. In other words, even traditional feedback controls involve open-loop controls through the process of sampling. In this paper, we advance the notion of Carathéodory-π solutions.
American Control Conference | 2001
Hui Yan; Fariba Fahroo; I.M. Ross
We develop state feedback control laws for linear time-varying systems with quadratic cost criteria by an indirect Legendre pseudospectral method. This method reduces the linear two-point boundary value problem to a system of algebraic equations by way of a differentiation matrix. The algebraic system is solved to generate discrete linear transformations between the states and controls at the Legendre-Gauss-Lobatto points. Since these linear transformations involve simple matrix operations, they can be computed rapidly and efficiently. Two methods are proposed: one circumvents solving the differential Riccati equation by a discrete solution of the boundary value problem, and the other generates a predictor feedback law without the use of transition matrices. Thus, our methods obviate the need for the time-intensive backward integration of the matrix Riccati differential equation or the inversion of ill-conditioned transition matrices. A numerical example illustrates the techniques and demonstrates the accuracy and efficiency of these controllers.
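The differentiation-matrix mechanics can be sketched in a few lines. The reconstruction below is illustrative, not the authors' code: it builds the matrix on the five-point Legendre-Gauss-Lobatto grid (N = 4) from barycentric weights, so it differentiates polynomials of degree up to 4 exactly at the grid points. In the paper's setting, this matrix replaces the time derivative in the two-point boundary value problem, turning it into algebra.

```python
import math

# Legendre-Gauss-Lobatto points for N = 4 (five nodes on [-1, 1])
nodes = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]

def diff_matrix(xs):
    """Differentiation matrix for Lagrange interpolation on nodes xs, built
    from barycentric weights: D[i][j] = (w_j/w_i)/(x_i - x_j) for i != j,
    with diagonal entries fixed so that each row sums to zero (constants
    differentiate to zero)."""
    n = len(xs)
    w = []
    for j in range(n):
        prod = 1.0
        for k in range(n):
            if k != j:
                prod *= (xs[j] - xs[k])
        w.append(1.0 / prod)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i][j] = (w[j] / w[i]) / (xs[i] - xs[j])
        D[i][i] = -sum(D[i][j] for j in range(n) if j != i)
    return D

D = diff_matrix(nodes)
# Exact grid differentiation of x^2: (D @ [x_j^2])_i equals 2*x_i
deriv = [sum(D[i][j] * nodes[j] ** 2 for j in range(5)) for i in range(5)]
```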
Journal of Guidance, Control, and Dynamics | 2008
Fariba Fahroo; I.M. Ross
Solving an optimal control problem using a digital computer implies discrete approximations. Since the 1960s, there have been well-documented [1–3] naive applications of Pontryagin’s principle in the discrete domain. Although its incorrect applications continue to this day, the origin of the naivete is quite understandable because one has a reasonable expectation of the validity of Pontryagin’s principle within a discrete domain. That an application of the Hamiltonian minimization condition is not necessarily valid in a discrete domain [1,4] opens up a vast array of questions in theory and computation [2,5]. These questions continue to dominate the meaning and validity of discrete approximations and computational solutions to optimal control problems [6–10]. Among these questions is the convergence of discrete approximations in optimal control. About the year 2000, there were a number of key discoveries on the convergence of discrete approximations [9,11–14]. Among other things, Hager [9] showed, by way of a counterexample, that a convergent Runge–Kutta (RK) method may not converge. This seemingly contradictory result is actually quite simple to explain [10]: long-established convergence results on ordinary differential equations do not necessarily apply to optimal control problems. Thus, an RK method that is convergent for an ordinary differential equation may not converge when applied to an optimal control problem. Not only does this explain the possibility of erroneous results obtained through computation, it also explains why computational optimal control has heretofore been such a difficult problem. The good news is that if a proper RK method is used (those developed by Hager [9]), convergence can be assured under a proper set of conditions. Whereas RK methods have a long history of development for ordinary differential equations, pseudospectral (PS) methods have had a relatively short history of development for optimal control.
In parallel to Hager’s [9] discovery on RK methods, recent developments [8,15–17] show that the convergence theory for PS approximations in optimal control is sharply different from that used in solving partial differential equations. Furthermore, the convergence theory for PS approximations is also different from the one used in RK approximations to optimal control. A critical examination of convergence of approximations using the new theories developed in recent years has not only begun to reveal the proper computational techniques for solving optimal control problems, it has also exposed the fallacy of long-held tacit assumptions. For instance, Ross [18] showed, by way of a simple counterexample, that an indirect method generates the wrong answer, whereas a direct method generates the correct solution. This counterexample exposed the fallacy of the long-held belief that indirect methods are more accurate than direct methods. In this Note, we show, by way of another counterexample, that the convergence of the costates does not imply convergence of the control. This result appears to have more impact on the convergence of PS approximations in optimal control than the convergence of RK approximations because of the significant differences between the two theories; consequently, we restrict our attention to the impact of this result on PS methods, noting, nonetheless, the generality of this assertion.