Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James B. Rawlings is active.

Publications


Featured research published by James B. Rawlings.


Automatica | 2000

Constrained model predictive control: Stability and optimality (survey paper)

David Q. Mayne; James B. Rawlings; Christopher V. Rao; Pierre O. M. Scokaert

Model predictive control is a form of control in which the current control action is obtained by solving, at each sampling instant, a finite horizon open-loop optimal control problem, using the current state of the plant as the initial state; the optimization yields an optimal control sequence and the first control in this sequence is applied to the plant. An important advantage of this type of control is its ability to cope with hard constraints on controls and states. It has, therefore, been widely applied in petro-chemical and related industries where satisfaction of constraints is particularly important because efficiency demands operating points on or close to the boundary of the set of admissible states and controls. In this review, we focus on model predictive control of constrained systems, both linear and nonlinear and discuss only briefly model predictive control of unconstrained nonlinear and/or time-varying systems. We concentrate our attention on research dealing with stability and optimality; in these areas the subject has developed, in our opinion, to a stage where it has achieved sufficient maturity to warrant the active interest of researchers in nonlinear control. We distill from an extensive literature essential principles that ensure stability and use these to present a concise characterization of most of the model predictive controllers that have been proposed in the literature. In some cases the finite horizon optimal control problem solved on-line is exactly equivalent to the same problem with an infinite horizon; in other cases it is equivalent to a modified infinite horizon optimal control problem. In both situations, known advantages of infinite horizon optimal control accrue.
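As a hedged illustration of the setup described in this abstract (generic notation, not taken verbatim from the paper): at the current state x the controller solves a finite-horizon problem of the form

```latex
% Finite-horizon open-loop optimal control problem solved at each sampling instant
% (generic notation: f is the plant model, \ell the stage cost, V_f a terminal cost)
\min_{u_0,\dots,u_{N-1}} \; V_f(x_N) + \sum_{k=0}^{N-1} \ell(x_k, u_k)
\quad \text{s.t.} \quad x_0 = x, \;\; x_{k+1} = f(x_k, u_k), \;\; u_k \in \mathbb{U}, \;\; x_k \in \mathbb{X}
```

and applies only the first element u_0^* of the optimal sequence before re-solving at the next sampling instant. The stability principles the survey distills typically concern how the terminal ingredients (the terminal cost V_f and any terminal constraint) are chosen.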


IEEE Control Systems Magazine | 2000

Tutorial overview of model predictive control

James B. Rawlings

The paper provides a reasonably accessible and self-contained tutorial exposition on model predictive control (MPC). It is aimed at readers with control expertise, particularly practitioners, who wish to broaden their perspective in the MPC area of control technology. We introduce the concepts, provide a framework in which the critical issues can be expressed and analyzed, and point out how MPC allows practitioners to address the trade-offs that must be considered in implementing a control technology.


IEEE Transactions on Automatic Control | 1999

Suboptimal model predictive control (feasibility implies stability)

Pierre O. M. Scokaert; David Q. Mayne; James B. Rawlings

Practical difficulties involved in implementing stabilizing model predictive control laws for nonlinear systems are well known. Stabilizing formulations of the method normally rely on the assumption that global and exact solutions of nonconvex, nonlinear optimization problems are possible in limited computational time. In the paper, we first establish conditions under which suboptimal model predictive control (MPC) controllers are stabilizing; the conditions are mild, holding out the hope that many existing controllers remain stabilizing even if optimality is lost. Second, we present and analyze two suboptimal MPC schemes that are guaranteed to be stabilizing, provided an initial feasible solution is available, and for which the computational requirements are more reasonable.
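A hedged sketch of the flavor of such a condition (generic notation, not the paper's exact statement): given the sequence u = {u_0, ..., u_{N-1}} applied at state x, a warm start at the successor state x^+ = f(x, u_0) is formed by shifting the sequence and appending a terminal control, and the possibly suboptimal iterate returned by the optimizer is only required to do at least as well as that warm start.

```latex
% Warm start at x^+ (shift the previous solution, append the terminal control \kappa_f)
\tilde{\mathbf{u}} = \{ u_1, \dots, u_{N-1}, \kappa_f(x_N) \}
% Acceptance condition on the (possibly suboptimal) sequence \mathbf{u}^+
V_N(x^+, \mathbf{u}^+) \;\le\; V_N(x^+, \tilde{\mathbf{u}})
```

Under standard terminal conditions this yields a cost decrease of roughly V_N(x^+, u^+) <= V_N(x, u) - ell(x, u_0), which is the property the stability argument can rely on instead of optimality itself.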


Journal of Chemical Physics | 2002

Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics

Eric L. Haseltine; James B. Rawlings

Exact methods are available for the simulation of isothermal, well-mixed stochastic chemical kinetics. As increasingly complex physical systems are modeled, however, these methods become difficult to solve because the computational burden scales with the number of reaction events. This paper addresses one aspect of this problem: the case in which reacting species fluctuate by different orders of magnitude. By partitioning the system into subsets of “fast” and “slow” reactions, it is possible to bound the computational load by approximating “fast” reactions either deterministically or as Langevin equations. This paper provides a theoretical background for such approximations and outlines strategies for computing these approximations. Two motivating examples drawn from the fields of particle technology and biotechnology illustrate the accuracy and computational efficiency of these approximations.
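For context, here is a minimal sketch of the exact method referred to above (the Gillespie stochastic simulation algorithm), whose event-by-event cost is what the fast/slow partitioning avoids. The function name and the toy reaction are my own illustration, not taken from the paper.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensities, t_end, rng=None):
    """Exact stochastic simulation (Gillespie SSA) for a well-mixed system.

    x0           : initial copy numbers, shape (n_species,)
    stoich       : stoichiometry matrix, shape (n_reactions, n_species)
    propensities : function x -> reaction rates, shape (n_reactions,)

    Every reaction event is simulated individually, so the cost grows with the
    number of events -- the bottleneck that approximating "fast" reactions
    deterministically or with Langevin equations is meant to avoid.
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break                              # no reaction can fire
        tau = rng.exponential(1.0 / a0)        # time to the next reaction event
        if t + tau > t_end:
            break
        t += tau
        r = rng.choice(len(a), p=a / a0)       # index of the reaction that fires
        x += stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy usage: A -> B with rate k*A (hypothetical example)
stoich = np.array([[-1, 1]])
k = 0.5
ts, xs = gillespie_ssa([100, 0], stoich, lambda x: np.array([k * x[0]]), t_end=10.0)
```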


IEEE Transactions on Control Systems Technology | 2008

Distributed MPC Strategies With Application to Power System Automatic Generation Control

Aswin N. Venkat; Ian A. Hiskens; James B. Rawlings; Stephen J. Wright

A distributed model predictive control (MPC) framework, suitable for controlling large-scale networked systems such as power systems, is presented. The overall system is decomposed into subsystems, each with its own MPC controller. These subsystem-based MPCs work iteratively and cooperatively towards satisfying systemwide control objectives. If available computational time allows convergence, the proposed distributed MPC framework achieves performance equivalent to centralized MPC. Furthermore, the distributed MPC algorithm is feasible and closed-loop stable under intermediate termination. Automatic generation control (AGC) provides a practical example for illustrating the efficacy of the proposed distributed MPC framework.
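As a stripped-down illustration of the cooperative iteration described above (a static quadratic toy problem of my own, not the paper's full dynamic algorithm): each agent minimizes the systemwide cost over its own variable with the others held at the previous iterate, and the result is convex-combined with the previous iterate, so stopping after any number of iterations still yields a usable input while iterating to convergence recovers the centralized optimum.

```python
import numpy as np

# Toy cooperative iteration on a shared quadratic cost (illustration only):
#   V(u) = 0.5 * u^T H u + g^T u, with agent i owning variable u[i].
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # positive-definite coupling between the two agents
g = np.array([-1.0, 2.0])
w = np.array([0.5, 0.5])        # convex-combination weights (sum to one across agents)
u = np.zeros(2)                 # initial iterate

for _ in range(50):
    u_new = u.copy()
    for i in range(2):
        j = 1 - i
        # agent i's exact minimizer of the *systemwide* cost over u[i], u[j] held fixed
        u_new[i] = -(g[i] + H[i, j] * u[j]) / H[i, i]
    u = w * u_new + (1.0 - w) * u   # convex combination with the previous iterate

print(u, np.linalg.solve(H, -g))    # iterate vs. centralized optimum
```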


Journal of Optimization Theory and Applications | 1998

Application of interior-point methods to model predictive control

Christopher V. Rao; Stephen J. Wright; James B. Rawlings

We present a structured interior-point method for the efficient solution of the optimal control problem in model predictive control. The cost of this approach is linear in the horizon length, compared with cubic growth for a naive approach. We use a discrete-time Riccati recursion to solve the linear equations efficiently at each iteration of the interior-point method, and show that this recursion is numerically stable. We demonstrate the effectiveness of the approach by applying it to three process control problems.
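As an illustration of the structure being exploited (my sketch of a standard backward Riccati recursion, not the paper's interior-point implementation): the work per stage is independent of the horizon length, which is what gives the linear-in-horizon cost claimed above, in contrast to treating the stacked problem as one dense system.

```python
import numpy as np

def riccati_lqr(A, B, Q, R, P_N, N):
    """Backward discrete-time Riccati recursion for a finite-horizon LQR.

    Returns the feedback gains K_0 .. K_{N-1}. Each stage costs a constant
    amount of work, so the total cost grows linearly with the horizon N.
    """
    P = P_N
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)       # K_k
        P = Q + A.T @ P @ A - A.T @ P @ B @ K     # P_k
        gains.append(K)
    return list(reversed(gains))

# Toy usage with a hypothetical 2-state, 1-input system
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); P_N = np.eye(2)
K = riccati_lqr(A, B, Q, R, P_N, N=30)
```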


Automatica | 2001

Constrained linear state estimation - a moving horizon approach (brief paper)

Christopher V. Rao; James B. Rawlings; Jay H. Lee

This article considers moving horizon strategies for constrained linear state estimation. Additional information for estimating state variables from output measurements is often available in the form of inequality constraints on states, noise, and other variables. Formulating a linear state estimation problem with inequality constraints, however, prevents recursive solutions such as Kalman filtering, and, consequently, the estimation problem grows with time as more measurements become available. To bound the problem size, we explore moving horizon strategies for constrained linear state estimation. In this work we discuss some practical and theoretical properties of moving horizon estimation. We derive sufficient conditions for the stability of moving horizon state estimation with linear models subject to constraints on the estimate. We also discuss smoothing strategies for moving horizon estimation. Our framework is solely deterministic.
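A generic sketch of such a moving horizon estimation problem (notation and weights illustrative, not copied from the article): over the most recent N measurements one solves

```latex
% Moving horizon estimation over the window k = T-N, ..., T-1 (generic notation)
% \Gamma_{T-N} is an arrival cost summarizing the data dropped from the window
\min_{\hat{x}_{T-N},\, \hat{w}_{T-N},\dots,\hat{w}_{T-1}} \;
  \Gamma_{T-N}(\hat{x}_{T-N})
  + \sum_{k=T-N}^{T-1} \Bigl( \hat{w}_k^\top Q^{-1} \hat{w}_k + \hat{v}_k^\top R^{-1} \hat{v}_k \Bigr)
\quad \text{s.t.} \quad
  \hat{x}_{k+1} = A \hat{x}_k + \hat{w}_k, \quad
  \hat{v}_k = y_k - C \hat{x}_k, \quad
  \hat{x}_k \in \mathbb{X},\; \hat{w}_k \in \mathbb{W},\; \hat{v}_k \in \mathbb{V}
```

The inequality constraints are what rule out a purely recursive Kalman-filter solution; the arrival cost (and the smoothing strategies mentioned in the abstract) is how the problem is kept from growing as more measurements arrive.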


Archive | 1999

Nonlinear Predictive Control and Moving Horizon Estimation — An Introductory Overview

Frank Allgöwer; T. A. Badgwell; J. S. Qin; James B. Rawlings; Stephen J. Wright

In the past decade model predictive control (MPC) has become a preferred control strategy for a large number of processes. The main reasons for this preference include the ability to handle constraints in an optimal way and the flexible formulation in the time domain. Linear MPC schemes, i.e. MPC schemes for which the prediction is based on a linear description of the plant, are by now routinely used in a number of industrial sectors and the underlying control theoretic problems, like stability, are well studied. Nonlinear model predictive control (NMPC), i.e. MPC based on a nonlinear plant description, has only emerged in the past decade and the number of reported industrial applications is still fairly low. Because of its additional ability to take process nonlinearities into account, expectations on this control methodology are high.


IEEE Transactions on Automatic Control | 1998

Constrained linear quadratic regulation

Pierre O. M. Scokaert; James B. Rawlings

The paper is a contribution to the theory of the infinite-horizon linear quadratic regulator (LQR) problem subject to inequality constraints on the inputs and states, extending an approach first proposed by Sznaier and Damborg (1987). A solution algorithm is presented, which requires solving a finite number of finite-dimensional positive definite quadratic programs. The constrained LQR outlined does not feature the undesirable mismatch between open-loop and closed-loop nominal system trajectories, which is present in the other popular forms of model predictive control (MPC) that can be implemented with a finite quadratic programming algorithm. The constrained LQR is shown to be both optimal and stabilizing. The solution algorithm is guaranteed to terminate in finite time with a computational cost that has a reasonable upper bound compared to the minimal cost for computing the optimal solution. Inherent to the approach is the removal of a tuning parameter, the control horizon, which is present in other MPC approaches and for which no reliable tuning guidelines are available. Two examples are presented that compare constrained LQR and two other popular forms of MPC. The examples demonstrate that constrained LQR achieves significantly better performance than the other forms of MPC on some plants, and the computational cost is not prohibitive for online implementation.
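In generic notation, the problem being solved is the infinite-horizon constrained regulator (a sketch, not verbatim from the paper):

```latex
% Infinite-horizon constrained LQR from initial state x (generic notation)
\min_{u_0, u_1, \dots} \; \sum_{k=0}^{\infty} \bigl( x_k^\top Q x_k + u_k^\top R u_k \bigr)
\quad \text{s.t.} \quad x_{k+1} = A x_k + B u_k, \;\; x_0 = x, \;\;
 u_k \in \mathbb{U}, \;\; x_k \in \mathbb{X}
```

The Sznaier-Damborg observation exploited here is, roughly, that for each feasible initial state a sufficiently long finite horizon with the unconstrained LQR cost-to-go as terminal penalty reproduces the infinite-horizon solution exactly, which is why a finite number of quadratic programs suffices.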


IEEE Transactions on Automatic Control | 2011

A Lyapunov Function for Economic Optimizing Model Predictive Control

Moritz Diehl; Rishi Amrit; James B. Rawlings

Standard model predictive control (MPC) yields an asymptotically stable steady-state solution using the following procedure. Given a dynamic model, a steady state of interest is selected, a stage cost is defined that measures deviation from this selected steady state, the controller cost function is a summation of this stage cost over a time horizon, and the optimal cost is shown to be a Lyapunov function for the closed-loop system. In this technical note, the stage cost is an arbitrary economic objective, which may not depend on a steady state, and the optimal cost is not a Lyapunov function for the closed-loop system. For a class of nonlinear systems and economic stage costs, this technical note constructs a suitable Lyapunov function, and the optimal steady-state solution of the economic stage cost is an asymptotically stable solution of the closed-loop system under economic MPC. Both finite and infinite horizons are treated. The class of nonlinear systems is defined by satisfaction of a strong duality property of the steady-state problem. This class includes linear systems with convex stage costs, generalizing previous stability results and providing a Lyapunov function for economic MPC or MPC with an unreachable setpoint and a linear model. A nonlinear chemical reactor example is provided illustrating these points.
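A hedged sketch of the construction (generic notation, as I read the abstract rather than a verbatim statement): with (x_s, u_s) the optimal steady state and lambda a multiplier from the steady-state problem, define the rotated stage cost

```latex
% Rotated stage cost used to build the Lyapunov function (sketch, generic notation)
L(x, u) = \ell(x, u) + \lambda^\top \bigl( x - f(x, u) \bigr) - \ell(x_s, u_s)
```

Strong duality of the steady-state problem makes L nonnegative and zero at (x_s, u_s); along a predicted trajectory the lambda terms telescope, so under a terminal condition that returns the state to x_s the rotated and original finite-horizon costs differ only by terms independent of the decision variables, and the rotated optimal cost can then serve as the Lyapunov function.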

Collaboration


Dive into James B. Rawlings's collaborations.

Top Co-Authors

Stephen J. Wright (University of Wisconsin-Madison)
Eric L. Haseltine (University of Wisconsin-Madison)
Aswin N. Venkat (University of Wisconsin-Madison)
John W. Eaton (University of Texas at Austin)
Edward S. Meadows (University of Texas at Austin)
Michael J. Risbeck (University of Wisconsin-Madison)
Christos T. Maravelias (University of Wisconsin-Madison)
Daniel B. Patience (University of Wisconsin-Madison)