Publication


Featured research published by David Q. Mayne.


Automatica | 2000

Constrained model predictive control: Stability and optimality (survey)

David Q. Mayne; James B. Rawlings; Christopher V. Rao; Pierre O. M. Scokaert

Model predictive control is a form of control in which the current control action is obtained by solving, at each sampling instant, a finite horizon open-loop optimal control problem, using the current state of the plant as the initial state; the optimization yields an optimal control sequence and the first control in this sequence is applied to the plant. An important advantage of this type of control is its ability to cope with hard constraints on controls and states. It has, therefore, been widely applied in petro-chemical and related industries where satisfaction of constraints is particularly important because efficiency demands operating points on or close to the boundary of the set of admissible states and controls. In this review, we focus on model predictive control of constrained systems, both linear and nonlinear and discuss only briefly model predictive control of unconstrained nonlinear and/or time-varying systems. We concentrate our attention on research dealing with stability and optimality; in these areas the subject has developed, in our opinion, to a stage where it has achieved sufficient maturity to warrant the active interest of researchers in nonlinear control. We distill from an extensive literature essential principles that ensure stability and use these to present a concise characterization of most of the model predictive controllers that have been proposed in the literature. In some cases the finite horizon optimal control problem solved on-line is exactly equivalent to the same problem with an infinite horizon; in other cases it is equivalent to a modified infinite horizon optimal control problem. In both situations, known advantages of infinite horizon optimal control accrue.
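The receding-horizon principle described in this abstract (solve a finite-horizon optimal control problem from the current state, apply only the first control, then repeat) can be sketched in a few lines. The following is a minimal unconstrained linear-quadratic illustration; the double-integrator plant, weights, and horizon are illustrative assumptions, and the hard-constraint handling that is the survey's focus is omitted for brevity.

```python
import numpy as np

def finite_horizon_lq(A, B, Q, R, P, N):
    """Backward Riccati recursion for the N-step LQ problem;
    returns the time-varying feedback gains in forward time order."""
    gains = []
    Pk = P
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ Pk @ B, B.T @ Pk @ A)
        Pk = Q + A.T @ Pk @ A - A.T @ Pk @ B @ K
        gains.append(K)
    return gains[::-1]

# Illustrative double-integrator plant (not from the paper).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); P = np.eye(2)

x = np.array([[5.0], [0.0]])
for _ in range(30):                                 # receding-horizon loop
    K = finite_horizon_lq(A, B, Q, R, P, N=10)[0]   # first-stage gain only
    u = -K @ x                                      # apply the first control ...
    x = A @ x + B @ u                               # ... then re-solve next step
print(np.linalg.norm(x))  # state driven toward the origin
```

Applying only the first control and re-solving at the next sampling instant is exactly the open-loop-to-feedback conversion the abstract describes.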


Automatica | 2014

Model predictive control

David Q. Mayne

This paper recalls a few past achievements in Model Predictive Control, gives an overview of some current developments and suggests a few avenues for future research.


IEEE Transactions on Automatic Control | 1993

Robust receding horizon control of constrained nonlinear systems

Hannah Michalska; David Q. Mayne

We present a method for the construction of a robust dual-mode, receding horizon controller which can be employed for a wide class of nonlinear systems with state and control constraints and model error. In a neighborhood of the origin, the control action is generated by a linear feedback controller designed for the linearized system. Outside this neighborhood, receding horizon control is employed. Existing receding horizon controllers for nonlinear, continuous time systems, which are guaranteed to stabilize the nonlinear system to which they are applied, require the exact solution, at every instant, of an optimal control problem with terminal equality constraints. These requirements are considerably relaxed in the dual-mode receding horizon controller presented in this paper. Stability is achieved by imposing a terminal inequality, rather than an equality, constraint. Only approximate minimization is required. A variable time horizon is permitted. Robustness is achieved by employing conservative state and stability constraint sets, thereby permitting a margin of error. The resultant dual-mode controller requires considerably less online computation than existing receding horizon controllers for nonlinear, constrained systems.


IEEE Transactions on Automatic Control | 1990

Receding horizon control of nonlinear systems

David Q. Mayne; Hannah Michalska

The receding horizon control strategy provides a relatively simple method for determining feedback control for linear or nonlinear systems. The method is especially useful for the control of slow nonlinear systems, such as chemical batch processes, where it is possible to solve, sequentially, open-loop fixed-horizon, optimal control problems online. The method has been shown to yield a stable closed-loop system when applied to time-invariant or time-varying linear systems. It is shown that the method also yields a stable closed-loop system when applied to nonlinear systems.


IEEE Transactions on Automatic Control | 1998

Min-max feedback model predictive control for constrained linear systems

Pierre O. M. Scokaert; David Q. Mayne

Min-max feedback formulations of model predictive control are discussed, both in the fixed and variable horizon contexts. The control schemes the authors discuss introduce, in the control optimization, the notion that feedback is present in the receding-horizon implementation of the control. This leads to improved performance, compared to standard model predictive control, and resolves the feasibility difficulties that arise with the min-max techniques that are documented in the literature. The stabilizing properties of the methods are discussed as well as some practical implementation details.


IEEE Transactions on Automatic Control | 1988

Design issues in adaptive control

Richard H. Middleton; Graham C. Goodwin; David J. Hill; David Q. Mayne

An integrated approach to the design of practical adaptive control algorithms is presented. Many existing ideas are brought together, and the effect of various design parameters available to a user is explored. The theory is extended by showing how the problem of stabilizability of the estimated model can be overcome by running parallel estimators. It is shown how asymptotic tracking of deterministic set points can be achieved in the presence of unmodeled dynamics.


IEEE Transactions on Automatic Control | 1999

Suboptimal model predictive control (feasibility implies stability)

Pierre O. M. Scokaert; David Q. Mayne; James B. Rawlings

Practical difficulties involved in implementing stabilizing model predictive control laws for nonlinear systems are well known. Stabilizing formulations of the method normally rely on the assumption that global and exact solutions of nonconvex, nonlinear optimization problems are possible in limited computational time. In the paper, we first establish conditions under which suboptimal model predictive control (MPC) controllers are stabilizing; the conditions are mild, holding out the hope that many existing controllers remain stabilizing even if optimality is lost. Second, we present and analyze two suboptimal MPC schemes that are guaranteed to be stabilizing, provided an initial feasible solution is available and for which the computational requirements are more reasonable.


IEEE Transactions on Automatic Control | 2005

Invariant approximations of the minimal robust positively invariant set

Sasa V. Rakovic; Eric C. Kerrigan; Konstantinos I. Kouramas; David Q. Mayne

This note provides results on approximating the minimal robust positively invariant (mRPI) set (also known as the 0-reachable set) of an asymptotically stable discrete-time linear time-invariant system. It is assumed that the disturbance is bounded, persistent and acts additively on the state and that the constraints on the disturbance are polyhedral. Results are given that allow for the computation of a robust positively invariant, outer approximation of the mRPI set. Conditions are also given that allow one to a priori specify the accuracy of this approximation.
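The approximation idea can be illustrated, under simplifying assumptions, on a scalar system: for x+ = a*x + w with |w| <= w_max, the mRPI set is the interval [-w_max/(1-|a|), w_max/(1-|a|)], and scaling the s-step reachable-set bound by 1/(1-alpha), where s is chosen so that |a|^s <= alpha, gives a robust positively invariant outer approximation whose accuracy is fixed a priori by alpha. The scalar setting and the numbers below are illustrative; the note itself treats polyhedral disturbance sets.

```python
# Scalar sketch of the outer approximation of the mRPI set:
# x+ = a*x + w with |w| <= w_max; the true mRPI set is the interval
# [-w_max/(1-|a|), +w_max/(1-|a|)].
a, w_max, alpha = 0.8, 1.0, 1e-3

# Smallest s with |a|^s <= alpha.
s = 0
while abs(a) ** s > alpha:
    s += 1

# Bound on the s-step 0-reachable set, then scale by 1/(1-alpha)
# to obtain a robust positively invariant outer approximation.
partial = sum(abs(a) ** i for i in range(s)) * w_max
outer = partial / (1.0 - alpha)

true_bound = w_max / (1.0 - abs(a))
print(outer, true_bound)  # outer exceeds the truth by at most a factor 1/(1-alpha)
```

Shrinking alpha tightens the approximation at the cost of a larger s, which mirrors the accuracy/computation trade-off discussed in the note.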


Automatica | 1987

A parameter estimation perspective of continuous time model reference adaptive control

Graham C. Goodwin; David Q. Mayne

The problem of adaptive control of continuous time deterministic dynamic systems is re-examined. It is shown that the convergence proofs for these algorithms may be decomposed into “modules” dealing with estimation and control, yielding a “key technical lemma” analogous to that used successfully in the study of discrete time systems. The extra freedom provided by the modular structure is used to formulate existing algorithms in a common framework and to derive several new algorithms. It is also shown how least squares, as opposed to gradient, estimation can be used in continuous time adaptive control.


IEEE Transactions on Automatic Control | 1995

Moving horizon observers and observer-based control

Hannah Michalska; David Q. Mayne

In this paper two topics are explored. A new approach to the problem of obtaining an estimate of the state of a nonlinear system is proposed. The moving horizon observer produces an estimate of the state of the nonlinear system at time t either by minimizing, or approximately minimizing, a cost function over the preceding interval (horizon) [t-T,t]; as t advances, so does the horizon. Convergence of the estimator is established under the assumption that the corresponding global optimization problem can be (approximately) solved and a uniform reconstructability condition is satisfied; the latter condition is automatically satisfied for linear observable systems. The utility of the estimator for receding horizon control is explored. In particular, stability of a composite moving horizon system, comprising a moving horizon regulator and a moving horizon observer, is established.
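A minimal sketch of the moving-horizon idea for the special case of a linear system with exact dynamics, where the window problem reduces to linear least squares over the last T measurements. The double-integrator system, horizon length, and noise level are illustrative assumptions; the paper treats general nonlinear systems and approximate minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative observable system (not from the paper): noisy position
# measurements of a discrete double integrator.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
T = 10  # horizon length

def mhe_estimate(ys):
    """Estimate the current state from the last T measurements by least
    squares over the window (y_i ~ C A^i x0 for the window-start state
    x0), then propagate x0 forward to the current time."""
    H = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(T)])
    x0, *_ = np.linalg.lstsq(H, np.array(ys[-T:]), rcond=None)
    return np.linalg.matrix_power(A, T - 1) @ x0

# Simulate the system and collect noisy measurements.
x = np.array([2.0, -1.0])
ys, xs = [], []
for _ in range(40):
    ys.append((C @ x)[0] + 0.01 * rng.standard_normal())
    xs.append(x)
    x = A @ x

est = mhe_estimate(ys)  # estimate of the state at the last measurement
print(est)
```

As time advances, re-calling `mhe_estimate` on the updated measurement list slides the window forward, which is the "as t advances, so does the horizon" mechanism in the abstract.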

Collaboration


Explore David Q. Mayne's collaborations.

Top Co-Authors

E. Polak
University of California

Sasa V. Rakovic
Otto-von-Guericke University Magdeburg

James B. Rawlings
University of Wisconsin-Madison

Paola Falugi
Imperial College London