Control of discrete-time nonlinear systems via finite-step control Lyapunov functions
Navid Noroozi, Roman Geiselhart, Lars Grüne, Fabian R. Wirth
Navid Noroozi a, Roman Geiselhart b, Lars Grüne c, Fabian R. Wirth d

a Otto-von-Guericke University of Magdeburg, Laboratory for Systems Theory and Automatic Control, 39106 Magdeburg, Germany
b University of Ulm, Institute of Measurement, Control and Microtechnology, Albert-Einstein-Allee 41, 89081 Ulm, Germany
c University of Bayreuth, Mathematical Institute, Universitätsstraße 30, 95440 Bayreuth, Germany
d University of Passau, Faculty of Computer Science and Mathematics, Innstraße 33, 94032 Passau, Germany
Abstract
In this work, we establish different control design approaches for discrete-time systems which build upon the notion of finite-step control Lyapunov functions (fs-CLFs). The design approaches are formulated as optimization problems and solved in a model predictive control (MPC) fashion. In particular, we establish contractive multi-step MPC with and without reoptimization and compare it to classic MPC. The idea behind these approaches is to use the fs-CLF as running cost. These new design approaches are particularly relevant in situations where information exchange between plant and controller cannot be ensured at all time instants. An example shows the different behavior of the proposed controller design approaches.
Keywords:
Lyapunov methods, model predictive control, discrete-time systems
1. Introduction
Lyapunov functions are a central tool in the context of nonlinear control theory, as they do not only serve as certificates of stability and simplify stability proofs, but also provide means to quantify robustness or to redesign the controller to improve robustness of the feedback connection [1]. This has the drawback that systematic methods for obtaining Lyapunov functions for general nonlinear systems still do not exist. In particular, standard Lyapunov function candidates, including quadratic, weighted supremum norm and weighted 1-norm functions, do not necessarily decay at each time step. In contrast to classic Lyapunov functions, so-called finite-step
Lyapunov functions are energy functions which do not have to decay at each time step, but only after a fixed and finite number of steps. This relaxation has led to significant contributions in the context of stability analysis of (large-scale) nonlinear systems [2, 3, 4, 5, 6]. In particular, it has been shown that any proper scaling of a p-norm function is a finite-step Lyapunov function for a large class of asymptotically stable nonlinear systems [2]. (Here we consider discrete-time systems; a similar conclusion also holds for continuous-time systems.) Such converse Lyapunov theorems are constructive for control purposes in the sense that they provide an explicit way of constructing a Lyapunov function for control systems. This motivates the use of such results for controller design in nonlinear control systems. (The work of N. Noroozi was supported by the Alexander von Humboldt Foundation.) In this paper, we generalize the notion of finite-step Lyapunov functions to control systems by introducing the notion of finite-step control
Lyapunov functions (fs-CLFs). Given a fs-CLF, we reformulate the fs-CLF-based control design as an optimization problem. In particular, we link the fs-CLF-based control design to model predictive control (MPC) approaches. By considering three different optimization setups for the fs-CLF-based design, we come up with three fs-CLF-based MPC approaches: a) contractive multi-step MPC; b) contractive updated multi-step MPC; and c) classic (i.e. one-step) MPC. In c), we focus on MPC without terminal constraints and/or costs; see, e.g., [7, Section 7.4] and the references therein for a thorough discussion on MPC with or without additional stabilizing terminal ingredients. In a) and b), the optimization problem includes a contractive condition guaranteeing a decay rate after a finite number of time steps. In all these schemes the running cost in the respective optimization problem is taken as the fs-CLF. The a priori knowledge of such a fs-CLF is guaranteed by the converse Lyapunov theorem stated as Theorem 9 below.

Classic MPC approaches are based on the following philosophy: at each time step we measure the current state value of the system, optimize a cost over the control input using model-based predictions of the system response over a fixed optimization horizon, implement the first component of the computed control sequence, and repeat these steps ad infinitum [7]. However, in practice, the controller and the plant may not communicate with each other at each time step. In networked control systems applications, multiple (physically decoupled) plants often need to share a communication channel for exchanging information with their corresponding remotely located controllers; see Figure 1.

Preprint submitted to Elsevier, August 27, 2019.

Figure 1: The implementation of M independent control loops over a single communication channel N.

Therefore only a few plants can exchange information with their controllers at any instant of time, and the remaining plants operate in open loop until they are granted access to the communication channel. The allocation (also known as scheduling) of communication resources is frequently performed in a periodic fashion. In such a situation, we need to develop a control setting in which the controller (if possible) sends not only one component, but a control sequence of length equal to the periodicity of the allocation process at each transmission instant; see [8, 9] for such networked control system configurations. Such a scenario motivates our contractive multi-step MPC, where the MPC does not communicate with the plant at each time step, but only after a fixed number of time steps. The whole optimal control sequence is sent to the plant to compensate for the lack of access to the network. To guarantee the stability of the resulting system, we include a contractive constraint, which is obtained from the corresponding fs-CLF, into the optimization problem as an inequality constraint.

Another problem with the classic implementation of MPC is that the execution of the optimization problem at each time step may result in high computational cost (here we assume the controller and the plant can communicate at each time step). To keep the computational cost low, inspired by [10, 11], the second control scheme proposes an updating approach based on re-optimizations on shrinking horizons, which are computationally less expensive than re-optimizations on the full horizon in classic MPC schemes.
Similar to the first scheme, a contractive constraint is also used in the optimization problem to ensure the stability of the overall system. Finally, the third control scheme proposes a classic MPC approach in which the optimization problem is solved over a fixed optimization horizon at each time step; hence only the first component of the computed control sequence is applied to the plant. Moreover, the contraction condition is not considered as an additional inequality constraint in the optimization problem. The absence of the contractive constraint reduces the computational complexity, though considering a fixed optimization horizon will increase the computational burden.

The MPC schemes we are proposing have similarities with MPC schemes known from the literature. Particularly, the scheme from Algorithm 11, in which the whole open-loop optimal control sequence is used, is an instance of a contractive MPC scheme, as investigated, e.g., in [12, 13, 14, 15]. The contractivity constraint in Problem 10 can be seen as a nonlinear version of the respective condition in [13, 14, 15]. Actually, under suitable conditions the explicit use of a contractive constraint can be replaced by a term in the cost functional with sufficiently high weight, see [16, Theorem 3.18] or [12]. The paper [12] already mentions the possibility to use a fs-CLF in contractive MPC. Contractivity assumptions have also been used in MPC schemes with additional terminal constraints, see [17, 18]. The updating technique with shrinking horizon in Algorithm 15 was inspired by [10, 11], where a theoretical robustness analysis of this method is performed.
Finally, the MPC Algorithm 19 without terminal constraints is classical, and the particular stability analysis in Theorem 22 uses techniques from [19, 20] (see also [7, Section 6]), which in turn can be seen as a refinement of earlier, similar approaches in [21, 22, 23].

Throughout this paper we consider a stabilization problem with respect to a closed (not necessarily compact) set. This treatment enables us to formulate several stabilization problems in a unified manner.

This paper is organized as follows: First, relevant notation is recalled in Section 2. Then the notion of fs-CLFs together with some other relevant notions are introduced in Section 3. The fs-CLF-based MPC schemes are developed in Section 4. Section 5 concludes the paper.
2. Notation
In this paper, R_{≥0} (R_{>0}) and N (N*) denote the nonnegative (positive) real numbers and the nonnegative (positive) integers, respectively. For a set S ⊆ R^n, int(S) and co(S), respectively, denote the interior and the convex hull of S. Given S ⊆ R^n, S^ℓ := S × ··· × S (ℓ times) is the ℓ-fold Cartesian product. The i-th component of v ∈ R^n is denoted by v_i. For any x ∈ R^n, x^⊤ denotes its transpose. We write (x, y) to represent [x^⊤, y^⊤]^⊤ for x ∈ R^n, y ∈ R^p. For x ∈ R^n, we denote the Euclidean norm by |x| and the maximum norm by |x|_∞. Given a nonempty set A ⊂ R^n and any point x ∈ R^n, we denote |x|_A := inf_{y ∈ A} |x − y|. A function ρ: R_{≥0} → R_{≥0} is positive definite if it is continuous, zero at zero and positive otherwise. A positive definite function α is of class-K (α ∈ K) if it is zero at zero and strictly increasing. It is of class-K∞ (α ∈ K∞) if α ∈ K and also α(s) → ∞ as s → ∞. A continuous function β: R_{≥0} × R_{≥0} → R_{≥0} is of class-KL (β ∈ KL) if for each s ≥ 0, β(·, s) ∈ K, and for each r ≥ 0, β(r, ·) is decreasing with β(r, s) → 0 as s → ∞. The interested reader is referred to [24] for more details about comparison functions. The identity function is denoted by id. Composition of functions is denoted by the symbol ◦, and repeated composition of, e.g., a function γ by γ^i. For positive definite functions α, γ we write α < γ if α(s) < γ(s) for all s > 0.
3. Preliminaries
We first introduce the notions of admissible finite-step feedback control laws and fs-CLFs. We then show that our definition implies that an admissible finite-step feedback control law generated by a fs-CLF stabilizes the system of interest. The idea of how to construct a fs-CLF is given by a converse Lyapunov theorem.
Consider the discrete-time system

x(t + 1) = g(x(t), u(t)), t ∈ N, (1)

with state x ∈ X ⊆ R^n and control input u ∈ U ⊆ R^m. We assume g: X × U → R^n is continuous. Moreover, we assume that g is K-bounded on (X, U) as defined below.

Definition 1.
A continuous and nonnegative function ω: R^n → R_{≥0} is called a measurement function if the preimage of 0 satisfies ω^{−1}(0) ≠ ∅.

Definition 2.
Consider system (1). Given measurement functions ω_1: R^n → R_{≥0} and ω_2: R^m → R_{≥0}, we call g K-bounded on (X, U) with respect to (ω_1, ω_2) if there exist κ_i ∈ K, i = 1, 2, such that

ω_1(g(ξ, µ)) ≤ κ_1(ω_1(ξ)) + κ_2(ω_2(µ))

for all ξ ∈ X and all µ ∈ U.

The concept of K-boundedness was introduced in [2] for the case ω_i(·) = |·|. Extensions to K-boundedness with respect to one (resp. two) measurement functions are given in [6] (resp. [25]). Here we extend this concept to the constraint sets X and U. Frequently, ω will be taken as a norm. Note that in the classic case ω_i(·) = |·|, K-boundedness is equivalent to continuity of g at the origin and boundedness of g on bounded sets, see [25, Lemma 5]. Thus, any closed-loop system consisting of a continuous plant controlled by an optimization-based or quantized controller is K-bounded. We note that K-boundedness is a necessary condition for input-to-state stability, see [5, Remark 3.3].

Let u = (u(0), u(1), ...) denote a possibly infinite control sequence for system (1), where u(i) ∈ U for all i = 0, 1, .... If we only study trajectories of (1) over a finite horizon, we may restrict ourselves to finite control sequences denoted by u_k := (u(0), ..., u(k − 1)) ∈ U^k. Given a control sequence u and an initial value ξ ∈ X, the corresponding solution to (1) is denoted by x(·, ξ, u(·)); also the notation x(·, ξ, u) or x(·) will be used.

We require some notation to state the definitions below. Let M ∈ N* be fixed. For ξ ∈ X and u_M = (u(0), ..., u(M − 1)) ∈ U^M we define g^1(ξ, u_M) := g(ξ, u(0)) and inductively, for j = 1, ..., M − 1,

g^{j+1}(ξ, u_M) := g(g^j(ξ, u_M), u(j)).

We note that strictly speaking g^j is only a function of ξ and (u(0), ..., u(j − 1)). Now consider a map q: X → U^M. We wish to interpret q as a feedback evaluated every M steps. Given an initial condition x(0) = ξ ∈ X, the feedback q determines a closed-loop trajectory x_q of (1) as follows.
For j = 0, ..., M − 1,

x_q(j + 1) = g^{j+1}(x(0), q(x(0))),

and at time M we evaluate the feedback again and repeat the process. We obtain inductively, for k ∈ N and j = 0, ..., M − 1,

x_q(kM + j + 1) = g^{j+1}(x_q(kM), q(x_q(kM))).

In the sequel we use the notation u_q ∈ U^N to denote the sequence of control inputs generated by the repeated application of the feedback q, and we denote interchangeably

x(·, ξ, u_q) = x_q(·). (2)

Definition 3.
Let a measurement function ω: R^n → R_{≥0} and some M ∈ N* be given. A map q: X → U^M is called an admissible finite-step feedback (of length M) for system (1) if for all ξ ∈ X and all j = 1, ..., M, the following properties hold:

(i) g^j(ξ, q(ξ)) ∈ X;
(ii) ξ ↦ g^j(ξ, q(ξ)) is K-bounded on X with respect to ω, i.e., there exist κ_j ∈ K such that

ω(g^j(ξ, q(ξ))) ≤ κ_j(ω(ξ)) for all ξ ∈ X. (3)

Condition (i) of Definition 3 justifies the terminology admissible, as it ensures that trajectories of the closed-loop system obtained by applying the map q to (1) stay in X. In addition, condition (ii) ensures that along trajectories the measure ω remains bounded on bounded time intervals.

Definition 4.
Consider system (1) and let a measurement function ω: R^n → R_{≥0} and some M ∈ N* be fixed. Consider an admissible finite-step feedback q: X → U^M for system (1). We say that q asymptotically ω-stabilizes the set A := ω^{−1}(0) if there exists β ∈ KL such that for all ξ ∈ X and all t ∈ N we have

ω(x(t, ξ, u_q)) ≤ β(ω(ξ), t). (4)

In this case, the resulting closed-loop system

x_q(t + 1) = g(x_q(t), u_q(t)), t ∈ N, (5)

is asymptotically ω-stable in A. If the function β in (4) can be taken as

β(r, s) = Cσ^s r, (6)

with C ≥ 1 and σ ∈ [0, 1), then we call q exponentially ω-stabilizing.

Note that standard asymptotic stability of the origin is obtained by taking the measurement function ω(·) = |·|.

Remark 5.
We note that while the definition of the concept of ω-stabilization looks familiar, some care has to be applied in its interpretation. As the notion of a measurement function is quite general, and as we do not assume continuity of the closed-loop system, several surprising effects can appear. In particular, in the generality of Definition 4 the following situations cannot be ruled out:

(i) A is compact and all trajectories not starting in A diverge to ∞ or to the boundary of X. This requires discontinuity of q.
(ii) The feedback q is continuous, A is unbounded, and for certain trajectories dist_A(x_q(·, x)) is strictly increasing.
(iii) Given ε > 0, there is no δ > 0 such that dist_A(x) < δ implies dist_A(x_q(t, x)) < ε for all t ≥ 0.

Examples for these effects are easy to construct and we leave the details to the reader. There are easy additional assumptions that remove these peculiarities. For instance, one could assume that there is α ∈ K such that α(dist_A(x)) ≤ ω(x) for all x ∈ X. This assumption already rules out (i) and (ii).

Now we introduce finite-step control Lyapunov functions, which is the key concept used for the control design in the next section.
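Before stating the formal definition, the closed-loop recursion x_q(kM + j + 1) = g^{j+1}(x_q(kM), q(x_q(kM))) from above can be sketched in code. The scalar system and the dead-beat two-step feedback below are toy placeholders of our own, not from the paper; the point is only the operational pattern of evaluating the feedback every M steps and applying the whole length-M sequence open-loop.

```python
def simulate_finite_step(g, q, xi, M, K):
    """Simulate K blocks of the recursion
    x_q(kM + j + 1) = g^{j+1}(x_q(kM), q(x_q(kM))):
    the feedback q is evaluated only every M steps, and the whole
    length-M input sequence it returns is applied open-loop."""
    traj = [xi]
    x = xi
    for _ in range(K):
        u_M = q(x)               # one feedback evaluation per M steps
        for j in range(M):
            x = g(x, u_M[j])     # iterate g along the stored sequence
            traj.append(x)
    return traj

# Toy example (not the paper's): scalar x+ = 2x + u with the two-step
# sequence u = (-x, -2x); the state does not decrease at the first step
# but reaches 0 after M = 2 steps.
g = lambda x, u: 2.0 * x + u
q = lambda x: (-x, -2.0 * x)
traj = simulate_finite_step(g, q, 1.0, M=2, K=3)
```

Note that along this trajectory the state is unchanged after one step and zero after two, which is exactly the "decay only after M steps" behavior the finite-step notions formalize.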
Definition 6.
Let α_1, α_2 ∈ K∞, M ∈ N*, and let ω: R^n → R_{≥0} be a measurement function. Consider a continuous function V: R^n → R_{≥0} satisfying, for all ξ ∈ R^n,

α_1(ω(ξ)) ≤ V(ξ) ≤ α_2(ω(ξ)). (7)

The function V is called a finite-step control Lyapunov function (fs-CLF) (for the time step M) for system (1) if there exist an admissible finite-step feedback q: X → U^M for (1) and a function α ∈ K∞, α < id, such that for all ξ ∈ X,

V(x(M, ξ, u_q)) ≤ α(V(ξ)). (8)

Remark 7.
If the conditions in Definition 6 are satisfied with M = 1, we call V a control Lyapunov function (CLF). We note that this definition of a CLF differs from the usual definition in the literature by the assumption that the Lyapunov function comes together with an admissible feedback. This is equivalent to the fact that the control value u realizing the decrease of the Lyapunov function satisfies the constraint g(x, u) ∈ X, because once such a control value exists, the existence of a (possibly discontinuous) admissible feedback is immediate. In this sense, Definition 6 extends the definition of a CLF.

In the case M = 1, the understanding of a CLF is the following: the existence of a CLF ensures the existence of an admissible feedback control law for which the resulting CLF is a Lyapunov function, implying asymptotic ω-stability of system (5). Definition 6 now demands that the same is true for M > 1, and a similar reasoning applies: the existence of a fs-CLF ensures the existence of an admissible finite-step feedback for which the resulting fs-CLF is a finite-step Lyapunov function, again implying asymptotic ω-stability, see [2].

Similarly as for (1-step) CLFs, the existence of a fs-CLF yields asymptotic stability, as shown next.
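A minimal numerical illustration of why decay after M steps is a genuine relaxation of one-step decay (a toy linear system of our own, not from the paper): for the matrix below, V(x) = |x| can grow at the first step, yet contracts by a factor 4 after M = 2 steps, so V is a finite-step Lyapunov function with M = 2 and α(r) = r/4 although it is not a (1-step) Lyapunov function.

```python
import numpy as np

# Toy linear system x+ = A x (no input). A is not a contraction,
# but A @ A = 0.25 * I, so |x| shrinks by a factor 4 every two steps.
A = np.array([[0.0, 2.0],
              [0.125, 0.0]])

x = np.array([0.0, 1.0])
v0 = np.linalg.norm(x)            # V(x(0)) = 1
v1 = np.linalg.norm(A @ x)        # V(x(1)) = 2: one-step increase
v2 = np.linalg.norm(A @ A @ x)    # V(x(2)) = 0.25: two-step decrease
```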
Proposition 8.
Let V: R^n → R_{≥0} be a fs-CLF for the measurement function ω: R^n → R_{≥0}. Let q be the admissible finite-step feedback associated to V. Then q asymptotically ω-stabilizes the level set A for system (1).

Proof. The invariance of the set X, i.e. g^i(x, q(x)) ∈ X for all x ∈ X and all i = 1, ..., M, is ensured by the admissible finite-step feedback q by definition. With this observation, the asymptotic ω-stability of system (5) follows directly from [6, Theorem 7].

As stated in Proposition 8, system (1) is asymptotically ω-stabilized in A if a fs-CLF and its associated admissible finite-step feedback q are given. Generally speaking, it is an open problem to find a (finite-step) (control) Lyapunov function candidate V. Most existing converse Lyapunov theorems for nonlinear systems are not constructive in the sense that the results are not usually useful for control purposes. Recently, constructive converse Lyapunov theorems have been introduced in the case of asymptotic stability with respect to the origin in [2, Theorem 13]. Here we extend Theorem 13 in [2] to the case of asymptotic stability with respect to closed sets. Our results show that, under a certain condition, the measurement function itself is a finite-step control Lyapunov function for the system.

Theorem 9.
Consider system (1) with measurement function ω: R^n → R_{≥0}, M ∈ N* and an admissible finite-step feedback q: X → U^M. Assume that the resulting closed-loop system (5) satisfies (4) with

β(r, M) < r (9)

for all r > 0. Then the function V: R^n → R_{≥0} defined by

V(ξ) := ω(ξ) for all ξ ∈ R^n, (10)

is a finite-step control Lyapunov function for the time step M for (1) with α_1 = α_2 = id and α(r) = β(r, M).

Proof. This is proved using the same arguments as those in the proof of [2, Theorem 13].

Theorem 9 states that, under condition (9), a measurement function is a fs-CLF. It is not hard to see that condition (9) always holds for exponentially stable systems. Moreover, there exist systems which are not exponentially stable, but only asymptotically stable, and satisfy condition (9) (cf. [2, Example 16] for more details). Theorem 9 can, therefore, be used for controller design: assume that system (1) is asymptotically ω-stabilized by a feedback q in A. Motivated by Theorem 9, one can take ω as the fs-CLF. In particular, if system (1) is exponentially ω-stabilizable in A, then ω is always a fs-CLF for the system and only M needs to be determined.
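In the exponential case (6), condition (9) reads C σ^M < 1, so a valid time step M always exists and can be computed directly from the constants C and σ. A short sketch (the numerical values of C and σ are illustrative, not from the paper):

```python
import math

def smallest_M(C, sigma):
    """Smallest M in N* with C * sigma**M < 1, i.e. beta(r, M) < r
    for the exponential bound beta(r, t) = C * sigma**t * r of (6)."""
    assert C >= 1 and 0 <= sigma < 1
    if C * sigma < 1:
        return 1
    # C * sigma**M < 1  <=>  M > log(C) / log(1/sigma)
    return math.floor(math.log(C) / math.log(1.0 / sigma)) + 1

M = smallest_M(C=10.0, sigma=0.5)   # 10 * 0.5**M < 1 first holds at M = 4
```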
4. fs-CLF-Based MPC Approaches
This section elaborates how to construct stabilizing feedback laws via fs-CLFs. In particular, we reformulate the control problem into an optimization problem which can be solved efficiently.
To derive an optimization-based controller design, we pose the following problem.
Problem 10.
Consider system (1). Let ω be a measurement function. Let M ∈ N* and a fs-CLF V for the time step M with the associated decay function α ∈ K∞, α < id, be given. Also, let x(0) =: ξ ∈ X be given. Compute u*_M = (u*_0, ..., u*_{M−1}) ∈ U^M as an optimal solution of the following optimal control problem:

min_{u_M = (u_0, ..., u_{M−1})} Σ_{i=0}^{M−1} V(x(i, ξ, u_M))
s.t. for all j ∈ {0, ..., M − 1}:
  x(j + 1) = g(x(j), u_j),
  u_j ∈ U,
  g(x(j), u_j) ∈ X,
  V(x(M, ξ, u_M)) ≤ α(V(ξ)).    (OCP-1)

We note that under our general assumptions an optimal input u*_M need not exist for OCP-1. A minimal requirement is controlled invariance of X, which we tacitly assume from now on. Even then the existence of u*_M is not guaranteed. In the sequel, we will assume this existence for the sake of simplicity. Otherwise similar arguments can be applied using approximately optimal inputs. A similar comment holds for the optimal control problems we formulate below.

Note that M in OCP-1 also determines the optimization horizon of the problem. Here we make use of the optimal control sequence obtained from OCP-1 as an admissible finite-step feedback. This implies that the controller communicates with the sensor every M time steps and generates an optimal control sequence of length M by solving OCP-1. Then the whole optimal control sequence is applied to the system and the procedure is repeated. This procedure is summarized by the following algorithm.

Algorithm 11.
At each time step t = kM, k ∈ N:

1. Measure the state x(t) ∈ X of system (1).
2. Set ξ := x(t), solve Problem 10 and denote the optimal control sequence satisfying (OCP-1) by u*_M.
3. Define the finite-step feedback control value q̂(ξ) by

q̂(ξ) := u*_M(ξ) (11)

and apply it to system (1) on the time interval kM, ..., (k + 1)M − 1.
4. Go to Step 1.
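The steps above can be prototyped with a generic NLP solver. The sketch below is a minimal illustration, not the paper's implementation: it uses SciPy's SLSQP for OCP-1 on a placeholder linear system with V(x) = |x|² and α(r) = 0.5 r (all dynamics and numbers are our own assumptions), and applies the whole optimal sequence between re-optimizations as in Algorithm 11.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (not the paper's example): x+ = A x + B u, V(x) = |x|^2.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
M, alpha = 3, 0.5          # finite step M and decay V(x(M)) <= alpha * V(x(0))

def rollout(xi, u_seq):
    """States x(0), ..., x(M) under the input sequence u_seq."""
    xs = [xi]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ np.array([u]))
    return xs

def V(x):
    return float(x @ x)

def solve_ocp1(xi):
    """OCP-1: minimize the running cost over i = 0, ..., M-1 subject to
    the contraction V(x(M)) <= alpha * V(x(0)) as an inequality constraint."""
    cost = lambda u: sum(V(x) for x in rollout(xi, u)[:M])
    contract = {"type": "ineq",
                "fun": lambda u: alpha * V(xi) - V(rollout(xi, u)[-1])}
    res = minimize(cost, np.zeros(M), method="SLSQP", constraints=[contract])
    return res.x

# Multi-step MPC: re-optimize only every M steps, apply the whole sequence.
x = np.array([1.0, -0.5])
for _ in range(4):                 # 4 blocks of M steps
    u_star = solve_ocp1(x)
    x = rollout(x, u_star)[-1]
```

By construction the contraction constraint forces V to shrink by at least the factor α per block, so after four blocks V(x) is well below its initial value 1.25.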
We note that the map ξ ↦ q̂(ξ) implicitly defined in (11) is an admissible feedback. The following lemma shows that even small perturbations of such a feedback are admissible, which accommodates computational errors that are to be expected in applications.

Lemma 12.
Let ω: R^n → R_{≥0} be a measurement function. Assume V: R^n → R_{≥0} is a fs-CLF with associated admissible finite-step feedback q. Then a feedback h: X → U^M is admissible if it satisfies the constraints of OCP-1 and if in addition

Σ_{i=0}^{M−1} V(x(i, ξ, h(ξ))) ≤ Σ_{i=0}^{M−1} V(x(i, ξ, q(ξ))) for all ξ ∈ X. (12)

Proof.
The requirement of invariance of X is part of the assumption, so that we only need to check K-boundedness of the maps g^j for h. To this end note that for j = 1, ..., M − 1,

ω(g^j(ξ, h(ξ))) ≤ α_1^{−1} ∘ V(g^j(ξ, h(ξ)))
≤ α_1^{−1}( Σ_{i=0}^{M−1} V(g^i(ξ, h(ξ))) )
≤ α_1^{−1}( Σ_{i=0}^{M−1} V(g^i(ξ, q(ξ))) )
≤ α_1^{−1}( Σ_{i=0}^{M−1} α_2 ∘ ω(g^i(ξ, q(ξ))) )
≤ α_1^{−1}( Σ_{i=0}^{M−1} α_2 ∘ κ_i(ω(ξ)) ),

where the κ_i are the functions guaranteed by (3) for the admissible feedback q. Finally, for j = M and all ξ ∈ X we have, using the constraints of OCP-1, that

ω(g^M(ξ, h(ξ))) ≤ α_1^{−1} ∘ V(g^M(ξ, h(ξ))) ≤ α_1^{−1} ∘ α ∘ α_2(ω(ξ)).

This shows the assertion.

Now we show that solving Problem 10 provides an admissible finite-step feedback which renders system (1) asymptotically ω-stable.

Proposition 13. Consider system (1) and let a measurement function ω: R^n → R_{≥0} as well as a fs-CLF V: R^n → R_{≥0} be given. Let q be the admissible finite-step feedback associated with the fs-CLF V. Then the admissible finite-step feedback (11) obtained from Algorithm 11 yields an admissible feedback which asymptotically ω-stabilizes the set A := ω^{−1}(0).

Proof. The feasibility of Problem 10 is guaranteed by the existence of the admissible finite-step feedback q generated by the fs-CLF V and our standing assumption that minimizing arguments exist. It follows from (OCP-1) that for all ξ ∈ X,

Σ_{i=0}^{M−1} V(x(i, ξ, u_{q̂})) ≤ Σ_{i=0}^{M−1} V(x(i, ξ, u_q)), (13)

and by Lemma 12 the feedback defined by (11) is admissible. Take any ξ ∈ X. For any t = kM + j, k ∈ N, j ∈ {0, ..., M − 1}, we have

x(t, ξ, u_{q̂}) = x(j, x(kM, ξ, u_{q̂}), u_{q̂}(· + kM)). (14)

With (13) we obtain

Σ_{i=0}^{M−1} V(x(i, x(kM, ξ, u_{q̂}), u_{q̂})) ≤ Σ_{i=0}^{M−1} V(x(i, x(kM, ξ, u_{q̂}), u_q)). (15)

Moreover, it follows from (3) and (7) that

Σ_{i=0}^{M−1} V(x(i, x(kM, ξ, u_{q̂}), u_q)) ≤ M max{ α_2(ω(x(kM, ξ, u_{q̂}))), max_{i ∈ {1,...,M−1}} α_2 ∘ κ_i(ω(x(kM, ξ, u_{q̂}))) } =: κ(ω(x(kM, ξ, u_{q̂}))). (16)

It follows from (15) and (16) that

Σ_{i=0}^{M−1} V(x(i, x(kM, ξ, u_{q̂}), u_{q̂})) ≤ κ(ω(x(kM, ξ, u_{q̂}))). (17)

It follows from the first inequality of (7) that for all i ∈ {0, ..., M − 1},

ω(x(i, x(kM, ξ, u_{q̂}), u_{q̂})) ≤ α_1^{−1} ∘ κ(ω(x(kM, ξ, u_{q̂}))) =: γ(ω(x(kM, ξ, u_{q̂}))).

For
M > 1, let α^{1/M} := χ ∈ K∞ be a fixed solution of the equation χ^M = α, which exists by [26, Proposition 3.1], though it may not be unique. Then for t ≥ 0, the function α^{t/M} ∈ K∞ is the t-fold composition of χ. As α < id, it follows that α^{1/M} < id, because the condition α^{1/M}(r) ≥ r leads by induction to α^{(t+1)/M}(r) ≥ α^{t/M}(r) ≥ r, t ∈ N. But the latter condition for t = M implies that α(r) ≥ r, whence r = 0.

Now as α^{1/M} < id, it follows for all r > 0 that

t ↦ α^{t/M}(r), t ∈ N,

is strictly decreasing to 0 as t → ∞. As the map is strictly decreasing we may interpolate linearly in each interval [t, t + 1], t ∈ N, to obtain a strictly decreasing map defined on all of [0, ∞). With slight abuse of notation we continue to call this map α^{·/M}(r). With this convention, the function (r, t) ↦ α^{t/M}(r) is in KL. Also, with the decomposition t = kM + j, j ∈ {0, ..., M − 1}, we obtain that

α^{t/M} ∘ α^{−1} = α^k ∘ α^{−(M−j)/M} ≥ α^k.

From the last two inequalities we can conclude

ω(x(t, ξ, u_{q̂})) ≤ γ(ω(x(kM, ξ, u_{q̂})))
≤ γ ∘ α_1^{−1}(V(x(kM, ξ, u_{q̂})))
≤ γ ∘ α_1^{−1} ∘ α^k(V(ξ))
≤ γ ∘ α_1^{−1} ∘ α^k ∘ α_2(ω(ξ))
≤ γ ∘ α_1^{−1} ∘ α^{t/M} ∘ α^{−1} ∘ α_2(ω(ξ)) =: β(ω(ξ), t).

It is easy to see that β ∈ KL, as α^{·/M}(·) is. See also [1, Lemma 4.2] for a discussion of the necessary details.

We note that one has to make some standard convexity assumptions on the dynamics g to guarantee that OCP-1 is numerically solvable via existing algorithms. We emphasize that OCP-1 needs no knowledge of an admissible control q. The difficulty in the computation of u_{q̂} via OCP-1 is, however, the need for the knowledge of a fs-CLF beforehand and the choice of a suitable time step. As discussed in Section 3.2, a fs-CLF candidate can be chosen as the corresponding measurement function, for which only the time step M remains to be determined.

An obvious drawback of the control scheme proposed by Proposition 13 is that it only communicates with the sensor every M time steps.
Hence, the control loop is closed less often than for a classic closed-loop control, which may make the system less robust with respect to perturbations. As shown in [10, 11], a remedy to this problem is to re-compute the remaining part of the optimal control sequence at each time instant. This amounts to solving an optimal control problem with shortened horizon.

Problem 14.
Consider system (1). Let a measurement function ω, M ∈ N* and a fs-CLF V: R^n → R_{≥0} with associated decay function α ∈ K∞, α < id, be given. Furthermore, let j ∈ {1, ..., M}. For a given initial value ξ̃ ∈ X consider a control sequence ũ = (ũ_0, ..., ũ_{M−j−1}) satisfying x(M − j, ξ̃, ũ) ∈ X. Define x(0) = ξ := x(M − j, ξ̃, ũ). Compute u*_j = (u*_j(0), ..., u*_j(j − 1)) as the optimal solution of the following optimal control problem:

min_{u_j = (u_j(0), ..., u_j(j−1))} Σ_{i=0}^{j−1} V(x(i, ξ, u_j))
s.t. for all ℓ ∈ {0, ..., j − 1}:
  x(ℓ + 1) = g(x(ℓ), u_j(ℓ)),
  u_j(ℓ) ∈ U,
  g(x(ℓ), u_j(ℓ)) ∈ X,
  V(x(j, ξ, u_j)) ≤ α(V(ξ̃)).    (OCP-2_j)

Note that feasibility of Problem 14 depends, among others, on the initial control sequence ũ. However, it is not hard to see that if we consider a control sequence û = (û_0, ..., û_{M−1}) solving Problem 10, then for any j = 1, ..., M and initial control sequence ũ := (û_0, ..., û_{M−j−1}), a solution of Problem 14 is given by u_j = (û_{M−j}, ..., û_{M−1}). The idea is to iteratively solve Problem 14 and only to apply the first control value, shrinking the horizon by one. The algorithm for such a control strategy is formalized as follows.

Algorithm 15.
At each time step t = kM + j, k ∈ N, j = 0, ..., M − 1:

1. Measure the state x(t) ∈ X of system (1).
2. Set ξ := x(t), j = j(t), solve Problem 14 and denote the optimal control sequence satisfying (OCP-2_j) by u*_j.
3. Set the control value to

u(t) = u*_j(0) (18)

and apply it to system (1) at time t = kM + j.
4. Go to Step 1.
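The loop structure above can be sketched without any numerical solver: for the scalar toy plant below (our own placeholder, not the paper's example), the minimizer of each shrinking-horizon problem is available in closed form as a dead-beat move, which trivially satisfies any contraction constraint. Contrasting the shrinking-horizon scheme with the multi-step scheme of Algorithm 11 under a constant disturbance shows the mitigation that re-optimization buys.

```python
# Scalar toy plant x+ = x + u; the plant, when the input is applied,
# is additionally hit by a disturbance d(t) unknown to the optimizer.

def solve_ocp2(xi, horizon):
    """Stand-in optimizer: for x+ = x + u and running cost V = x^2 the
    optimal sequence is dead-beat, u = (-xi, 0, ..., 0)."""
    return [-xi] + [0.0] * (horizon - 1)

def mpc_shrinking(x0, M, blocks, disturbance):
    """Algorithm 15: re-optimize on the shrinking horizon M - j at every
    step, apply only the first input."""
    x, t = x0, 0
    for _ in range(blocks):
        for j in range(M):
            u = solve_ocp2(x, M - j)[0]
            x = x + u + disturbance(t)
            t += 1
    return x

def mpc_multistep(x0, M, blocks, disturbance):
    """Algorithm 11: optimize once per block and apply the whole
    length-M sequence open-loop."""
    x, t = x0, 0
    for _ in range(blocks):
        u_seq = solve_ocp2(x, M)
        for j in range(M):
            x = x + u_seq[j] + disturbance(t)
            t += 1
    return x

x_shrink = mpc_shrinking(1.0, M=3, blocks=2, disturbance=lambda t: 0.1)
x_multi = mpc_multistep(1.0, M=3, blocks=2, disturbance=lambda t: 0.1)
```

With re-optimization the deviation stays at one disturbance step (0.1), whereas the open-loop blocks accumulate it over the block length (0.3), mirroring the robustness discussion around [10, 11].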
Remark 16.
We note that by the optimality principle [7, Corollary 3.16] the solutions to Algorithm 11 and Algorithm 15 coincide in the absence of perturbations.
To illustrate the two proposed algorithms, we give an example.
Example 17.
Here we consider an illustrative numerical example for which we compare the two different MPC approaches. Since both algorithms produce identical results in the case without perturbations, we compare them for the situation in which the controller is derived by optimizing over the nominal, i.e., unperturbed, system but is then applied to a perturbed system. We consider the nominal system described by

x_1(t + 1) = x_1(t) + x_2(t),
x_2(t + 1) = x_2(t) + x_3(t),
x_3(t + 1) = x_3(t) + u(t), (19)

and the corresponding perturbed system

x_1(t + 1) = x_1(t) + x_2(t) + d(t),
x_2(t + 1) = x_2(t) + x_3(t),
x_3(t + 1) = x_3(t) + u(t), (20)

with an additive perturbation d(t). Note that the nominal system (19) is open-loop unstable. Motivated by the converse Lyapunov function result in Theorem 9, we start by considering the candidate fs-CLF V(x) = x^⊤ P x with

P = [ 1 0.25 0 ; 0.25 1 0.25 ; 0 0.25 1 ],

which is obviously of the form (10). This choice of the matrix P contains cross terms between the states. It is easy to check that the function V thus defined is an M-step Lyapunov function for M = 3. However, in order to obtain more pronounced differences between Algorithms 11 and 15, we used M = 6 in the simulations. Moreover, we used a linear decay function α(r) = c r with a fixed c ∈ (0, 1) in both Problem 10 and 14, and all simulations were performed with the same fixed initial condition ξ. Figure 2 illustrates the state trajectories corresponding to the nominal case for Algorithm 11.

Figure 2: State trajectories of the nominal system (19), x_1 (black ◦), x_2 (red ×), x_3 (blue □), with control input computed via Algorithm 11.

The case in which the control sequence computed by Algorithm 11 is applied to the perturbed system (20) is depicted in Figure 3. One clearly sees that the x_1-component, in which the perturbation enters in (20), is more strongly affected by the perturbation than the other components of the solution. Finally, Figure 4 illustrates the state trajectories associated with the shrinking horizon strategy with re-optimization, i.e., Algorithm 15 applied to the perturbed system (20). It may be observed that the re-optimization on shrinking horizons is able to mitigate the effect of the perturbation, as the maximal deviation of the x_1-component from the desired equilibrium x = 0 after the transient phase is reduced by about 37%, from 0.615 to 0.387.
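The open-loop instability claim for (19) can be checked numerically, assuming the reading of (19) as the linear integrator chain encoded by the matrix below (an assumption on our part). All eigenvalues of A lie at 1 with a single Jordan block, so with u = 0 the state exhibits polynomial growth.

```python
import numpy as np

# System (19) read as x(t+1) = A x(t) + B u(t) with B = (0, 0, 1)^T;
# here we iterate the uncontrolled dynamics u = 0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

eigs = np.linalg.eigvals(A)       # all eigenvalues equal 1 (Jordan block)

x = np.array([0.0, 0.0, 1.0])
for _ in range(20):
    x = A @ x                     # open loop: |x| grows like t^2 / 2
```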
Figure 3: State trajectories of the perturbed system (20): x_1 (black ○), x_2 (red ×), x_3 (blue □), with control input computed via Algorithm 11.

Figure 4: State trajectories of the perturbed system (20): x_1 (black ○), x_2 (red ×), x_3 (blue □), with control input computed via Algorithm 15.

The shrinking-horizon method is a rather unusual way of obtaining a feedback law via optimization-based techniques. More commonly, one would use a classic MPC approach, in which the optimization is performed at every time step over a fixed horizon length N and always the first element of the resulting control sequence is implemented. In this section we show that this approach can also be applied using fs-CLFs. To this end, we consider the following optimal control problem.

Problem 18.
Consider system (1) and let M, N ∈ N∗. Let ω be a measurement function and let V be an fs-CLF with associated decay function α ∈ K∞, α < id. Also, let x(0) =: ξ ∈ X. Compute u∗_N = (u∗_0(ξ), …, u∗_{N−1}(ξ)) as the optimal solution of the following optimal control problem:

min_{u_N} ∑_{i=0}^{N−1} V(x(i, ξ, u_N))
s.t. for all j ∈ {0, …, N−1}:
     x(j+1) = g(x(j), u_j),
     u_j ∈ U,
     g(x(j), u_j) ∈ X.        (OCP-3)

Here we make use of the feedback signal at every time step. To this end, one solves Problem 18 at every single time step, applies the first element of the corresponding optimal control sequence u∗_N to the system, and then solves (OCP-3) again. This procedure is summarized in the following algorithm.

Algorithm 19. At each time step t ∈ N:
1. Measure the state x(t) ∈ X of system (1).
2. Set ξ := x(t), solve Problem 18, and denote the optimal control sequence satisfying (OCP-3) by u∗_N.
3. Define the MPC-feedback value q̂_MPC by

   q̂_MPC(ξ) := u∗_0(ξ)        (21)

   and apply it to system (1).
4. Go to Step 1.
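The receding-horizon loop of Algorithm 19 can be sketched in a few lines. The snippet below replaces the numerical OCP solver by exhaustive search over a small discretized input set; the scalar toy system x⁺ = 2x + u, the input grid U, and the stage cost V(x) = x² are illustrative assumptions, not the paper's example.

```python
# Sketch of Algorithm 19: at each time step, solve (OCP-3) for the
# current state and apply only the first element of the optimizer.
from itertools import product

def g(x, u):       # toy unstable scalar dynamics (assumption)
    return 2.0 * x + u

def V(x):          # stage cost: a simple CLF candidate (assumption)
    return x * x

U = [-2.0, -1.0, 0.0, 1.0, 2.0]   # discretized input set
N = 3                             # optimization horizon

def solve_ocp3(xi):
    """Brute-force (OCP-3): minimize sum_{i=0}^{N-1} V(x(i, xi, u_N))."""
    best_cost, best_seq = float("inf"), None
    for seq in product(U, repeat=N):
        x, cost = xi, 0.0
        for u in seq:
            cost += V(x)
            x = g(x, u)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

x = 1.0
for t in range(5):                # closed loop: Steps 1-4 of Algorithm 19
    u_star = solve_ocp3(x)        # Step 2: solve Problem 18
    x = g(x, u_star[0])           # Step 3: apply the first element

print(abs(x) < 1e-9)              # -> True: the origin is reached
```

For the real, constrained nonlinear problem one would of course call a numerical OCP solver in place of the exhaustive search; only the outer loop structure is the point here.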
The solution of the resulting MPC closed-loop system starting from some initial value ξ and with optimization horizon N is denoted by x_MPC(N)(·, ξ). We denote the optimal value function related to Problem 18 by

V_N(ξ) := ∑_{i=0}^{N−1} V(x(i, ξ, u∗_N)).        (22)

In order to analyze the resulting MPC closed-loop system, we make use of the following result.

Definition 20. We say that the MPC scheme described in Algorithm 19 is semiglobally practically asymptotically ω-stabilizing with respect to the optimization horizon N in A := ω^{−1}(0) if there exists β ∈ KL such that the following property holds: for each δ > 0 and ∆ > δ there exists N_{δ,∆} ∈ N∗ such that for all optimization horizons N ≥ N_{δ,∆} and all ξ ∈ R^n with ω(ξ) < ∆ the closed-loop solutions x_MPC(N)(·, ξ) satisfy

ω(x_MPC(N)(t, ξ)) ≤ max{β(ω(ξ), t), δ}  for all t ∈ N.

Proposition 21.
Let ω : R^n → R_{≥0} be a measurement function and let V be an fs-CLF. Assume that there is a K∞-function σ such that the optimal value function V_N from (22) satisfies

V_N(ξ) ≤ σ(V(ξ))  for all ξ ∈ X and all N ∈ N∗.

Then the MPC scheme obtained from Algorithm 19 is semiglobally practically asymptotically ω-stabilizing in A := ω^{−1}(0) for system (1) with respect to the optimization horizon N. If, moreover, σ is a linear function, i.e., σ(r) = γr for some γ > 0, then the resulting MPC closed loop is asymptotically ω-stable in A := ω^{−1}(0) for all

N > 2 + ln(γ−1) / (ln γ − ln(γ−1)).

Proof. The first statement is proved by following similar arguments as those in [7, Theorem 6.37]. For the second statement, see [7, Corollary 6.21 and Remark 6.22]. We note that Theorem 6.37 and Corollary 6.21 in [7] consider stabilization at an equilibrium point. However, it is not hard to see that with the obvious modification of the arguments in these references we obtain

V_N(x_MPC(N)(t, ξ)) ≤ max{β̃(V_N(ξ), t), δ̃}

for all ξ with V_N(ξ) ≤ ∆̃, where ∆̃ > 0, β̃ ∈ KL, and δ̃ ≥ 0 satisfy ∆̃ → ∞ and δ̃ → 0 as N → ∞. Moreover, the inequality holds for arbitrarily large ∆̃ and with δ̃ = 0 if σ is linear. Now, the inequalities

V(ξ) ≤ V_N(ξ) ≤ σ(V(ξ)),

which follow from the definition of V_N and the assumption of the proposition, imply that V_N is a Lyapunov function for the closed loop, which proves (practical) asymptotic stability. ∎

In order to check whether the optimal value function (22) satisfies the conditions of Proposition 21, we make the following observation: from (7) and (3) it follows that there exists a K∞-function κ̂ such that for each admissible finite-step feedback control law q : X → U^M the inequality

V(g^i(ξ, q_1(ξ), …, q_i(ξ))) ≤ κ̂(V(ξ))        (23)

holds for all i = 1, …, M−1. This fact is easily verified for κ̂(r) = max_{i=1,…,M−1} ᾱ ∘ κ_i ∘ α̲^{−1}(r). Now we give the main result of this section.

Theorem 22.
Consider system (1) and let M, N ∈ N∗. Let ω : R^n → R_{≥0} be a measurement function and let V be an fs-CLF for the step size M. Then the following statements hold.

(i) If in (8) α(s) = cs with c ∈ [0, 1) and in (23) κ̂(r) = dr with d > 0, then the MPC closed loop is asymptotically ω-stable with respect to the optimization horizon N in A := ω^{−1}(0) for all

N > 2 + ln(γ−1) / (ln γ − ln(γ−1))  with γ = Md / (1−c).

(ii) If in (8) α(s) = cs with c ∈ [0, 1) and κ̂ in (23) satisfies κ̂(r) ≤ q max{r^a, r^b} for constants a, b, q > 0, then the MPC scheme is semiglobally practically asymptotically ω-stabilizing with respect to the optimization horizon N in A.

(iii) There exists ρ ∈ K∞ such that if we replace V by Ṽ = ρ(V) in Problem 18, then the MPC scheme is semiglobally practically asymptotically ω-stabilizing with respect to the optimization horizon N in A.

Proof. (i) Iterating (8) yields the existence of a control function u satisfying V(x(kM, ξ, u)) ≤ c^k V(ξ) and V(x(kM+j, ξ, u)) ≤ d V(x(kM, ξ, u)) for all k ∈ N and j = 0, …, M−1. Together this yields

V(x(kM+j, ξ, u)) ≤ c^k d V(ξ),

which implies, for K ∈ N such that KM ≥ N,

V_N(ξ) ≤ ∑_{i=0}^{N−1} V(x(i, ξ, u)) ≤ ∑_{k=0}^{K−1} ∑_{j=0}^{M−1} V(x(kM+j, ξ, u)) ≤ ∑_{k=0}^{K−1} M c^k d V(ξ) ≤ (Md / (1−c)) V(ξ).

Now the second part of Proposition 21 with σ(r) = (Md / (1−c)) r yields the claim.

(ii) Iterating (8) as in (i), we obtain the inequality

V(x(kM+j, ξ, u)) ≤ q max{(c^k V(ξ))^a, (c^k V(ξ))^b} ≤ q max{c^a, c^b}^k max{V(ξ)^a, V(ξ)^b}.

Abbreviating ĉ = max{c^a, c^b} ∈ [0, 1), the same computation as in (i) yields

V_N(ξ) ≤ ∑_{k=0}^{K−1} M q ĉ^k max{V(ξ)^a, V(ξ)^b} ≤ (Mq / (1−ĉ)) max{V(ξ)^a, V(ξ)^b}.

This implies that the assumptions of the first part of Proposition 21 are satisfied with σ(r) = (Mq / (1−ĉ)) max{r^a, r^b}, and the claim follows.

(iii) Recall κ̂ from (23). From (8) it follows that

κ̂(V(x(M, ξ, u_q))) ≤ κ̂ ∘ α(V(ξ)) = (κ̂ ∘ α ∘ κ̂^{−1})(κ̂(V(ξ))) =: µ(κ̂(V(ξ))).

Since α < id it follows that µ < id. Hence, applying Proposition 3.2 from [27] to V̂ = κ̂(V) implies that there exist ρ ∈ K∞ and λ ∈ (0, 1) such that W := ρ(V̂) satisfies W(x(M, ξ, u_q)) ≤ λ W(ξ). Iterating this inequality yields the existence of u with

W(x(kM, ξ, u)) ≤ λ^k W(ξ)  and  V(x(kM+j, ξ, u)) ≤ κ̂(V(x(kM, ξ, u))).

For Ṽ = ρ(V) this implies

Ṽ(x(kM+j, ξ, u)) = ρ(V(x(kM+j, ξ, u))) ≤ ρ ∘ κ̂(V(x(kM, ξ, u))) = W(x(kM, ξ, u)) ≤ λ^k W(ξ) = λ^k (ρ ∘ κ̂ ∘ ρ^{−1})(Ṽ(ξ)) =: λ^k σ(Ṽ(ξ))

with σ := ρ ∘ κ̂ ∘ ρ^{−1} ∈ K∞. Hence,

V_N(ξ) ≤ ∑_{k=0}^{K−1} ∑_{j=0}^{M−1} Ṽ(x(kM+j, ξ, u)) ≤ ∑_{k=0}^{K−1} M λ^k σ(Ṽ(ξ)) ≤ (M / (1−λ)) σ(Ṽ(ξ)).

Hence, the assumptions of the first part of Proposition 21 are satisfied for Ṽ in place of V and with r ↦ M σ(r) / (1−λ) in place of σ. Thus, Proposition 21 yields the assertion. ∎

Example 23.
We illustrate the performance of the classic MPC approach again for the nominal and perturbed systems (19) and (20). We use the same initial condition ξ as in Example 17, the fs-CLF V(x) = x^T P x from Example 17 as stage cost, and the optimization horizon N = 6. Figure 5 shows the resulting state trajectory when the control computed by Algorithm 19 is applied to the perturbed system (20). The effect of the perturbation is comparable to that of the updated shrinking-horizon MPC algorithm in Figure 4; after the transient phase, the maximal deviation of x_1 from the desired equilibrium is 0.363 here, compared to 0.387 for the shrinking-horizon algorithm. However, one observes that the trajectories in Figure 5 appear smoother than those in Figure 4. Further numerical tests have revealed that the loss of smoothness in Figure 4 is mainly due to the contractive constraints and not due to the shrinking horizon. Hence, this is an advantage of MPC without contraction constraints. However, we emphasize that we have not rigorously checked the assumptions of Theorem 22 (which are usually quite conservative, anyway), but rather determined the optimization horizon N by trial and error. Hence, in contrast to Algorithms 11 and 15, there is no formal guarantee of asymptotic stability here.

Figure 5: State trajectories of the perturbed system (20): x_1 (black ○), x_2 (red ×), x_3 (blue □), with control input computed via Algorithm 19.
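To get a feeling for how conservative the horizon condition of Theorem 22(i) can be, one can evaluate the smallest admissible integer horizon directly. The sketch below assumes the standard unconstrained-MPC estimate N > 2 + ln(γ−1)/(ln γ − ln(γ−1)) from [7] with γ = Md/(1−c); the sample values of M, c, and d are illustrative assumptions, not parameters identified for the example above.

```python
# Sketch: smallest integer horizon N certified by the bound
#     N > 2 + ln(g - 1) / (ln g - ln(g - 1)),   g = M*d / (1 - c).
import math

def min_horizon(g):
    """Smallest integer N strictly greater than the stability bound."""
    if g <= 1.0:
        return 1   # bound degenerates; any horizon works
    bound = 2.0 + math.log(g - 1.0) / (math.log(g) - math.log(g - 1.0))
    return math.floor(bound) + 1

M, c, d = 6, 0.5, 2.0       # illustrative parameters (assumptions)
g = M * d / (1.0 - c)       # g = 24

print(min_horizon(2.0))     # -> 3 (the bound equals 2 for g = 2)
print(min_horizon(g))       # much larger than the N = 6 used above
```

For γ = 24 the certified horizon is far longer than the N = 6 that worked in the simulations, which is consistent with the remark above that the assumptions of Theorem 22 are usually quite conservative.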
5. Conclusions and outlook
We have exploited the notion of fs-CLFs to develop control design approaches for discrete-time systems. To this end, the controller design problem has been reformulated as an optimization problem. Motivated by state-of-the-art applications, we have provided three different MPC schemes via fs-CLFs: i) contractive multi-step MPC, ii) contractive updated multi-step MPC, and iii) classic MPC without stabilizing terminal constraints. We have illustrated the MPC schemes with an example.

The results of the paper can be extended in several directions. fs-LFs have been leveraged to develop nonconservative small-gain and dissipativity conditions for the stability analysis of large-scale systems [28, 3, 4, 6]. We aim to fuse the results of the current paper with these nonconservative small-gain and dissipativity conditions to develop distributed MPC schemes. Applications of such results to smart grids, smart cities, and mobile robots are expected. The analysis in this work can also be generalized to systems subject to disturbances.
References

[1] H. K. Khalil, Nonlinear Systems, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[2] R. Geiselhart, R. H. Gielen, M. Lazar, and F. R. Wirth, "An alternative converse Lyapunov theorem for discrete-time systems," Syst. Control Lett., vol. 70, pp. 49–59, 2014.
[3] R. Geiselhart, M. Lazar, and F. R. Wirth, "A relaxed small-gain theorem for interconnected discrete-time systems," IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 812–817, 2015.
[4] R. H. Gielen and M. Lazar, "On stability analysis methods for large-scale discrete-time systems," Automatica, vol. 55, pp. 66–72, 2015.
[5] R. Geiselhart and F. R. Wirth, "Relaxed ISS small-gain theorems for discrete-time systems," SIAM J. Control Optim., vol. 54, no. 2, pp. 423–449, 2016.
[6] N. Noroozi, R. Geiselhart, L. Grüne, B. S. Rüffer, and F. R. Wirth, "Nonconservative discrete-time ISS small-gain conditions for closed sets," IEEE Trans. Autom. Control, vol. 63, no. 5, pp. 1231–1242, 2018.
[7] L. Grüne and J. Pannek, Nonlinear Model Predictive Control: Theory and Algorithms, 2nd ed. London: Springer, 2017.
[8] P. Varutti and R. Findeisen, "Compensating network delays and information loss by predictive control methods," in Eur. Control Conf., Budapest, Aug 2009, pp. 1722–1727.
[9] L. Grüne, J. Pannek, and K. Worthmann, "A prediction based control scheme for networked systems with delays and packet dropouts," in 48th IEEE Conf. Decision Control, Shanghai, Dec 2009, pp. 537–542.
[10] L. Grüne and V. G. Palma, "Robustness of performance and stability for multistep and updated multistep MPC schemes," Discrete Continuous Dyn. Syst. - A, vol. 35, p. 4385, 2015.
[11] L. Grüne and M. Sigurani, "A Lyapunov based nonlinear small-gain theorem for discontinuous discrete-time large-scale systems," in Proc. 21st Int. Symp. Mathematical Theory Networks Syst., 2014, pp. 439–446.
[12] M. Alamir, "Contraction-based nonlinear model predictive control formulation without stability-related terminal constraints," Automatica, vol. 75, pp. 288–292, 2017.
[13] S. L. de Oliveira Kothare and M. Morari, "Contractive model predictive control for constrained nonlinear systems," IEEE Trans. Autom. Control, vol. 45, no. 6, pp. 1053–1071, 2000.
[14] J. Wan, "Computationally reliable approaches of contractive MPC for discrete-time systems," Ph.D. dissertation, University of Girona, Girona, 2007.
[15] T. H. Yang and E. Polak, "Moving horizon control of nonlinear systems with input saturation, disturbances and plant uncertainty," Int. J. Control, vol. 58, no. 4, pp. 875–903, 1993.
[16] K. Worthmann, "Stability analysis of unconstrained receding horizon control schemes," Ph.D. dissertation, Universität Bayreuth, 2011.
[17] J. Hanema, M. Lazar, and R. Tóth, "Stabilizing tube-based model predictive control: Terminal set and cost construction for LPV systems," Automatica, vol. 85, pp. 137–144, 2017.
[18] M. Lazar and V. Spinu, "Finite-step terminal ingredients for stabilizing model predictive control," in 5th IFAC Conf. Nonlinear Model Predictive Control, Seville, Spain, Sep 2015, pp. 9–15.
[19] L. Grüne, "Analysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems," SIAM J. Control Optim., vol. 48, pp. 1206–1228, 2009.
[20] L. Grüne, J. Pannek, M. Seehafer, and K. Worthmann, "Analysis of unconstrained nonlinear MPC schemes with time varying control horizon," SIAM J. Control Optim., vol. 48, pp. 4938–4962, 2010.
[21] G. Grimm, M. J. Messina, S. E. Tuna, and A. R. Teel, "Model predictive control: for want of a local control Lyapunov function, all is not lost," IEEE Trans. Autom. Control, vol. 50, no. 5, pp. 546–558, 2005.
[22] S. E. Tuna, M. J. Messina, and A. R. Teel, "Shorter horizons for model predictive control," in Amer. Control Conf., Minneapolis, 2006, pp. 863–868.
[23] L. Grüne and A. Rantzer, "On the infinite horizon performance of receding horizon controllers," IEEE Trans. Autom. Control, vol. 53, pp. 2100–2111, 2008.
[24] C. M. Kellett, "A compendium of comparison function results," Math. Control Signals Syst., vol. 26, no. 3, pp. 339–374, 2014.
[25] R. Geiselhart and N. Noroozi, "Equivalent types of ISS Lyapunov functions for discontinuous discrete-time systems," Automatica, vol. 84, pp. 227–231, 2017.
[26] R. Geiselhart and F. Wirth, "Solving iterative functional equations for a class of piecewise linear K∞-functions," J. Math. Anal. Appl., vol. 411, no. 2, pp. 652–664, 2014.
[27] L. Grüne and C. M. Kellett, "ISS-Lyapunov functions for discontinuous discrete-time systems," IEEE Trans. Autom. Control, vol. 59, no. 11, pp. 3098–3103, 2014.
[28] N. Noroozi and B. S. Rüffer, "Non-conservative dissipativity and small-gain theory for ISS networks," in 53rd IEEE Conf. Decision Control, Los Angeles, 2014, pp. 3131–3136.