Approximation of Continuous-Time Infinite-Horizon Optimal Control Problems Arising in Model Predictive Control - Supplementary Notes
Michael Muehlebach and Raffaello D'Andrea∗

Abstract
These notes present preliminary results regarding two different approximations of linear infinite-horizon optimal control problems arising in model predictive control. Input and state trajectories are parametrized with basis functions and a finite dimensional representation of the dynamics is obtained via a Galerkin approach. It is shown that the two approximations provide lower, respectively upper bounds on the optimal cost of the underlying infinite dimensional optimal control problem. These bounds get tighter as the number of basis functions is increased. In addition, conditions guaranteeing convergence to the cost of the underlying problem are provided.
Introduction

Model predictive control (MPC) takes input and state constraints fully into account and is therefore a promising control strategy with various applications. The standard MPC approach relies on discrete dynamics and a finite prediction horizon, which leads inevitably to issues related to closed-loop stability. In [5], an approximation of the underlying infinite dimensional infinite-horizon optimal control problem has been proposed, which is based on a parametrization of input and state trajectories with basis functions. The infinite prediction horizon is maintained, and therefore closed-loop stability and recursive feasibility arise naturally from the problem formulation. Moreover, it is conjectured that the underlying infinite dimensional optimization problem is well-approximated even with a low basis function complexity.

Herein, we compare the approach from [5] to a different finite dimensional approximation. We analyze both with respect to convergence of the optimal costs as the number of basis functions is increased. In particular, the optimal cost of the approximation given in [5] decreases monotonically and approaches the cost of the underlying infinite dimensional problem from above. It is shown that the corresponding optimal trajectories are guaranteed to converge and that the second approximation approaches the optimal cost of the infinite dimensional problem from below. In addition, we will establish conditions guaranteeing convergence of both approximations to the cost of the underlying infinite dimensional problem.

This report focuses on the technical proofs and complements [4], where the underlying ideas are discussed in detail and a numerical example is provided.

∗ Michael Muehlebach and Raffaello D'Andrea are with the Institute for Dynamic Systems and Control, ETH Zurich. The contact author is Michael Muehlebach, [email protected]. This work was supported by ETH-Grant ETH-48 15-1.

Problem Formulation
We present and analyze two approximations of the following optimal control problem,
\[
\begin{aligned}
J_\infty := \inf \quad & \tfrac{1}{2}\|x\|^2 + \tfrac{1}{2}\|u\|^2 \\
\text{s.t.} \quad & \dot x(t) = A x(t) + B u(t), \quad x(0) = x_0, \\
& x(t) \in X, \quad u(t) \in U, \quad \forall t \in [0,\infty), \\
& x \in L_n^2, \quad u \in L_m^2, \quad \dot x \in L_n^2,
\end{aligned} \tag{1}
\]
where $X$ and $U$ are closed and convex subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, containing $0$; the space of square integrable functions mapping from $[0,\infty)$ to $\mathbb{R}^q$ is denoted by $L_q^2$, where $q$ is a positive integer; and the $L_q^2$-norm is defined as
\[
L_q^2 \to \mathbb{R}, \quad x \mapsto \|x\| := \left( \int_0^\infty x^T x \,\mathrm{d}t \right)^{1/2}, \tag{2}
\]
where $\mathrm{d}t$ denotes the Lebesgue measure. We assume that $J_\infty$ is finite and that the corresponding minimizers, $x$ and $u$, are unique.

Input and state trajectories will be approximated as a linear combination of basis functions $\tau_i \in L_1^2$, $i = 1, 2, \ldots$, that is,
\[
\tilde x^s(t, \eta_x) := (I_n \otimes \tau^s(t))^T \eta_x, \qquad \tilde u^s(t, \eta_u) := (I_m \otimes \tau^s(t))^T \eta_u, \tag{3}
\]
where $\otimes$ denotes the Kronecker product, $\eta_x \in \mathbb{R}^{ns}$ and $\eta_u \in \mathbb{R}^{ms}$ are the parameter vectors, and $\tau^s(t) := (\tau_1(t), \tau_2(t), \ldots, \tau_s(t))^T \in \mathbb{R}^s$. In order to simplify notation we will omit the superscript $s$ in $\tau^s$, $\tilde x^s$, and $\tilde u^s$, and simply write $\tau$, $\tilde x$, and $\tilde u$ whenever the number of basis functions is clear from the context. Similarly, the dependence of $\tilde x$ and $\tilde u$ on $\eta_x$ and $\eta_u$ is frequently omitted. Without loss of generality we assume that the basis functions are orthonormal. Note that orthonormal basis functions can be constructed with the Gram-Schmidt procedure, [3, p. 50].

As motivated in [5], the following additional assumptions on the basis functions are made:

A1) They are linearly independent.

A2) They fulfill $\dot\tau(t) = M \tau(t)$ for all $t \in [0,\infty)$, for some matrix $M \in \mathbb{R}^{s \times s}$. The eigenvalues of $M$ have strictly negative real parts.

In [5], the following finite dimensional approximation of the original problem (1) is introduced,
\[
\begin{aligned}
J_s := \inf \quad & \tfrac{1}{2}\|\tilde x\|^2 + \tfrac{1}{2}\|\tilde u\|^2 \\
\text{s.t.} \quad & \int_0^\infty (I_n \otimes \tau) \left( A \tilde x + B \tilde u - \dot{\tilde x} \right) \mathrm{d}t = 0, \quad \tilde x(0) - x_0 = 0, \\
& \eta_x \in X_s, \quad \eta_u \in U_s.
\end{aligned} \tag{4}
\]
Note that the subscript $s$ refers to the number of basis functions used. More precisely, the optimization problem with optimal cost $J_s$ corresponds to the case where input and state trajectories are spanned by the first $s$ basis functions. The trajectories $\tilde x$ and $\tilde u$, which satisfy the equality constraint, fulfill the equations of motion exactly, that is, $A \tilde x(t) + B \tilde u(t) = \dot{\tilde x}(t)$ for all $t \in [0,\infty)$ and $\tilde x(0) = x_0$, see [5]. This is needed to guarantee that the minimizers of (4) achieve the cost $J_s$ on the nominal system. We make the following assumptions on the sets $X_s$ and $U_s$ (the assumptions are listed for the state constraints $X$ and are analogous for the input constraints $U$):

B0) $X_s$ is closed and convex.

B1) $i_s(X_s) \subset X_{s+1}$.

B2) $\eta_x \in X_s$ implies $(I_n \otimes \tau(t))^T \eta_x \in X$ for all $t \in [0,\infty)$,

where the inclusion $i_s$, mapping from $\mathbb{R}^{ns}$ to $\mathbb{R}^{n(s+1)}$, is defined by
\[
\tilde x^s(t, \eta_x) = \tilde x^{s+1}(t, i_s(\eta_x)), \quad \forall t \in [0,\infty), \ \forall \eta_x \in \mathbb{R}^{ns}. \tag{5}
\]
Assumption B0) implies that the optimization problem (4) is convex, and that corresponding minimizers exist, provided that feasible trajectories exist. Assumption B1) is used to show that the cost $J_s$ is monotonically decreasing in $s$, whereas Assumption B2) implies that $J_s$ is bounded below by $J_\infty$, see Sec. 3. In the context of MPC, Assumption B2) guarantees recursive feasibility and closed-loop stability, [5].
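To make Assumptions A1) and A2) concrete, the following minimal sketch constructs a candidate basis numerically. It uses Laguerre-type functions, one common choice satisfying both assumptions; the decay rate `a`, the number of functions `s`, and the Laguerre choice itself are assumptions of this sketch, not prescribed by these notes (explicit basis constructions are discussed in [4] and [5]).

```python
# A minimal sketch of a basis satisfying A1) and A2): the Laguerre
# functions tau_i(t) = sqrt(2a) * exp(-a t) * L_{i-1}(2 a t) are
# orthonormal on [0, inf) and satisfy dtau/dt = M tau with M Hurwitz.
import numpy as np
from numpy.polynomial.laguerre import lagval
from scipy.integrate import quad

a, s = 1.0, 5  # decay rate and number of basis functions (assumed here)

def tau(t):
    """Evaluate tau(t) = (tau_1(t), ..., tau_s(t))^T."""
    return np.array([np.sqrt(2 * a) * np.exp(-a * t) *
                     lagval(2 * a * t, np.eye(s)[i]) for i in range(s)])

# M is lower triangular with diagonal -a and strictly lower entries -2a,
# so all eigenvalues equal -a < 0 (Assumption A2)).
M = a * np.eye(s) - 2 * a * np.tril(np.ones((s, s)))

# Orthonormality: int_0^inf tau_i tau_j dt = delta_ij (numerically).
G = np.array([[quad(lambda t: tau(t)[i] * tau(t)[j], 0, np.inf)[0]
               for j in range(s)] for i in range(s)])
assert np.allclose(G, np.eye(s), atol=1e-7)

# A2) checked by central finite differences: dtau/dt(t0) ~ M tau(t0).
h, t0 = 1e-6, 0.7
assert np.allclose((tau(t0 + h) - tau(t0 - h)) / (2 * h),
                   M @ tau(t0), atol=1e-4)
```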
In addition, the cost $J_s$ is achieved on the nominal system, as the resulting input and state trajectories respect input and state constraints and fulfill the dynamics exactly.

The following alternative approximation is introduced,
\[
\begin{aligned}
\tilde J_s := \inf \quad & \tfrac{1}{2}\|\tilde x\|^2 + \tfrac{1}{2}\|\tilde u\|^2 \\
\text{s.t.} \quad & \int_0^\infty (I_n \otimes \tau) \left( A \tilde x + B \tilde u - \dot{\tilde x} \right) \mathrm{d}t - (I_n \otimes \tau(0)) (\tilde x(0) - x_0) = 0, \\
& \eta_x \in \tilde X_s, \quad \eta_u \in \tilde U_s,
\end{aligned} \tag{6}
\]
whose purpose is to provide a monotonically increasing sequence $\tilde J_s$ bounding $J_\infty$ from below. To that end, the following assumptions on the sets $\tilde X_s$ and $\tilde U_s$ are made:

C0) $\tilde X_s$ is closed and convex.

C1) $\bar\pi_s(\tilde X_{s+1}) \subset \tilde X_s$.

C2) For each $x \in L_n^2$ with $x(t) \in X$ for all $t \in [0,\infty)$ it holds that $\pi_s(x) \in \tilde X_s$,

where the projections $\pi_s$ and $\bar\pi_s$ are defined as
\[
\pi_s : L_n^2 \to \mathbb{R}^{ns}, \quad x \mapsto \int_0^\infty (I_n \otimes \tau^s) x \,\mathrm{d}t, \tag{7}
\]
\[
\bar\pi_s : \mathbb{R}^{n(s+1)} \to \mathbb{R}^{ns}, \quad \eta_x \mapsto \pi_s\!\left( \tilde x^{s+1}(\cdot, \eta_x) \right). \tag{8}
\]
Assumption C0) ensures that the optimization problem (6) is convex, and that corresponding minimizers exist, provided that feasible trajectories exist. Assumption C1) is used to demonstrate that $\tilde J_s$ is monotonically increasing, whereas Assumption C2) implies that $\tilde J_s$ is bounded above by $J_\infty$, see Sec. 3. Examples fulfilling Assumptions B0)-B2) and C0)-C2) are provided in [4].

In the following we will analyze the two approximations (4) and (6) and prove the following result:
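Before turning to the analysis, the following sketch illustrates the sandwich $\tilde J_s \le J_\infty \le J_s$ numerically in the simplest setting $X = \mathbb{R}^n$, $U = \mathbb{R}^m$, where B0)-B2) and C0)-C2) hold trivially. Both approximations then reduce to minimum-norm problems with linear equality constraints; the constraint matrix for (6) is the finite-dimensional form derived in the proof of Prop. 3.4 below, and the one for (4) follows from orthonormality and A2). The scalar plant $(A, B, x_0)$ and the Laguerre basis from the previous sketch are illustrative assumptions.

```python
# A numerical sketch comparing (4) and (6) without inequality
# constraints (X = R^n, U = R^m). J_inf is computed from the algebraic
# Riccati equation for reference; expect J_tilde_s <= J_inf <= J_s.
import numpy as np
from scipy.linalg import solve_continuous_are

n, m, s, a = 1, 1, 8, 1.0
A, B, x0 = np.array([[0.5]]), np.array([[1.0]]), np.array([1.0])

M = a * np.eye(s) - 2 * a * np.tril(np.ones((s, s)))  # tau' = M tau
tau0 = np.sqrt(2 * a) * np.ones(s)                    # tau(0)
In, Is = np.eye(n), np.eye(s)

def min_norm_cost(G, h):
    """Cost 0.5*|z|^2 at the least-norm solution of G z = h."""
    z = np.linalg.pinv(G) @ h
    assert np.allclose(G @ z, h)  # feasibility for this choice of s
    return 0.5 * z @ z

# Approximation (4): Galerkin condition plus exact initial condition.
G4 = np.block([[np.kron(A, Is) - np.kron(In, M.T), np.kron(B, Is)],
               [np.kron(In, tau0.reshape(1, -1)), np.zeros((n, m * s))]])
J_s = min_norm_cost(G4, np.concatenate([np.zeros(n * s), x0]))

# Approximation (6): relaxed initial condition, cf. (14) below.
G6 = np.hstack([np.kron(A, Is) + np.kron(In, M), np.kron(B, Is)])
J_tilde_s = min_norm_cost(G6, -np.kron(In, tau0.reshape(-1, 1)) @ x0)

P = solve_continuous_are(A, B, np.eye(n), np.eye(m))
J_inf = 0.5 * x0 @ P @ x0
print(J_tilde_s, J_inf, J_s)  # expect J_tilde_s <= J_inf <= J_s
```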
Theorem 3.1
Let $N$ be such that $J_N$ is finite.

1) If Assumptions B0), B1), and B2) hold, then the sequence $J_s$ is monotonically decreasing for $s \ge N$, converges as $s \to \infty$, and is bounded below by $J_\infty$. The corresponding optimizers $\tilde x$ and $\tilde u$ converge strongly in $L_n^2$, respectively $L_m^2$, as $s \to \infty$.

2) If Assumptions C0), C1), and C2) are fulfilled, then $\tilde J_s$ is monotonically increasing for $s \ge 1$, converges as $s \to \infty$, and is bounded above by $J_\infty$. (The assumptions are again listed for the state constraints $X$ and are analogous for the input constraints $U$.)
We start by summarizing the results from [5], stating that the optimal cost of (4) is a monotonically decreasing sequence providing an upper bound on the optimal cost of (1).
Proposition 3.2
Let $N$ be such that $J_N$ is finite. If Assumptions B0) and B1) are fulfilled, then the sequence $J_s$ is monotonically decreasing for $s \ge N$ and converges as $s \to \infty$. If Assumptions B0) and B2) are fulfilled, $J_s$ is bounded below by $J_\infty$.

Proof
See [5].

The fact that $J_s$ converges can be used to demonstrate convergence of the optimizers $\tilde x_s$ and $\tilde u_s$, as well as the corresponding parameters $\eta_{x_s}$ and $\eta_{u_s}$. To this end, the parameter vectors $\eta_{x_s}$ and $\eta_{u_s}$ are interpreted as square summable sequences, i.e. as elements of $\ell^2$.

Proposition 3.3
Let $N$ be such that $J_N$ is finite and let Assumptions B0) and B1) be fulfilled. Then, the minimizers of (4) converge (strongly) in $\ell^2$ as $s \to \infty$, and the corresponding trajectories $\tilde x_s$ and $\tilde u_s$ converge (strongly) in $L_n^2$, respectively $L_m^2$.

Proof
We fix $s \ge N$ and denote the minimizers corresponding to $J_s$ by $\eta_{x_s}$, $\eta_{u_s}$, and the minimizers corresponding to $J_{s+1}$ by $\eta_{x_{s+1}}$, $\eta_{u_{s+1}}$, which we consider to be elements of $\ell^2$. The following observation can be made: the vectors $\eta_x := \lambda i_s(\eta_{x_s}) + (1-\lambda)\eta_{x_{s+1}}$ and $\eta_u := \lambda i_s(\eta_{u_s}) + (1-\lambda)\eta_{u_{s+1}}$ with $\lambda \in [0,1]$ are feasible candidates for the optimization problem (4) over $s+1$ basis functions. This is because $i_s(\eta_{x_s}) \in X_{s+1}$ and $i_s(\eta_{u_s}) \in U_{s+1}$ hold by Assumption B1), and $X_{s+1}$ and $U_{s+1}$ are convex by Assumption B0). Moreover, as the dynamics are fulfilled exactly, it follows for $\tilde x := (I_n \otimes \tau)^T \eta_x$ and $\tilde u := (I_m \otimes \tau)^T \eta_u$ that
\[
\tilde x(t) = \lambda \tilde x_s(t) + (1-\lambda)\tilde x_{s+1}(t) = e^{At} x_0 + \int_0^t e^{A(t-\hat t)} B \left( \lambda \tilde u_s(\hat t) + (1-\lambda)\tilde u_{s+1}(\hat t) \right) \mathrm{d}\hat t = e^{At} x_0 + \int_0^t e^{A(t-\hat t)} B \tilde u(\hat t) \,\mathrm{d}\hat t,
\]
where $\tilde x_s := (I_n \otimes \tau)^T \eta_{x_s}$, $\tilde u_s := (I_m \otimes \tau)^T \eta_{u_s}$, $\tilde x_{s+1} := (I_n \otimes \tau)^T \eta_{x_{s+1}}$, and $\tilde u_{s+1} := (I_m \otimes \tau)^T \eta_{u_{s+1}}$, which shows that the equality constraint is likewise fulfilled. Since $\eta_{x_{s+1}}$ and $\eta_{u_{s+1}}$ are the minimizers corresponding to $J_{s+1}$, we have that
\[
J_{s+1} \le \tfrac{1}{2}\|\tilde x\|^2 + \tfrac{1}{2}\|\tilde u\|^2, \tag{9}
\]
for all $\lambda \in [0,1]$. The objective function is quadratic, and hence strongly convex with respect to the $L_n^2$- and $L_m^2$-norm, which leads to
\[
J_{s+1} \le \lambda J_s + (1-\lambda) J_{s+1} - \tfrac{1}{2}\lambda(1-\lambda)\|\tilde x_s - \tilde x_{s+1}\|^2 - \tfrac{1}{2}\lambda(1-\lambda)\|\tilde u_s - \tilde u_{s+1}\|^2, \tag{10}
\]
for all $\lambda \in [0,1]$. We set $\lambda = 1/2$ and obtain
\[
\|\tilde x_s - \tilde x_{s+1}\|^2 + \|\tilde u_s - \tilde u_{s+1}\|^2 \le 4\,(J_s - J_{s+1}). \tag{11}
\]
According to Prop. 3.2, the sequence $J_s$ converges as $s \to \infty$. As a result, it follows that $(\tilde x_s, \tilde u_s) \in L_n^2 \times L_m^2$ is a Cauchy sequence. The space $L^2$ is a Banach space, [7, p. 67], and so is $L_n^2 \times L_m^2$. Consequently, $(\tilde x_s, \tilde u_s)$ converges strongly as $s \to \infty$, [8, p. 4]. The orthonormality of the basis functions implies by Bessel's inequality, [3, p. 51], that the parameters $\eta_{x_s}$ and $\eta_{u_s}$ form a Cauchy sequence in $\ell^2 \times \ell^2$. The square summable sequences form likewise a Banach space and therefore the parameter vectors $\eta_{x_s}$ and $\eta_{u_s}$ converge strongly in $\ell^2$ as $s \to \infty$.

(The set of square summable sequences is denoted by $\ell^2$. Notation is slightly abused, since $i_s$ is used to denote both the inclusion $\mathbb{R}^{ns} \to \mathbb{R}^{n(s+1)}$ and $\mathbb{R}^{ms} \to \mathbb{R}^{m(s+1)}$, which is defined in analogy to (5).)
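The step from (9) to (10) relies on the strong convexity of the quadratic objective; for $f(z) = \tfrac{1}{2}\|z\|^2$ the strong-convexity inequality in fact holds with equality, which the following tiny sketch checks on arbitrary illustrative vectors (the data is not taken from the notes).

```python
# Check of the strong-convexity identity behind (10): for the quadratic
# f(z) = 0.5*|z|^2,
#   f(l*a + (1-l)*b) = l*f(a) + (1-l)*f(b) - 0.5*l*(1-l)*|a - b|^2
# holds with equality (random illustrative vectors).
import numpy as np

rng = np.random.default_rng(0)
a, b, l = rng.standard_normal(7), rng.standard_normal(7), 0.3
f = lambda z: 0.5 * z @ z
lhs = f(l * a + (1 - l) * b)
rhs = l * f(a) + (1 - l) * f(b) - 0.5 * l * (1 - l) * (a - b) @ (a - b)
assert np.isclose(lhs, rhs)
```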
We establish results for the optimization problem (6), which are similar to the ones given by Prop. 3.2. More precisely, we will show that under Assumptions C0)-C2) the optimal cost of (6) bounds $J_\infty$ from below and is monotonically increasing in $s$.

Proposition 3.4
Let Assumptions C0) and C2) be fulfilled. Then $\tilde J_s \le J_\infty$ holds for all $s \ge 1$.

Proof
We will denote the minimizers of (1) by $x$ and $u$, which are both square integrable and fulfill $\dot x(t) = A x(t) + B u(t)$ for all $t \in [0,\infty)$. We define $\eta_x := \pi_s(x) \in \mathbb{R}^{ns}$, $\eta_u := \pi_s(u) \in \mathbb{R}^{ms}$ (where notation is slightly abused to denote both the projection from $L_n^2 \to \mathbb{R}^{ns}$ and the projection from $L_m^2 \to \mathbb{R}^{ms}$, defined in analogy to (7), by $\pi_s$). From Assumption C2) it follows that $\eta_x \in \tilde X_s$ and $\eta_u \in \tilde U_s$. We will argue that $\eta_x$ and $\eta_u$ fulfill the equality constraint in (6). To this end, we rewrite the equality constraint as
\[
\left( A \otimes I_s - I_n \otimes M^T - I_n \otimes (\tau(0)\tau(0)^T) \right) \eta_x + (B \otimes I_s)\eta_u + (I_n \otimes \tau(0)) x_0 = 0, \tag{12}
\]
where orthonormality of the basis functions, the properties of the Kronecker product, and Assumption A2) are used. We note further that the identity
\[
M = \int_0^\infty \dot\tau \tau^T \,\mathrm{d}t = -\tau(0)\tau(0)^T - \int_0^\infty \tau \dot\tau^T \,\mathrm{d}t = -\tau(0)\tau(0)^T - M^T, \tag{13}
\]
which follows from integration by parts, simplifies the previous equation to
\[
(A \otimes I_s + I_n \otimes M)\eta_x + (B \otimes I_s)\eta_u + (I_n \otimes \tau(0)) x_0 = 0. \tag{14}
\]
Moreover, it holds that
\[
\int_0^\infty (I_n \otimes \tau)\,\dot x \,\mathrm{d}t = -(I_n \otimes \tau(0)) x_0 - (I_n \otimes M) \int_0^\infty (I_n \otimes \tau)\, x \,\mathrm{d}t \tag{15}
\]
\[
= -(I_n \otimes \tau(0)) x_0 - (I_n \otimes M)\eta_x \tag{16}
\]
\[
= (A \otimes I_s)\eta_x + (B \otimes I_s)\eta_u, \tag{17}
\]
where integration by parts (1st step), the definition of the projection $\pi_s$ (2nd step), and the fact that $x$ and $u$ fulfill the (linear) equations of motion exactly (3rd step) have been used. Clearly, (16) and (17) are equivalent to (14), and therefore $\eta_x$ and $\eta_u$ are feasible candidates for (6). Bessel's inequality, [3, p. 51], implies that
\[
\tfrac{1}{2}|\eta_x|^2 + \tfrac{1}{2}|\eta_u|^2 = \tfrac{1}{2}|\pi_s(x)|^2 + \tfrac{1}{2}|\pi_s(u)|^2 \le \tfrac{1}{2}\|x\|^2 + \tfrac{1}{2}\|u\|^2, \tag{18}
\]
where the Euclidean norm is denoted by $|\cdot|$. Therefore $\eta_x$ and $\eta_u$ are feasible candidates achieving a cost that is no greater than $J_\infty$, and hence, $\tilde J_s \le J_\infty$ for all $s \ge 1$.

In order to establish that the sequence $\tilde J_s$ is monotonically increasing, we will work with the dual problem. It turns out that the finite dimensional representation of the adjoint equations is fulfilled exactly by (6). We will use this fact to construct feasible candidates for the optimization over $s+1$ basis functions.
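The integration-by-parts identity (13) can be checked directly for the Laguerre-type basis used in the earlier sketches, where both sides equal the constant matrix $-2a\,\mathbf{1}\mathbf{1}^T$; the basis choice is again an assumption of the sketch, not prescribed by the notes.

```python
# Check of (13), M + M^T = -tau(0) tau(0)^T, for the Laguerre-type
# basis of the earlier sketches (an assumed, illustrative choice).
import numpy as np

a, s = 1.0, 6
M = a * np.eye(s) - 2 * a * np.tril(np.ones((s, s)))  # tau' = M tau
tau0 = np.sqrt(2 * a) * np.ones(s)                    # tau(0)
assert np.allclose(M + M.T, -np.outer(tau0, tau0))    # both sides = -2a
```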
Let Assumptions C0), C1), and C2) be fulfilled. Then $\tilde J_s$ is monotonically increasing and bounded above by $J_\infty$ for all $s \ge 1$.

Proof
We first derive the dual of (6). We use Lagrange duality to rewrite (6) as
\[
\tilde J_s = \inf_{\eta_x, \eta_u} \sup_{\eta_p} \int_0^\infty \tfrac{1}{2}\tilde x^T \tilde x + \tfrac{1}{2}\tilde u^T \tilde u + \tilde p^T \left( A\tilde x + B\tilde u - \dot{\tilde x} \right) \mathrm{d}t - \tilde p(0)^T (\tilde x(0) - x_0), \quad \eta_x \in \tilde X_s, \ \eta_u \in \tilde U_s, \tag{19}
\]
where $\tilde p(t, \eta_p) := (I_n \otimes \tau(t))^T \eta_p$. From Assumptions C0) and C2) we can infer that $\tilde J_s \le J_\infty$ for all $s \ge 1$ by Prop. 3.4. The fact that $0 \le \tilde J_s \le J_\infty$ implies further that the infimum in (6) is attained, and that the set of minimizers is nonempty due to Assumption C0) ($\tilde X_s$ and $\tilde U_s$ are closed). According to [6, p. 503, Thm. 11.39] strong duality holds, and the infimum and supremum can be interchanged, which yields
\[
\tilde J_s = \sup_{\eta_p} \inf_{\eta_x, \eta_u} \int_0^\infty \tfrac{1}{2}\tilde x^T \tilde x + \tfrac{1}{2}\tilde u^T \tilde u + \tilde p^T \left( A\tilde x + B\tilde u - \dot{\tilde x} \right) \mathrm{d}t - \tilde p(0)^T (\tilde x(0) - x_0), \quad \eta_x \in \tilde X_s, \ \eta_u \in \tilde U_s. \tag{20}
\]
The minimization over $\eta_x$ and $\eta_u$ is a convex problem and can be rewritten in terms of convex-conjugate functions, [6, p. 473]. To that end, we first apply integration by parts to the term $\tilde p^T \dot{\tilde x}$, resulting in
\[
\tilde J_s = \sup_{\eta_p} \inf_{\eta_x, \eta_u} \int_0^\infty \tfrac{1}{2}\tilde x^T \tilde x + \tilde x^T \left( A^T \tilde p + \dot{\tilde p} \right) + \tfrac{1}{2}\tilde u^T \tilde u + \tilde u^T B^T \tilde p \,\mathrm{d}t + \tilde p(0)^T x_0, \quad \eta_x \in \tilde X_s, \ \eta_u \in \tilde U_s. \tag{21}
\]
By defining $\tilde v(t, \eta_v) := (I_n \otimes \tau(t))^T \eta_v$ such that
\[
\int_0^\infty \delta\tilde\lambda^T \left( \tilde v + A^T \tilde p + \dot{\tilde p} \right) \mathrm{d}t = 0, \quad \delta\tilde\lambda := (I_n \otimes \tau)^T \delta\eta_\lambda,
\]
for all $\delta\eta_\lambda \in \mathbb{R}^{ns}$, which is equivalent to $-\tilde v = A^T \tilde p + \dot{\tilde p}$ as shown in [5], the minimization over $\tilde x$ can be interpreted as an (extended real-valued) function of $\tilde v$, i.e.
\[
\inf_{\eta_x \in \tilde X_s} \int_0^\infty \tfrac{1}{2}\tilde x^T \tilde x - \tilde x^T \tilde v \,\mathrm{d}t = -\sup_{\eta_x \in \tilde X_s} \int_0^\infty \tilde x^T \tilde v - \tfrac{1}{2}\tilde x^T \tilde x \,\mathrm{d}t \tag{22}
\]
\[
= -\sup_{\pi_s(\tilde x) \in \tilde X_s} \int_0^\infty \tilde x^T \tilde v - \tfrac{1}{2}\tilde x^T \tilde x \,\mathrm{d}t =: -I_{\varphi_s}^*(\tilde v). \tag{23}
\]
Note that $I_{\varphi_s}^*$ maps from $L_n^2$ to the extended real line and is well-defined. In a similar way, we can regard the minimization over $\eta_u$ as an (extended real-valued) function of $\tilde p$,
\[
\inf_{\pi_s(\tilde u) \in \tilde U_s} \int_0^\infty \tfrac{1}{2}\tilde u^T \tilde u + \tilde u^T B^T \tilde p \,\mathrm{d}t =: -I_{\psi_s}^*(-B^T \tilde p), \tag{24}
\]
where in this case $\pi_s$ denotes the projection $L_m^2 \to \mathbb{R}^{ms}$ defined in analogy to (7) (with a slight abuse of notation). Thus, (21) is reformulated as
\[
\tilde J_s = \sup_{\eta_p, \eta_v \in \mathbb{R}^{ns}} -I_{\varphi_s}^*(\tilde v) - I_{\psi_s}^*(-B^T \tilde p) + \tilde p(0)^T x_0, \quad \text{s.t.} \quad \int_0^\infty (I_n \otimes \tau) \left( \dot{\tilde p} + A^T \tilde p + \tilde v \right) \mathrm{d}t = 0. \tag{25}
\]
The functions $\tilde v$ and $\tilde p$ satisfy the adjoint equations exactly, and it holds that $\lim_{t\to\infty} \tilde p(t) = 0$ by Assumption A2).

Let $\eta_v \in \mathbb{R}^{ns}$ and $\eta_p \in \mathbb{R}^{ns}$, with corresponding trajectories $\tilde v_s(t, \eta_v)$ and $\tilde p_s(t, \eta_p)$, be maximizers of (25). The set of maximizers is non-empty due to the fact that we optimize over $\mathbb{R}^{ns}$ and $0 \le \tilde J_s \le J_\infty$ holds. The equality constraint implies that the adjoint equation $\dot{\tilde p}_s(t) + A^T \tilde p_s(t) + \tilde v_s(t) = 0$ is fulfilled for all times $t \in [0,\infty)$, see [5], and thus the adjoint equation is likewise fulfilled by the augmented trajectories $\tilde v_{s+1}(t, i_s(\eta_v))$ and $\tilde p_{s+1}(t, i_s(\eta_p))$. Hence, $\tilde v_{s+1}(t, i_s(\eta_v))$ and $\tilde p_{s+1}(t, i_s(\eta_p))$ are feasible candidates for the optimization (25) over $s+1$ basis functions, and it holds that $\tilde p_{s+1}(0, i_s(\eta_p)) = \tilde p_s(0, \eta_p)$. It remains to establish the relation between $I_{\varphi_s}^*$ and $I_{\varphi_{s+1}}^*$, as well as $I_{\psi_{s+1}}^*$ and $I_{\psi_s}^*$, which is done via the order-reversing property of convex conjugation (illustrated numerically below). To this end, the function $I_{\varphi_s}^*$ is regarded as the conjugate of
\[
I_{\varphi_s}(x) := \begin{cases} \tfrac{1}{2}\|\tilde x^s(\cdot, \pi_s(x))\|^2 & \pi_s(x) \in \tilde X_s, \\ \infty & \text{otherwise}. \end{cases} \tag{26}
\]
We note that Assumption C1) implies $I_{\varphi_{s+1}}(x) \ge I_{\varphi_s}(x)$ for all $x \in L_n^2$. This is due to the fact that any square integrable function $x$ with $\pi_{s+1}(x) \in \tilde X_{s+1}$ automatically fulfills $\pi_s(x) \in \tilde X_s$, since $\bar\pi_s(\tilde X_{s+1})$ is contained in $\tilde X_s$ by Assumption C1), and $|\pi_{s+1}(x)| \ge |\pi_s(x)|$ holds for all $x \in L_n^2$. Convex conjugation reverses ordering, which implies
\[
I_{\varphi_{s+1}}^*(v) \le I_{\varphi_s}^*(v) \tag{27}
\]
for all $v \in L_n^2$, see [1, Prop. 4.4.1, p. 171]. The same reasoning applies to $I_{\psi_s}^*$, which is the convex conjugate of
\[
I_{\psi_s}(u) := \begin{cases} \tfrac{1}{2}\|\tilde u^s(\cdot, \pi_s(u))\|^2 & \pi_s(u) \in \tilde U_s, \\ \infty & \text{otherwise}. \end{cases} \tag{28}
\]
This leads to the conclusion that
\[
-I_{\varphi_s}^*(v) - I_{\psi_s}^*(-B^T p) \le -I_{\varphi_{s+1}}^*(v) - I_{\psi_{s+1}}^*(-B^T p), \tag{29}
\]
for any $v, p \in L_n^2$. Hence, $\tilde v_{s+1}(t, i_s(\eta_v))$ and $\tilde p_{s+1}(t, i_s(\eta_p))$ are feasible candidates for the optimization problem over $s+1$ basis functions with higher corresponding cost, and therefore $\tilde J_{s+1} \ge \tilde J_s$.

Next, we would like to establish that $\lim_{s\to\infty} \tilde J_s = \lim_{s\to\infty} J_s$. In order to do so, we need the following assumptions:

D0) $\limsup_{s\to\infty} \tilde X_s \subset \liminf_{s\to\infty} X_s$.

D1) Linear combinations of the basis functions $\tau_i$, $i = 1, 2, \ldots$, are dense in $C^\infty$ (in the topology of uniform convergence).

(The set of smooth functions with compact support mapping from $[0,\infty)$ to $\mathbb{R}$ is denoted by $C^\infty$.)
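Before turning to Prop. 3.6, here is the promised illustration of the order-reversing property of convex conjugation used in the proof of Prop. 3.5: if $f \ge g$ pointwise, then $f^* \le g^*$. The two scalar functions below are arbitrary convex examples, an assumption of this sketch only.

```python
# Numerical illustration of the order-reversing property of convex
# conjugation: f >= g pointwise implies f* <= g*. Conjugates are
# evaluated on a grid (illustrative scalar examples).
import numpy as np

x = np.linspace(-5, 5, 2001)
g = 0.5 * x**2               # g(x) = x^2 / 2
f = 0.5 * x**2 + np.abs(x)   # f(x) = x^2 / 2 + |x|  >=  g(x)

def conjugate(fx, x, v):
    """f*(v) = sup_x (v*x - f(x)), approximated on the grid x."""
    return np.max(v[:, None] * x[None, :] - fx[None, :], axis=1)

v = np.linspace(-3, 3, 121)
assert np.all(conjugate(f, x, v) <= conjugate(g, x, v) + 1e-9)
```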
Let N be such that J N is finite and let Assumptions B0)-D1) be fulfilled. Then, lim s →∞ ˜ J s =lim s →∞ J s holds. Proof
By Prop. 3.2 and Prop. 3.3 it follows that $J_s$ is monotonically decreasing, $\lim_{s\to\infty} J_s$ is finite, and that the corresponding optimizers converge. From Prop. 3.4 and Prop. 3.5, we can infer that $\tilde J_s$ is monotonically increasing and bounded above by $J_\infty$ for all $s \ge 1$. This implies further that the sequence of minimizers of (6) is bounded in the $L^2$-sense. Due to the fact that $L^2$ (and likewise $L_n^2 \times L_m^2$) is a Hilbert space, there exists a subsequence $s(q)$ such that the corresponding minimizers of (6) converge weakly, i.e. $\tilde x_{s(q)} \rightharpoonup \tilde x$, $\tilde u_{s(q)} \rightharpoonup \tilde u$, [2, p. 163].

We pick any $\delta p := (\delta p_1, \ldots, \delta p_n)^T$, with $\delta p_i \in C^\infty$, $i = 1, 2, \ldots, n$, and $\delta p(0) = 0$, and choose a sequence $\delta\tilde p_k = \sum_{i=1}^k \tau_i \delta\eta_{p_i}$, $\delta\eta_{p_i} \in \mathbb{R}^n$, converging uniformly to $\delta p$. According to Assumption D1) such a sequence exists. Hence, for any $\epsilon > 0$ we can find an integer $N$ large enough such that
\[
|\delta\tilde p_k(0)^T (\tilde x_{s(q)}(0) - x_0)| \le |\tilde x_{s(q)}(0) - x_0|\, \epsilon
\]
holds for all $k \ge N$. We claim that $|\tilde x_{s(q)}(0) - x_0|$ is uniformly bounded. This can be seen by multiplying the equality constraint of (6) from the left by $\eta_x^T$, resulting in
\[
\int_0^\infty \tilde x_{s(q)}^T \left( A \tilde x_{s(q)} + B \tilde u_{s(q)} - \dot{\tilde x}_{s(q)} \right) \mathrm{d}t - \tilde x_{s(q)}(0)^T (\tilde x_{s(q)}(0) - x_0) = 0, \tag{30}
\]
\[
\tfrac{1}{2}|x_0|^2 + \int_0^\infty \tilde x_{s(q)}^T \left( A \tilde x_{s(q)} + B \tilde u_{s(q)} \right) \mathrm{d}t = \tfrac{1}{2}|\tilde x_{s(q)}(0) - x_0|^2, \tag{31}
\]
using $\lim_{t\to\infty} \tilde x_{s(q)}(t) = 0$ (by Assumption A2)) and completing the squares. From the fact that $\tilde J_{s(q)} \le J_\infty$ for all $q$, it follows that $\tilde x_{s(q)}$ and $\tilde u_{s(q)}$ are bounded in $L_n^2$, respectively $L_m^2$. As a consequence, $|\tilde x_{s(q)}(0) - x_0|$ is uniformly bounded, as can be verified with the Cauchy-Schwarz inequality, $\lim_{k\to\infty} \delta\tilde p_k(0)^T (\tilde x_{s(q)}(0) - x_0)$ converges uniformly in $q$, and the limits over $q$ and $k$ can be interchanged,
\[
\lim_{q\to\infty} \lim_{k\to\infty} \delta\tilde p_k(0)^T (\tilde x_{s(q)}(0) - x_0) = \lim_{k\to\infty} \lim_{q\to\infty} \delta\tilde p_k(0)^T (\tilde x_{s(q)}(0) - x_0) = 0. \tag{32}
\]
The equality constraint of (6) therefore reads as
\[
\lim_{k\to\infty} \lim_{q\to\infty} \int_0^\infty \delta\tilde p_k^T \left( A \tilde x_{s(q)} + B \tilde u_{s(q)} - \dot{\tilde x}_{s(q)} \right) \mathrm{d}t = \lim_{q\to\infty} \lim_{k\to\infty} \int_0^\infty \delta\tilde p_k^T \left( A \tilde x_{s(q)} + B \tilde u_{s(q)} - \dot{\tilde x}_{s(q)} \right) \mathrm{d}t = 0, \tag{33}
\]
where both limits agree. We will show that the limit over $k$ commutes with the integration. To that end, we make the following claim: for any function $\tilde v := \sum_{i=1}^N \tau_i \eta_{v_i}$, where the $\eta_{v_i}$ are bounded vectors in $\mathbb{R}^n$ and $N$ is a positive integer, it holds that
\[
\lim_{k\to\infty} \int_0^\infty \delta\tilde p_k^T \tilde v \,\mathrm{d}t = \int_0^\infty \delta p^T \tilde v \,\mathrm{d}t. \tag{34}
\]
We will prove the claim below, but assume for now that it holds. As a consequence of Assumption A2), which implies that $\dot{\tilde x}_{s(q)}$ is a linear combination of the basis functions, the claim results in
\[
\lim_{k\to\infty} \int_0^\infty \delta\tilde p_k^T \dot{\tilde x}_{s(q)} \,\mathrm{d}t = \int_0^\infty \delta p^T \dot{\tilde x}_{s(q)} \,\mathrm{d}t, \tag{35}
\]
for any integer $s(q)$. Using integration by parts (twice) and the fact that $\tilde x_{s(q)}$ converges weakly leads to
\[
\lim_{q\to\infty} \int_0^\infty \delta p^T \dot{\tilde x}_{s(q)} \,\mathrm{d}t = \lim_{q\to\infty} -\int_0^\infty \delta\dot p^T \tilde x_{s(q)} \,\mathrm{d}t \tag{36}
\]
\[
= -\int_0^\infty \delta\dot p^T \tilde x \,\mathrm{d}t \tag{37}
\]
\[
= \int_0^\infty \delta p^T \dot{\tilde x} \,\mathrm{d}t. \tag{38}
\]
Note that $\delta\dot p$ has compact support, is bounded (by continuity), and is therefore square integrable on $[0,\infty)$. The claim implies further that (33) simplifies to
\[
\lim_{q\to\infty} \int_0^\infty \delta p^T \left( A \tilde x_{s(q)} + B \tilde u_{s(q)} - \dot{\tilde x}_{s(q)} \right) \mathrm{d}t \tag{39}
\]
\[
= \int_0^\infty \delta p^T \left( A \tilde x + B \tilde u - \dot{\tilde x} \right) \mathrm{d}t. \tag{40}
\]
The same argument can be repeated for any $\delta p = (\delta p_1, \ldots, \delta p_n)^T$, $\delta p_i \in C^\infty$, $i = 1, 2, \ldots, n$, vanishing at $0$, and therefore, as $s(q) \to \infty$, the equality constraint of (6) reads as
\[
\int_0^\infty \delta p^T \left( A \tilde x + B \tilde u - \dot{\tilde x} \right) \mathrm{d}t = 0, \quad \forall\, \delta p \text{ with } \delta p_i \in C^\infty \text{ and } \delta p(0) = 0,
\]
which implies $\dot{\tilde x}(t) = A \tilde x(t) + B \tilde u(t)$ for all $t \in [0,\infty)$ (almost everywhere). A similar argument based on variations that do not vanish at time $0$ ensures $\lim_{q\to\infty} \tilde x_{s(q)}(0) = x_0$. As a result, the equality constraint of (6) is equivalent to the one of (4) in the limit as $s(q) \to \infty$. Combined with Assumption D0), this implies that $\tilde x$ and $\tilde u$ are feasible candidates for (4), and therefore $\lim_{q\to\infty} \tilde J_{s(q)} \ge \lim_{s\to\infty} J_s$. From Prop. 3.2, Prop. 3.4, and Prop. 3.5 it follows that $\tilde J_s$ is monotonically increasing and bounded above by $J_\infty \le J_s$ for all $s \ge N$, resulting in
\[
\lim_{q\to\infty} \tilde J_{s(q)} = \lim_{s\to\infty} \tilde J_s = \lim_{s\to\infty} J_s. \tag{41}
\]
It remains to prove the claim. Let $\tilde v := \sum_{i=1}^N \tau_i \eta_{v_i}$, where the $\eta_{v_i}$ are bounded vectors in $\mathbb{R}^n$ and $N$ is fixed. The matrix $M$ in Assumption A2) is Hurwitz and therefore it holds that $|\tilde v(t)| \le C e^{-\beta t}$ for all $t \ge T$, for some constants $C > 0$, $\beta > 0$, and time $T > 0$. As a result, it follows from Hölder's inequality, [7, p. 76], that
\[
\left| \int_0^\infty (\delta\tilde p_k - \delta p)^T \tilde v \,\mathrm{d}t \right| \le \sup_{t \in [0,\infty)} |\delta\tilde p_k(t) - \delta p(t)| \int_0^\infty |\tilde v| \,\mathrm{d}t. \tag{42}
\]
The second term can be bounded by invoking Hölder's inequality once more,
\[
\int_0^\infty |\tilde v| \,\mathrm{d}t \le \int_0^T |\tilde v| \,\mathrm{d}t + \int_T^\infty C e^{-\beta t} \,\mathrm{d}t \le \sqrt{T}\, \|\tilde v\| + \frac{C}{\beta} e^{-\beta T}. \tag{43}
\]
Hence, the right-hand side of (42) converges to zero due to the uniform convergence of the $\delta\tilde p_k$ to $\delta p$ as $k \to \infty$. This proves the claim.

Conclusion

We introduced two different approximations to a class of infinite-horizon optimal control problems encountered in MPC. The approximations bound the optimal cost of the underlying problem from above and below, and their optimal costs converge as the number of basis functions tends to infinity. Under favorable circumstances, the resulting input trajectories of the first approximation are found to approximate the optimal input of the underlying infinite dimensional problem arbitrarily accurately, and the corresponding optimal costs converge to the optimal cost of the underlying infinite dimensional problem. The second approximation yields a lower bound on the cost of the underlying optimal control problem, and can therefore be used to quantify the approximation quality of both approximations.
Acknowledgment
The first author would like to thank Jonas Lührmann for a fruitful discussion regarding the proof of Prop. 3.6.
References

[1] J. M. Borwein and J. D. Vanderwerff. Convex Functions. Cambridge University Press, 2010.

[2] J. B. Conway. A Course in Functional Analysis. Springer, second edition, 1990.

[3] R. Courant and D. Hilbert. Methods of Mathematical Physics, volume 1. Interscience Publishers, 1953.

[4] M. Muehlebach and R. D'Andrea. Approximation of continuous-time infinite-horizon optimal control problems arising in model predictive control. Proceedings of the IEEE Conference on Decision and Control, 2016.

[5] M. Muehlebach and R. D'Andrea. Parametrized infinite-horizon model predictive control for linear time-invariant systems with input and state constraints. Proceedings of the American Control Conference, pages 2669-2674, 2016.

[6] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer, 2009.

[7] W. Rudin. Real and Complex Analysis. McGraw-Hill, third edition, 1987.

[8] W. Rudin. Functional Analysis. McGraw-Hill, second edition, 1991.

[9] L. C. Young.