Consistent Approximations to Impulsive Optimal Control Problems
Daniella Porto, Geraldo Nunes Silva, Heloísa Helena Marino Silva
Depto. de Matemática Aplicada, IBILCE, UNESP, 15054-000, São José do Rio Preto, SP. E-mail: [email protected], [email protected], [email protected]
Abstract:
We analyse the theory of consistent approximations introduced by Polak in [18] and apply it to an impulsive optimal control problem. We reparametrize the original system and build consistent approximations for the reparametrized problem. We then prove that if a sequence of solutions of the consistent approximations converges, it converges to a solution of the reparametrized problem, and, finally, we show how a solution of the original problem can be recovered from a solution of the reparametrized one.
Introduction

In this paper, we study impulsive optimal control problems through the theory of consistent approximations. This theory was introduced in [17], [18], and uses approximating problems of finite dimension. From an infinite-dimensional problem (P), one builds a sequence of finite-dimensional problems (P_N) whose epigraphs converge (epi-converge) to that of (P). This convergence ensures that every convergent sequence of global or local minima of (P_N) converges to a global or local minimum of (P), respectively. Optimality functions are used to represent the first-order necessary conditions because, for optimal control problems with state and control constraints, which are complex, it is easier to work with optimality functions than with the classical forms of first-order necessary conditions. In [18] an application of this theory to an optimal control problem is given.

There are many papers on impulsive control systems; we cite, for example, [22], [26]. The article [22] shows that the solution set of an impulsive system given by a differential inclusion is weakly* closed, and the article [26] builds a numerical approximation for the impulsive system, also given by a differential inclusion, using Euler's discretization; it is shown there that the solution sequence obtained by this discretization has a subsequence that graph-converges to a solution of the original system. In this paper we propose an approximation by absolutely continuous measures, using graph-measure convergence.

A great number of papers deal with impulsive optimal control problems in which the control systems involve measures and discuss theoretical optimality conditions for control processes; we cite, for example, [1], [11], [15], [16], [20], [21], [23]. On the other hand, the literature on numerical methods for impulsive optimal control problems is rather scarce, [4].
In [4], a space-time reparametrization of the impulsive problem is adopted, an approximation scheme for the augmented system is constructed, and it is proved that such approximations converge to the value function of the augmented problem; finally, the sequence of discrete optimal controls converges to an optimal control for the continuous problem. Regarding usual (non-impulsive) optimal control problems, there are works that solve them using discrete approximation by Euler [5], [6], or Runge-Kutta [7], [9], [10] methods. The scheme used is: 1) discretize the optimal control problem, and 2) solve the resulting nonlinear optimization problem. The choice of the method of resolution depends on the structure of the optimal control problem and on personal preferences. Among the several proposals for solving the nonlinear optimization problems arising from discretization, we cite the more recent [12] and [13].

This work contributes with an application of Euler's method to impulsive optimal control problems. We show that an impulsive optimal control problem can be reparametrized and discretized by Euler's method so as to generate a subsequence of Euler optimal trajectories that converges, in an appropriate metric, to an optimal trajectory of the reparametrized problem. From that we can recover the optimal solution of the continuous problem. This generalizes results valid for non-impulsive optimal control problems, [5].

In Section 2 we introduce the theory of consistent approximations. We define the impulsive system and obtain the reparametrized system in Section 3. In Section 4 the impulsive optimal control problem is defined. We establish the approximated problems for our reparametrized optimal control problem in Section 5, and, finally, the consistent approximations are given in Section 6.
We also show, in the same section, the convergence of sequences of global or local minima of the approximated problems to a global or local minimum of the reparametrized problem. Section 7 gives bounds on the approximation errors and Section 8 the conclusions.

The Theory of Consistent Approximations

Let B be a normed space. Consider the problem

(P)  min_{x ∈ S_C} f(x),

where f : B → R is continuous and S_C ⊂ B. Let N ⊆ ℕ be an infinite set of indices and {S_N}_{N∈N} be a family of finite-dimensional subspaces of B such that S_N ⊂ S_{N'} if N < N' and ∪ S_N is dense in B. For all N ∈ N, let f_N : S_N → R be a continuous function that approximates f(·) over S_N, and let S_{C,N} ⊂ S_N be an approximation of S_C. Consider the family of approximated problems

(P_N)  min_{x ∈ S_{C,N}} f_N(x),  N ∈ N.

Define the epigraphs associated with (P) and (P_N), respectively, as

E := {(x, r) : x ∈ S_C, f(x) ≤ r}  and  E_N := {(x, r) : x ∈ S_{C,N}, f_N(x) ≤ r}.

Note that the problems above can be rewritten as

(P)  min_{(x,r) ∈ E} r,   (P_N)  min_{(x,r) ∈ E_N} r,  N ∈ N.

Note that the only difference between (P_N) and (P) lies in their epigraphs, so if E_N converges to E in the sense of Kuratowski we have convergence between the problems. By Theorem 3.3.2 of [18], the epigraph convergence just described is equivalent to items a) and b) of the next definition of consistent approximations.

Definition 1.
Let the functions f(·) and f_N(·) and the sets B, S_C, S_N and S_{C,N} be defined as above.

i) We say (P_N) epi-converges to (P) (P_N →^{Epi} P) if:
a) for all x ∈ S_C there exists a sequence {x_N}_{N∈N}, with x_N ∈ S_{C,N}, such that x_N → x as N → ∞ and lim sup_{N→∞} f_N(x_N) ≤ f(x);
b) for every infinite sequence {x_N}_{N∈K}, K ⊂ N, such that x_N ∈ S_{C,N} for all N ∈ K and x_N → x as N → ∞, we have x ∈ S_C and lim inf_{N∈K} f_N(x_N) ≥ f(x).

ii) We say the upper semicontinuous functions γ_N : S_{C,N} → R are optimality functions for the problems (P_N) if γ_N(η) ≤ 0 for all η ∈ S_{C,N} and, whenever η̂_N is a local minimizer of (P_N), γ_N(η̂_N) = 0. We can define the optimality function γ : S_C → R for (P) in the same way.

iii) The pairs (P_N, γ_N) of the sequence {(P_N, γ_N)}_{N∈N} are consistent approximations to the pair (P, γ) if P_N epi-converges to P and, for every sequence {x_N}_{N∈N} with x_N ∈ S_{C,N} and x_N → x ∈ S_C, we have lim sup_{N→∞} γ_N(x_N) ≤ γ(x).

The key to this epigraph convergence is Theorem 3.3.3 of [18], where it is shown that if (P_N) epi-converges to (P) and {x_N}_{N∈N} is a sequence of local (global) solutions of (P_N) such that x_N converges to x, then x is a local (global) minimizer of (P) and f_N(x_N) converges to f(x) as N → ∞, N ∈ N. It is necessary to define the optimality functions since epi-convergence alone cannot guarantee that a sequence of stationary points of (P_N) converges to a stationary point of (P), as can be observed in an example given in [18], page 397.
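As a toy illustration of this framework (a hypothetical one-dimensional example, not taken from the paper), take f(x) = (x − 0.3)² on S_C = [0, 1] and let S_{C,N} be the uniform grid of step 1/N, with f_N the restriction of f to the grid:

```python
# Toy illustration of consistent approximation by finite-dimensional problems.
# (P):   minimize f(x) = (x - 0.3)^2 over S_C = [0, 1]
# (P_N): minimize the same f over the grid S_{C,N} = {k/N : k = 0, ..., N}.
# The epigraphs of (P_N) converge to that of (P), so minimizers of (P_N)
# converge to the minimizer x* = 0.3 of (P), as Theorem 3.3.3 of [18] predicts.

def f(x):
    return (x - 0.3) ** 2

def solve_PN(N):
    """Global minimizer of the approximated problem (P_N) on the grid."""
    grid = [k / N for k in range(N + 1)]
    return min(grid, key=f)

for N in (7, 50, 1000):
    xN = solve_PN(N)
    print(N, xN, f(xN))   # x_N -> 0.3 and f(x_N) -> 0 as N grows
```

The same pattern, with S_{C,N} the discretized controls and f_N the cost along the Euler polygonal arc, is what the later sections build for the impulsive problem.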
The Impulsive System

Before defining the impulsive optimal control problem, we define the impulsive system related to it and show some results given in [26]. Consider the impulsive system

dx = f(x, u) dt + g(x) dΩ,  t ∈ [0, T],   x(0) = ξ ∈ C,   (1)

where f : R^n × R^m → R^n is linear in u; g : R^n → M^{n×q}, where M^{n×q} is the space of n × q real matrices; C ⊂ R^n is closed and convex; the function u : [0, T] → R^m is Borel measurable and essentially bounded; and Ω := (µ, |ν|, {ψ_{t_i}}) is the impulsive control, whose first component µ is a vectorial Borel measure with range in a closed convex cone K ⊂ R^q_+. The second component is such that there exist µ_N : [0, T] → K with (µ_N, |µ_N|) →* (µ, ν); as K ⊂ R^q_+, we must have ν = |µ|. The functions ψ_{t_i} : [0, 1] → K are associated with the atoms of the measure, that is, {ψ_{t_i}}_{i∈I}, where I is the set of atomic indices of the measure µ, and we define Θ := {t_i ∈ [0, T] : µ({t_i}) ≠ 0}, where µ({t_i}) is the vectorial value of the measure in K. The functions ψ_{t_i} are measurable, essentially bounded and satisfy

i) Σ_{j=1}^q |ψ^j_{t_i}(σ)| = |µ|({t_i}) a.e. σ ∈ [0, 1];
ii) ∫_0^1 ψ^j_{t_i}(s) ds = µ^j({t_i}), j = 1, ..., q,

for all t_i ∈ Θ. The functions ψ_{t_i}(·) give us information about the measure µ during the atomic time t_i ∈ Θ.

We obtain a reparametrized problem, which is then approximated by consistent approximations. This can be done without loss of information thanks to Theorem 1, stated in this subsection and proved in [25]: the reparametrized problem and the original problem have equivalent solutions, up to reparametrization. For more about this see [22], [27], [25], [14]. Firstly, we study the impulsive system (1). For this, let Ω = (µ, ν, {ψ_{t_i}}_{t_i∈Θ}) be an impulsive control and ξ ∈ R^n an arbitrary vector.
Denote by X_{t_i}(·; ξ) the solution of the system

Ẋ_{t_i}(s) = g(X_{t_i}(s)) ψ_{t_i}(s),  s ∈ [0, 1],   X_{t_i}(0) = ξ.

Consider

x_ϑ := (x(·), {X_{t_i}(·)}_{t_i∈Θ}),   (2)

where ϑ := (u, Ω), x(·) : [0, T] → R^n is a function of bounded variation with discontinuity points in the set Θ, and {X_{t_i}(·)}_{t_i∈Θ} is the collection of Lipschitz functions defined above. The definition of a solution of the system (1) is given in the sequel.

Definition 2.
We say that x_ϑ is a solution of (1) if

x(t) = ξ + ∫_0^t f(x, u) dσ + ∫_{[0,t]} g(x) dµ_c + Σ_{t_i ≤ t} [X_{t_i}(1) − x(t_i−)]  for all t ∈ [0, T],

where µ_c is the continuous component of µ and x(t_i−) is the left-hand limit of x(·) at t_i.

Now we perform a time reparametrization to obtain a system whose solutions are equivalent to those of the original system (1), up to reparametrization. For this, define

π(t) := (t + |µ|([0, t])) / (T + ‖µ‖),  t ∈ ]0, T],   π(0−) := 0.

The last equality is a convention, because 0 can be an atom of µ. Then there exists θ : [0, 1] → [0, T] such that

• θ(s) is non-decreasing;
• θ(s) = t_i for all t_i ∈ Θ and all s ∈ I_i, where I_i := [π(t_i−), π(t_i)].

We denote by F(t; µ) := µ([0, t]) for t ∈ ]0, T], with F(0; µ) := 0, the distribution function of the measure µ. Let φ : [0, 1] → R^q be given by

φ(s) := F(θ(s); µ)  if s ∈ [0, 1] \ (∪_{i∈I} I_i),
φ(s) := F(t_i−; µ) + ∫_{[π(t_i−), s]} (1/(π(t_i) − π(t_i−))) ψ_{t_i}(α_{t_i}(σ)) dσ  if s ∈ I_i,

where α_{t_i} : [π(t_i−), π(t_i)] → [0, 1] is given by α_{t_i}(σ) = (σ − π(t_i−))/(π(t_i) − π(t_i−)). According to [14], θ(·) and φ(·) are Lipschitz, with Lipschitz constants b and r, respectively, and, furthermore, θ(·) is absolutely continuous. We say (θ, φ) is the graph completion of the measure µ. With all these tools at hand, we define a reparametrized solution of the system (1).

Definition 3. Let

y(s) := x(θ(s))  if s ∈ [0, 1] \ (∪_{i∈I} I_i),   y(s) := X_{t_i}(α_{t_i}(s))  if s ∈ I_i for some i ∈ I.   (3)

Then y_ϑ := y is a reparametrized solution of (1) if y(·) is Lipschitz on [0, 1] and satisfies

ẏ(s) = f(y(s), u(θ(s))) θ̇(s) + g(y(s)) φ̇(s)  a.e. s ∈ [0, 1],   y(0) = ξ.   (4)

The next theorem is proved in [25].

Theorem 1.
Suppose that the impulsive control Ω is given and x_ϑ is as defined in (2). Then y_ϑ is a reparametrized solution of (1) if and only if x_ϑ is a solution of (1).

The Impulsive Optimal Control Problem
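Before introducing the control constraints, the graph completion of the previous section can be made concrete. The sketch below computes π, θ and φ for a purely atomic scalar measure µ = a·δ_{t1} (hypothetical numbers; ψ_{t1} is taken constant, so φ ramps linearly through the atom):

```python
# Graph completion (theta, phi) of the scalar measure mu = a*delta_{t1} on [0, T].
# Illustrative data only; psi_{t1} constant makes phi linear on the atomic interval.
T, t1, a = 2.0, 1.0, 0.5

def pi(t):
    """pi(t) = (t + |mu|([0, t])) / (T + ||mu||), with pi(0-) = 0."""
    return (t + (a if t >= t1 else 0.0)) / (T + a)

s_lo = t1 / (T + a)   # pi(t1-): left end of the atomic interval I_1
s_hi = pi(t1)         # pi(t1):  right end of I_1

def theta(s):
    """Non-decreasing time change: constant (= t1) on I_1 = [s_lo, s_hi]."""
    if s <= s_lo:
        return s * (T + a)
    if s <= s_hi:
        return t1
    return s * (T + a) - a

def phi(s):
    """Completed distribution function: runs linearly through the atom."""
    if s <= s_lo:
        return 0.0
    if s <= s_hi:
        return a * (s - s_lo) / (s_hi - s_lo)
    return a
```

Note that theta(1) recovers T and phi(1) the total mass a; on I_1 the original time stands still while the state may evolve along the dynamics driven by ψ_{t1}.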
We need to describe the constraints on the control u, following [18]. Denote by L^m_2[0, T] the set of all square-integrable functions from [0, T] into R^m. Let β_max ∈ (0, +∞) be such that every control belongs to the ball B(0, β_max) := {u ∈ R^m : ‖u‖_∞ ≤ β_max}. Define

Û := {u ∈ L^m_{∞,2}[0, T] : ‖u‖_∞ ≤ ωβ_max},

where ω ∈ (0, 1) and L^m_{∞,2}[0, T] is the set of all essentially bounded functions from [0, T] into R^m, endowed with the L_2 norm. Now we define the set of constraints on the control u by

U := {u ∈ Û : u(t) ∈ ¯U ⊂ B(0, ωβ_max) a.e. t ∈ [0, T]},

where ¯U ⊂ R^m is a convex, compact subset of the ball B(0, ωβ_max). Consider the impulsive optimal control problem

(P)  min f_0(x(0), x(T))
     subject to  dx = f(x, u) dt + g(x) dΩ  a.e. t ∈ [0, T],
                 x(0) ∈ C,  u ∈ U,
                 gc sup_{t∈[0,T]} |x(t)| ≤ L,

where f_0 : R^n × R^n → R is continuous, L > 0 and gc sup_{t∈[0,T]} |x(t)| := sup_{s∈[0,1]} |y(s)|. We need the following assumption.
Assumption 1. a) The functions f(·,·) and g(·) are C¹ and there exist constants K', K'' ∈ [1, ∞[ such that, for all x, x̂ ∈ R^n and u, û ∈ B(0, β_max), we have

|f(x, u) − f(x̂, û)| ≤ K'[|x − x̂| + |u − û|],   ‖g(x) − g(x̂)‖ ≤ K''|x − x̂|,

and f(·,·) and g(·) have linear growth, that is, there exists a constant K < ∞ such that

|f(x, u)| ≤ K(1 + |x|)  and  ‖g(x)‖ ≤ K(1 + |x|).

b) The function f_0(·,·) is Lipschitz, has Lipschitz first derivative, and is C¹ over bounded sets.

c) The impulsive system given by

dx = f(x, u) dt + g(x) dΩ  a.e. t ∈ [0, T],   x(0) = ξ ∈ C,   u ∈ U,   gc sup_{t∈[0,T]} |x(t)| ≤ L,   (5)

where all the variables are as above, is controllable.

Let (ξ_0, ξ_1) ∈ C × R^n be arbitrarily chosen. We say an impulsive system like (5) is controllable if there exist a control u ∈ U and an impulsive control Ω such that the trajectory x_ϑ(·) related to these controls satisfies x(0) = ξ_0, x(T) = ξ_1 and gc sup_{t∈[0,T]} |x(t)| ≤ L. Suppose (5) is controllable; then, arbitrarily choosing (ξ_0, ξ_1) ∈ C × R^n, there exists a trajectory x_ϑ(·) of (5) satisfying x(0) = ξ_0 and x(T) = ξ_1. We know there exists a solution y(·) of the reparametrized system (4) such that y(0) = ξ_0 and y(1) = x_ϑ(T), that is, the reparametrized system is controllable.

We want to obtain the reparametrized impulsive optimal control problem. For this, it is necessary to define the constraints on the control u ∘ θ. We define the set of constraints on the control u ∘ θ by

U_C := {û ∈ Û : û(s) ∈ ¯U ⊂ B(0, ωβ_max) a.e. s ∈ [0, 1]},

where β_max, ¯U and ω are as before and, now, Û := {û ∈ L^m_{∞,2}[0, 1] : ‖û‖_∞ ≤ ωβ_max}. Define

˜S_C := C × U_C × P,

where U_C is as defined above and P is the set of all Ω := (µ, |ν|, {ψ_{t_i}}) that satisfy the assumptions of the system (1). We also define

S_C := {η ∈ ˜S_C : sup_{s∈[0,1]} |y^η(s)| ≤ L}.
We denote by y^η(·) the solution of the system (4) for each η ∈ ˜S_C. We obtain the following reparametrized problem:

(P_rep)  min_{η ∈ S_C} f_0(y^η(0), y^η(1)).

Note that (P) and (P_rep) have the same solutions, up to reparametrization, because the objective function is the same. So we will build the consistent approximations for (P_rep). The theorem below guarantees that the system (4) has a unique solution.

Theorem 2.
Suppose η = (ξ_0, u, Ω) is given, where ξ_0 ∈ C, u ∈ L^m_{∞,2}[0, 1] and Ω = (µ, |ν|, {ψ_{t_i}}) satisfies the assumptions of the system (1). Then the system defined in (4) has a unique solution.

Proof. Suppose that η is given and that there exist two solutions, denoted by y¹_η and y²_η. We have

|y¹_η(s) − y²_η(s)| ≤ ∫_0^s (K'|y¹_η(σ) − y²_η(σ)||θ̇(σ)| + K''|y¹_η(σ) − y²_η(σ)||φ̇(σ)|) dσ ≤ ∫_0^s |y¹_η(σ) − y²_η(σ)|(K'b + K''r) dσ,

where in the first inequality we substituted the expressions of y¹_η(·) and y²_η(·) given by the system (4) and then used Assumption 1. By the Bellman-Gronwall Theorem, |y¹_η(s) − y²_η(s)| ≤ 0, hence y¹_η ≡ y²_η.

We need a metric over the space S_C. Consider Ω¹ = (µ¹, |µ¹|, {ψ¹_{t_i}}), Ω² = (µ², |µ²|, {ψ²_{t_i}}) ∈ P. We first define a metric on the measure space P by

d_2(Ω¹, Ω²) = d_{2,1}(Ω¹, Ω²) + d_{2,2}(Ω¹, Ω²),

where d_{2,1}(·,·) is the metric given in [11],

d_{2,1}(Ω¹, Ω²) = |(µ¹, |µ¹|)([0, T]) − (µ², |µ²|)([0, T])| + ∫_0^T |F(t; (µ¹, |µ¹|)) − F(t; (µ², |µ²|))| dt + max_{s∈[0,1]} |φ¹(s) − φ²(s)|,

and d_{2,2}(·,·) is related to the graph-convergence given in [26],

d_{2,2}(Ω¹, Ω²) = ∫_0^1 |θ̇¹(s) − θ̇²(s)| ds + ∫_0^1 |φ̇¹(s) − φ̇²(s)| ds,

with (θ¹, φ¹) and (θ², φ²) the graph completions of µ¹ and µ², respectively. According to [11], the space P with the metric d_2 is a metric space and, furthermore, is the completion of the absolutely continuous measures on [0, T] in the metric d_{2,1}.

Note that S_C ⊂ R^n × L^m_{∞,2}[0, 1] × P =: B. Define the metric d̄ over B as d̄ = d_0 + d_1 + d_2, where d_2 is given above and

d_0(ξ¹_0, ξ²_0) = |ξ¹_0 − ξ²_0|_{R^n}   and   d_1(u¹, u²) = ∫_0^1 |u¹(s) − u²(s)|_{R^m} ds.

We want to obtain consistent approximations to the problem (P_rep).
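The graph-convergence part of the metric on P, the integrals of |θ̇¹ − θ̇²| and |φ̇¹ − φ̇²| over [0, 1], can be evaluated numerically. A minimal sketch, with forward differences standing in for the a.e. derivatives and the graph completions assumed given as callables on [0, 1] (illustrative choices only):

```python
# Numerical sketch of the graph-convergence distance between two measures on P:
#   int_0^1 |theta1' - theta2'| ds + int_0^1 |phi1' - phi2'| ds.
# Forward differences approximate the a.e. derivatives; purely illustrative.

def graph_distance(theta1, phi1, theta2, phi2, N=1000):
    h = 1.0 / N
    def deriv(f, k):
        return (f((k + 1) * h) - f(k * h)) / h
    return h * sum(abs(deriv(theta1, k) - deriv(theta2, k))
                   + abs(deriv(phi1, k) - deriv(phi2, k))
                   for k in range(N))

# Two absolutely continuous scalar measures with constant densities 1 and 2 on
# [0, 1] (T = 1): both have time change theta(s) = s, while phi_i(s) = c_i * s,
# so the distance reduces to |c1 - c2| = 1.
d = graph_distance(lambda s: s, lambda s: 1.0 * s,
                   lambda s: s, lambda s: 2.0 * s)
print(d)
```

For measures with atoms, the same routine applies once (θ, φ) are the completed graphs, which is exactly why the metric is stated on graph completions rather than on distribution functions alone.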
To build the approximating problems, define the sets N := {k}_{k=1}^∞ and S_N := C_N × L^m_N × P_N for all N ∈ N, where C_N := R^n for all N ∈ N,

L^m_N := {u_N ∈ L^m_{∞,2}[0, 1] : u_N(s) = Σ_{k=0}^{N−1} u_k τ_{N,k}(s)},

with u_k ∈ R^m and

τ_{N,k}(s) := 1 for all s ∈ [k/N, (k+1)/N[ if k ≤ N − 2, τ_{N,k}(s) := 1 for all s ∈ [k/N, (k+1)/N] if k = N − 1, and τ_{N,k}(s) := 0 otherwise,

and P_N is given by

P_N := {(µ_N, |µ_N|, 0) : µ_N([0, t]) := F_N(t)},

where |µ_N| is the variation of the measure µ_N, F_N(0) = 0 and, over ]0, T],

F_N(t) := Σ_{k=0}^{N−1} ¯τ_{N,k}(t),   ¯τ_{N,k}(t) := b_k + ((t − ¯t_k)/(¯t_{k+1} − ¯t_k))(b_{k+1} − b_k) for all t ∈ [¯t_k, ¯t_{k+1}],  k = 0, ..., N − 1,

with 0 = ¯t_0 < ... < ¯t_N = T and b_k ∈ K for all k = 0, ..., N − 1. Note that µ_N is an absolutely continuous measure from [0, T] to K (K is convex) for all N ∈ N. Furthermore, the graph completion of µ_N is defined by θ_N : [0, 1] → [0, T] and φ_N : [0, 1] → K as

θ_N(s) := ¯t_k + ((s − s_k)/h)(¯t_{k+1} − ¯t_k) whenever s ∈ [s_k, s_{k+1}],
φ_N(s) := (F_N ∘ θ_N)(s),

where h = 1/N, s_k = kh and k = 0, ..., N − 1, and it satisfies:

i) there exists a constant b > 0 such that θ_N(·) is Lipschitz of rank b for all N ∈ N;
ii) there exists a constant r > 0 such that lim_{N→∞} ‖φ̇_N(·)‖_∞ ≤ r.

Note that ∪_{N∈N} C_N is dense in R^n and ∪_{N∈N} L^m_N is dense in L^m_{∞,2}[0, 1]. Define ˜S_{C,N} := ˜S_C ∩ S_N. We now establish some results.

Lemma 1. ∪ P_N is dense in P (endowed with the metric d_2).

Proof. For the first inclusion, let Ω belong to the closure of ∪ P_N; then there exists a sequence {Ω_N}_{N∈N} ⊂ ∪ P_N such that Ω_N →^{d_2} Ω, that is, d_2(Ω_N, Ω) → 0 as N → ∞. As P is the completion of the absolutely continuous measures in the metric d_2, Ω ∈ P. Hence the closure of ∪ P_N is contained in P.

Now let Ω := (µ, |µ|, {ψ_{t_i}}) ∈ P. We need to show there exists a sequence in ∪ P_N converging to Ω in the metric d_2. Let (θ, φ) be the graph completion of µ. By [26], there exist a partition of [0, T], 0 =: ¯t_0 < ¯t_1 < ... < ¯t_N := T, and functions θ_N : [0, 1] → [0, T], F_N : [0, T] → R^q, φ_N : [0, 1] → K, given by

θ_N(s) = ¯t_k + ((s − s_k)/h)(¯t_{k+1} − ¯t_k) whenever s ∈ [s_k, s_{k+1}],

where h = 1/N and s_k = kh,

F_N(t; µ_N) = φ(s_k) + ((t − ¯t_k)/(¯t_{k+1} − ¯t_k))(φ(s_{k+1}) − φ(s_k)) whenever t ∈ [¯t_k, ¯t_{k+1}],
φ_N(s) = (F_N ∘ θ_N)(s),

and a measure given by dµ_N = Ḟ_N(t; µ_N) dt. Note that θ_N(·) and φ_N(·) are Lipschitz of rank b and r, respectively, the same ranks as the functions θ(·) and φ(·). These functions satisfy the graph-convergence

∫_0^1 |θ̇_N(s) − θ̇(s)| ds → 0,   ∫_0^1 |φ̇_N(s) − φ̇(s)| ds → 0.

We have the inequality

0 ≤ |φ_N(s) − φ(s)| ≤ |∫_0^s (φ̇_N(τ) − φ̇(τ)) dτ| + |φ_N(0) − φ(0)| ≤ ∫_0^1 |φ̇_N(τ) − φ̇(τ)| dτ.

Passing to the limit in the last inequality and using the graph-convergence and the fact that φ_N(0) = φ(0), we get max_{s∈[0,1]} |φ_N(s) − φ(s)| → 0. By [26], graph-convergence is stronger than weak* convergence, so µ_N →* µ.
By the Banach-Steinhaus theorem, [19], as µ_N →* µ, there exists c > 0 such that ‖µ_N‖ ≤ c for all N, where ‖µ_N‖ is the total variation of the measure µ_N. By Helly's Theorem, [8], it is possible to construct a measure ν on [0, T] and select from {|µ_N|} a subsequence such that |µ_N| →* ν. As (µ_N, |µ_N|) →* (µ, ν) and our measures are positive, we conclude that ν = |µ|. By Lemma 7.1, page 134, [2], F_N(t; µ_N) → F(t; µ) for all t ∈ Cont |µ|, where Cont |µ| denotes the set of continuity points of the scalar-valued measure |µ|. As the set of discontinuity points of |µ| has null Lebesgue measure, we conclude that F_N(t; µ_N) → F(t; µ) a.e. t ∈ [0, T].

Note that for t ∈ [0, T] there exists k ∈ {0, ..., N − 1} such that t ∈ [¯t_k, ¯t_{k+1}], and

|F_N(t; µ_N)| = |φ(s_k) + ((t − ¯t_k)/(¯t_{k+1} − ¯t_k))(φ(s_{k+1}) − φ(s_k))| ≤ |φ(s_k)| + |(t − ¯t_k)/(¯t_{k+1} − ¯t_k)| |φ(s_{k+1}) − φ(s_k)| ≤ |φ(s_k)| + |φ(s_{k+1}) − φ(s_k)|.

As φ(·) is continuous and defined over the compact set [0, 1], there exists M > 0 such that |φ(s)| ≤ M for all s ∈ [0, 1], so |F_N(t; µ_N)| ≤ 3M for all N ∈ N and t ∈ [0, T]. As this constant bound is integrable and F_N(·; µ_N) is absolutely continuous, F_N(t; µ_N) is integrable for each N ∈ N. By the Dominated Convergence Theorem,

∫_0^T |F_N(t; µ_N) − F(t; µ)| dt → 0,  N → ∞.

Note that ν_N := |µ_N| is a measure from [0, T] to R_+ and |ν_N| = ν_N. Then (ν_N, |ν_N|) →* (|µ|, |µ|). Again by Lemma 7.1, page 134, [2], F_N(t; ν_N) := ν_N([0, t]) → F(t; |µ|) := |µ|([0, t]) for all t ∈ Cont |µ|. As above, F_N(t; ν_N) → F(t; |µ|) a.e. t ∈ [0, T]. As ν_N is increasing we must have ν_N([0, t]) ≤ ‖ν_N‖ = ‖µ_N‖ ≤ c, that is, |F_N(t; ν_N)| ≤ c for all N ∈ N and t ∈ [0, T]. The same argument as above gives

∫_0^T |F_N(t; ν_N) − F(t; |µ|)| dt → 0,  N → ∞.

Then

0 ≤ ∫_0^T |F_N(t; (µ_N, ν_N)) − F(t; (µ, |µ|))| dt ≤ ∫_0^T |F_N(t; µ_N) − F(t; µ)| dt + ∫_0^T |F_N(t; ν_N) − F(t; |µ|)| dt → 0,  N → ∞.

By [24], [0, T] is a continuity set for any positive measure defined on [0, T], so

∫_{[0,T]} dµ_N → ∫_{[0,T]} dµ  ⟹  |µ_N([0, T]) − µ([0, T])| → 0.

The same holds for ν_N, and we get |(µ_N, ν_N)([0, T]) − (µ, |µ|)([0, T])| → 0.

By the density of the union of each factor set, it follows that ∪ S_N is dense in B.

Lemma 2. ˜S_{C,N} → ˜S_C as N → ∞, where the convergence is in the sense of Kuratowski, [24].

Proof. Let {η_N = (ξ_N, u_N, Ω_N)}_{N∈N} be a sequence in ˜S_{C,N} such that η_N →^{d̄} η = (ξ_0, u, Ω). As C is closed, ξ_0 ∈ C. That u ∈ U_C is given by Proposition 4.3.1, [18]. Now, Ω_N = (µ_N, |µ_N|, 0) →^{d_2} Ω = (µ, |µ|, {ψ_{t_i}}). As mentioned, P is the completion of the set of absolutely continuous vector-valued measures on [0, T] in the metric d_{2,1}; since the other part of the metric d_2 (namely d_{2,2}) only completes the first, we conclude that Ω ∈ P. Therefore lim sup ˜S_{C,N} ⊂ ˜S_C.

Now take η = (ξ_0, u, Ω) ∈ ˜S_C. We must find a sequence in ˜S_{C,N} that converges to η in the metric d̄. By Proposition 4.3.1, [18], and by Lemma 1, such a sequence exists. Therefore ˜S_C ⊂ lim inf ˜S_{C,N}.

Given η = (ξ_N, u_N, Ω_N) ∈ S_N, we can use Euler's discretization of the continuous dynamics (4) to obtain the discrete dynamics below. Take N ∈ N, let h = 1/N be the step size and s_k = kh, k = 0, ..., N.
We have

y_N^η(s_{k+1}) − y_N^η(s_k) = f(y_N^η(s_k), u_N(s_k))(θ_N(s_{k+1}) − θ_N(s_k)) + g(y_N^η(s_k))(φ_N(s_{k+1}) − φ_N(s_k)),  k = 0, ..., N − 1,   y_N^η(0) = ξ_N,   (6)

where θ_N : [0, 1] → [0, T] and φ_N : [0, 1] → K are as defined in P_N. To the nodal values we associate the polygonal arc

y_N^η(s) := Σ_{k=0}^{N−1} [y_N^η(s_k) + ((s − s_k)/h)(y_N^η(s_{k+1}) − y_N^η(s_k))] τ_{N,k}(s),   (7)

where {y_N^η(s_k)}_{k=0}^N is the solution of the discrete system (6).

Lemma 3. Let η = (ξ_N, u_N, Ω_N) ∈ S_N and let {y_N^η(s_k)}_{k=0}^N be the solution of the discretized equation corresponding to this η. Then the following inequality holds:

|y_N^η(s_k)| + 1 ≤ e^β (1 + |ξ_N|),

where β := K(b + r), K is the constant of the linear growth of the functions f(·,·) and g(·), and b and r are the Lipschitz constants of the functions θ_N(·) and φ_N(·), respectively.

Proof. This result follows from the Corollary of the Discrete Gronwall Lemma, [27].

Define

S_{C,N} := {η ∈ ˜S_{C,N} : |y_N^η(s)| ≤ L + 1/N for all s ∈ [0, 1]},

where y_N^η(·) is given by (7). We need to show that S_{C,N} converges to S_C; after that, we can define the approximated problems.

Theorem 3. S_{C,N} → S_C as N → ∞, where the convergence is in the sense of Kuratowski.

Proof. Let {η_N = (ξ_N, u_N, Ω_N)}_{N∈N} be a sequence in S_{C,N} such that η_N →^{d̄} η = (ξ_0, u, Ω). As S_{C,N} ⊂ ˜S_{C,N}, by Lemma 2, η ∈ ˜S_C. We know that |y_N^{η_N}(s)| ≤ L + 1/N for all s ∈ [0, 1]. By Theorem 4, there exists K ⊂ N such that y_N^{η_N}(·) converges uniformly to y^η(·) along K; then, given ε = 1/N_0, there exists N_0 ∈ N such that for all N ≥ N_0, N ∈ K, we have

|y_N^{η_N}(s) − y^η(s)| ≤ 1/N_0  ⟹  |y^η(s)| ≤ |y_N^{η_N}(s)| + 1/N_0 → L.

Hence lim sup S_{C,N} ⊂ S_C.

Now let η = (ξ_0, u, Ω) ∈ S_C. By Lemma 2, there exists a sequence {η_N = (ξ_N, u_N, Ω_N)}_{N∈N} in ˜S_{C,N} such that η_N →^{d̄} η. Again, by Theorem 4 there exists K ⊂ N such that y_N^{η_N}(·) converges uniformly to y^η(·) along K; then, given ε = 1/N_0, there exists N_0 ∈ N such that for all N ≥ N_0, N ∈ K, we have

|y_N^{η_N}(s) − y^η(s)| ≤ 1/N_0  ⟹  |y_N^{η_N}(s)| ≤ |y^η(s)| + 1/N_0 ≤ L + 1/N_0.

Hence S_C ⊂ lim inf S_{C,N}.

Then we get the approximated problems

(P^{C,N}_rep)  min_{η ∈ S_{C,N}} f_{0,N}(y_N^η(0), y_N^η(1)),

where f_{0,N}(y_N^η(0), y_N^η(1)) := f_0(ξ_N, y_N^η(1)).

Consistent Approximations
In this section, we show that the problems (P^{C,N}_rep), together with suitable optimality functions γ_{C,N}(·), are consistent approximations to the pair (P_rep, γ), where γ(·) is an optimality function for the problem (P_rep). We begin with a theorem that gives the convergence of the polygonal arcs produced by Euler's discretization to the solution of the reparametrized problem. Note that η_N ∈ ˜S_{C,N} is arbitrary and converges to η ∈ ˜S_C in the metric d̄; that is, convergence in the metric d̄ is enough to guarantee convergence of the corresponding solutions.

Theorem 4.
Suppose that Assumption 1 holds, N ∈ N and η_N →^{d̄} η, where η_N ∈ ˜S_{C,N} and η ∈ ˜S_C. Then there exists K ⊂ N such that y_N^{η_N}(·) converges uniformly to y^η(·) as N → ∞, N ∈ K, where y_N^{η_N}(·) is defined in (7) and y^η(·) is the solution of (4).

Proof. Note that over the interval [s_k, s_{k+1}] we have

|ẏ_N^{η_N}(s)| ≤ K(b + r)|y_N^{η_N}(s_k)| + K(b + r) =: β|y_N^{η_N}(s_k)| + β,   (8)

and by Lemma 3,

|y_N^{η_N}(s_k)| ≤ e^β(|ξ_N| + 1) − 1,  k ∈ {0, ..., N − 1}.

As ξ_N is convergent, there exists M_0 > 0 such that |ξ_N| ≤ M_0 for all N ∈ N, so |y_N^{η_N}(s_k)| ≤ e^β(M_0 + 1). By equation (8), ẏ_N^{η_N}(s) is uniformly bounded, and by the same argument y_N^{η_N}(s) is uniformly bounded too. By the Arzelà-Ascoli Theorem, there exist K ⊂ N and y : [0, 1] → R^n such that y_N^{η_N}(·) converges uniformly to y(·), N ∈ K, N → ∞.

Now we need to show that y(·) satisfies the system (4). For this, let y^η be given by

ẏ^η(s) = f(y(s), u(θ(s))) θ̇(s) + g(y(s)) φ̇(s),   y^η(0) = ξ_0.

For s ∈ [s_k, s_{k+1}],

|y_N^{η_N}(s) − y^η(s)| ≤ |∫_0^s (ẏ_N^{η_N}(σ) − ẏ^η(σ)) dσ| + |ξ_N − ξ_0|
≤ Σ_{j=0}^{k−1} ∫_{s_j}^{s_{j+1}} |ẏ_N^{η_N}(σ) − ẏ^η(σ)| dσ + ∫_{s_k}^{s} |ẏ_N^{η_N}(σ) − ẏ^η(σ)| dσ + |ξ_N − ξ_0|
≤ Σ_{j=0}^{k−1} [ ∫_{s_j}^{s_{j+1}} |f(y_N^{η_N}(s_j), u_N(s_j)) − f(y(σ), u(σ))||θ̇_N(σ)| dσ
+ ∫_{s_j}^{s_{j+1}} |f(y(σ), u(σ))||θ̇_N(σ) − θ̇(σ)| dσ
+ ∫_{s_j}^{s_{j+1}} |g(y(σ))||φ̇_N(σ) − φ̇(σ)| dσ
+ ∫_{s_j}^{s_{j+1}} |g(y_N^{η_N}(s_j)) − g(y(σ))||φ̇_N(σ)| dσ ]
+ ∫_{s_k}^{s} |ẏ_N^{η_N}(σ) − ẏ^η(σ)| dσ + |ξ_N − ξ_0|
= Σ_{j=0}^{k−1} [I + II + III + IV] + ∫_{s_k}^{s} |ẏ_N^{η_N}(σ) − ẏ^η(σ)| dσ + V.

Let us check that there exists K ⊂ N such that each term above converges to zero for N ∈ K.

For I, as f(·,·) is Lipschitz and |θ̇_N(s)| ≤ b for some b > 0,

I ≤ ∫_{s_j}^{s_{j+1}} K'b(|y_N^{η_N}(s_j) − y(σ)| + |u_N(s_j) − u(σ)|) dσ
≤ bK' ∫_{s_j}^{s_{j+1}} |y_N^{η_N}(s_j) − y_N^{η_N}(σ)| dσ + bK' ∫_{s_j}^{s_{j+1}} (|y_N^{η_N}(σ) − y(σ)| + |u_N(s_j) − u_N(σ)| + |u_N(σ) − u(σ)|) dσ.

Since ẏ_N^{η_N} is uniformly bounded, y_N^{η_N}(·) is Lipschitz; denote its Lipschitz constant by κ > 0. Then

∫_{s_j}^{s_{j+1}} |y_N^{η_N}(s_j) − y_N^{η_N}(σ)| dσ ≤ ∫_{s_j}^{s_{j+1}} κ|s_j − σ| dσ ≤ κh²/2 → 0.

As y_N^{η_N}(·) converges uniformly to y(·), we have |y_N^{η_N}(σ) − y(σ)| → 0. As every uniformly convergent sequence is bounded, there exists c > 0 such that |y_N^{η_N}(σ) − y(σ)| ≤ c for all N and all σ ∈ [0, 1]; then

∫_{s_j}^{s_{j+1}} |y_N^{η_N}(σ) − y(σ)| dσ ≤ ch → 0.

As u_N(s) = u_N(s_j) for all s ∈ [s_j, s_{j+1}[, we get ∫_{s_j}^{s_{j+1}} |u_N(s_j) − u_N(σ)| dσ = 0. We know u_N → u in the metric d̄; by Hölder's inequality, u_N → u in L^m_1([0, 1]).

For II, as y_N^{η_N}(·) converges uniformly to y(·), given ε = 1 there exists N_0 ∈ N such that for all N ≥ N_0 we have |y(s) − y_N^{η_N}(s)| < 1 for all s ∈ [0, 1], so |y(s)| < |y_N^{η_N}(s)| + 1 ≤ M̂ for some M̂ > 0, since y_N^{η_N}(·) is uniformly bounded. As f has linear growth,

|f(y(σ), u(σ))| ≤ K(1 + |y(σ)|) ≤ K(1 + M̂).

By the convergence of η_N in the metric d̄ we have

0 ≤ ∫_{s_j}^{s_{j+1}} |θ̇_N(s) − θ̇(s)| ds ≤ ∫_0^1 |θ̇_N(s) − θ̇(s)| ds → 0.

The terms III and IV are completely analogous to II and I, respectively. As η_N →^{d̄} η, the initial conditions converge, which gives the convergence of V. The last integral is handled exactly like those just treated.

In the same way as in the last theorem, we can prove the same result when the sequence of η's and its limit belong to the set S_C, and also when both belong to the set S_{C,N}.

Proposition 1. a) Let {η_N = (ξ_N, u_N, Ω_N)}_{N∈N} ⊂ S_C be a sequence such that η_N →^{d̄} η, where η ∈ S_C. Then there exists K ⊆ N such that y^{η_N}(·) converges uniformly to y^η(·) as N → ∞, N ∈ K, where y^{η_N}(·) and y^η(·) are the solutions of the system (4) related to η_N and η, respectively.

b) Let {η_N = (ξ_N, u_N, Ω_N)}_{N∈N} ⊂ S_{C,N} be such that η_N →^{d̄} η, η ∈ S_{C,N}. Let y_N^{η_N}(·) and y^η(·) be the polygonal arcs given by Euler's discretization, equation (7). Then there exists K ⊆ N such that y_N^{η_N}(·) converges uniformly to y^η(·) as N → ∞, N ∈ K.

Observation 1.
Note that Theorem 4 still holds when we replace ˜S_C and ˜S_{C,N} by S_C and S_{C,N}, respectively. The proof follows from Theorem 4 and the proof of Theorem 3.
The following lemma is very important for the next result.
Lemma 4.
Let ϕ : S C → R be given by ϕ (¯ η ) = h∇ f ( ξ ) , ¯ ξ − ξ i + 12 ¯ d (( ξ , u, Ω) , ( ¯ ξ , ¯ u, ¯Ω)) for each η ∈ S C fixed and ξ = ( ξ , y η (1)) ∈ C × R n . Then there exists ˆ η ∈ S C such that ϕ (ˆ η ) = min ¯ η ∈ S C ϕ (¯ η ) . roof. Let α = inf ˆ η ∈ S C ϕ (ˆ η ), then by the definition of infimum, there exists α N = ϕ ( η N ) so that α N → α (in R ), and η N ∈ S C for all N ∈ N . As α N is a convergent sequence in R , it mustexists M > | α N | ≤ M for all N ∈ N , that is h∇ f ( ξ ) , ξ N − ξ i + 12 ¯ d (( ξ , u, Ω) , ( ξ N , u N , Ω N )) ≤ M. As η N ∈ S C for all N ∈ N , we must have sup s ∈ [0 , | y η N ( s ) | ≤ L for all N ∈ N , then ξ N isuniformly bounded, that is, h∇ f ( ξ ) , ξ N − ξ i is uniformly bounded. We have d ( ξ , ξ N ) ≤ M , d ( u N , u ) ≤ M and d (Ω N , Ω) ≤ M , for some M > • d ( ξ , ξ N ) ≤ M = ⇒ | ξ N | ≤ M , M is some positive constant. Then there exist K ⊂ N and ¯ ξ ∈ C so that ξ N → ¯ ξ ; • d ( u N , u ) ≤ M = ⇒ R | u N ( s ) | ≤ M if we use Minkoviski’s inequality, M is somepositive constant. By the sequential compactness in L m ([0 , K ⊂ K and¯ u ∈ L m ([0 , Z h u N ( s ) − ¯ u ( s ) , h ( s ) i ds → ∀ h ∈ L m ([0 , , N ∈ K . We need to show that ¯ u ∈ U C . We know u N ( s ) ∈ U a.e., for all N ∈ K . Define W := { ω ∈ L m ([0 , ω ( t ) ∈ U a.e. t ∈ [0 , } . Then W is strongly closed in L m ([0 , U is closed by assumption. W is convex because U is convex. By Theorem III.7, [3], W is weakly closed. As ¯ u is the weak limit of the sequence u N , ¯ u belongs to W , and then ¯ u ∈ U C . 
$\bullet$ $d_3(\Omega_N, \Omega) \le M_1$. By a statement given in [11], page 7105, there exist $K_3 \subset K_2$ and $\bar\Omega \in P$ such that $d_3(\Omega_N, \bar\Omega) \to 0$, $N \in K_3$.
Then, for $N \in K_3$, we have
1) $\xi_{0,N} \to_d \bar\xi_0$;
2) $u_N \to \bar u$ weakly in $L^m([0,1])$;
3) $\Omega_N \to_d \bar\Omega$.
By Theorem 6.1 of [11], if $f$ is linear in $u$, 1), 2) and 3) hold and $\sup_{s\in[0,1]} |y^{\eta_N}(s)| \le L$, we can still apply Lemma 3.2 of [11] and get that $y^{\eta_N}(1) \to y^{\bar\eta}(1)$, where $y^{\bar\eta}(\cdot)$ is the trajectory of the reparametrized system associated with $\bar\eta = (\bar\xi_0, \bar u, \bar\Omega)$. Moreover, $\sup_{s\in[0,1]} |y^{\eta_N}(s)| \to \sup_{s\in[0,1]} |y^{\bar\eta}(s)|$, and as $\sup_{s\in[0,1]} |y^{\eta_N}(s)| \le L$, we must have $\sup_{s\in[0,1]} |y^{\bar\eta}(s)| \le L$. Then $\bar\eta \in S_C$.
We have strong convergence in $C$ and $P$, but only weak convergence in $L^m([0,1])$; we therefore need the lower semicontinuity of $d_2 : U_C \to \mathbb{R}$, $U_C \subset L^m_\infty([0,1]) \subset L^m([0,1])$, with respect to weak convergence in $L^m([0,1])$. Let $\lambda \ge 0$ be such that there exists $\hat u \in U_C$ satisfying $d_2(u, \hat u) = \lambda$. Define $A := \{\tilde u \in U_C : d_2(u, \tilde u) \le \lambda\}$. As $U_C$ and $d_2$ are convex, $A$ is convex. If we take a sequence $\{\tilde u_N\}_{N\in\mathbb{N}}$ in $A$ such that $\tilde u_N \to_d \tilde{\tilde u}$, we must have $\tilde u_N \in U_C$ for all $N \in \mathbb{N}$ and $d_2(\tilde u_N, u) \le \lambda$. As $d_2$ is continuous, $d_2(\tilde{\tilde u}, u) \le \lambda$. We need to show that $\tilde{\tilde u} \in U_C$; in the same way we showed $\bar u \in W$, we get $\tilde{\tilde u} \in U_C$. Then $A$ is strongly closed. By Theorem III.7 of [3], $A$ is weakly closed. In particular, if $u_N \to \bar u$ weakly in $L^m([0,1])$, then
\[ d_2(u, \bar u) \le \liminf_{N\to\infty} d_2(u_N, u). \]
We can write
\begin{align*}
\varphi(\bar\eta) &= \langle \nabla f(\xi), \bar\xi - \xi \rangle + \tfrac12\big[ d_1(\xi_0, \bar\xi_0) + d_2(u, \bar u) + d_3(\Omega, \bar\Omega) \big] \\
&\le \lim_{N\to\infty} \big[ \langle \nabla f(\xi), \xi_N - \xi \rangle + \tfrac12 d_1(\xi_{0,N}, \xi_0) + \tfrac12 d_3(\Omega_N, \Omega) \big] + \liminf_{N\to\infty} \tfrac12 d_2(u_N, u) \\
&= \liminf_{N\to\infty} \big[ \langle \nabla f(\xi), \xi_N - \xi \rangle + \tfrac12 d_1(\xi_{0,N}, \xi_0) + \tfrac12 d_2(u_N, u) + \tfrac12 d_3(\Omega_N, \Omega) \big] \\
&= \liminf_{N\to\infty} \varphi(\eta_N) = \alpha.
\end{align*}
Therefore, $\varphi$ achieves its minimum over $S_C$. The same result can be proved when the domain of $\varphi$ is replaced by $S_{C,N}$.
The next result provides the optimality functions for the problems $(P_{rep})$ and $(P^{C,N}_{rep})$.

Theorem 5.
Suppose that Assumption 1 holds. The following statements are satisfied:
a) Let
\[ \gamma(\eta) := \min_{\bar\eta \in S_C} \Big( \langle \nabla f(\xi), \bar\xi - \xi \rangle + \tfrac12\, \bar d\big((\xi_0, u, \Omega), (\bar\xi_0, \bar u, \bar\Omega)\big) \Big), \]
with $\xi := (\xi_0, y^{\eta}(1))$, $\bar\xi := (\bar\xi_0, y^{\bar\eta}(1))$, $\bar d = d_1 + d_2 + d_3$ and $\gamma : S_C \to \mathbb{R}$.
i) If $\bar\eta \in S_C$ is a local minimizer of $(P_{rep})$, then $\langle \nabla f(\bar\xi), \xi - \bar\xi \rangle \ge 0$ for all $\eta \in S_C$;
ii) $\gamma(\eta) \le 0$ for all $\eta \in S_C$;
iii) If $\bar\eta \in S_C$ is a minimizer of $(P_{rep})$, then $\gamma(\bar\eta) = 0$;
iv) $\gamma(\cdot)$ is upper semicontinuous.
Thus, we can conclude that $\gamma$ is an optimality function for the problem $(P_{rep})$.
b) Let
\[ \gamma_{C,N}(\eta) = \min_{\bar\eta \in S_{C,N}} \Big( \langle \nabla f_N(\xi), \bar\xi - \xi \rangle + \tfrac12\, \bar d\big((\xi_0, u, \Omega), (\bar\xi_0, \bar u, \bar\Omega)\big) \Big), \]
with $\gamma_{C,N} : S_{C,N} \to \mathbb{R}$.
i) If $\bar\eta \in S_{C,N}$ is a local minimizer of $(P^{C,N}_{rep})$, then $\langle \nabla f_N(\bar\xi), \xi - \bar\xi \rangle \ge 0$ for all $\eta \in S_{C,N}$;
ii) $\gamma_{C,N}(\eta) \le 0$ for all $\eta \in S_{C,N}$;
iii) If $\bar\eta \in S_{C,N}$ is a minimizer of $(P^{C,N}_{rep})$, then $\gamma_{C,N}(\bar\eta) = 0$;
iv) $\gamma_{C,N}(\cdot)$ is upper semicontinuous.
Thus, we can conclude that $\gamma_{C,N}$ is an optimality function for the problem $(P^{C,N}_{rep})$.
Proof. a) i) Suppose that $\bar\eta \in S_C$ is a minimizer of $(P_{rep})$ and that there exists $\eta \in S_C$ such that
\[ \langle \nabla f(\bar\xi), \xi - \bar\xi \rangle < 0. \tag{9} \]
As $C$ is convex, $\bar\xi + \lambda(\xi - \bar\xi) \in C \times \mathbb{R}^n$ for all $\lambda \in [0,1]$, and there exists $\bar\lambda \in (0,$
$1]$ such that
\[ f(\bar\xi + \bar\lambda(\xi - \bar\xi)) - f(\bar\xi) \le \bar\lambda\, \langle \nabla f(\bar\xi), \xi - \bar\xi \rangle < 0. \]
Let $\xi_0^1 = \bar\xi_0 + \bar\lambda(\xi_0 - \bar\xi_0)$ and $\xi_1^1 = y^{\bar\eta}(1) + \bar\lambda(y^{\eta}(1) - y^{\bar\eta}(1))$ be the fixed initial and final points of the reparametrized system (4). As this system is controllable, there must exist a solution $y^1(\cdot)$ of the reparametrized system satisfying $y^1(0) = \xi_0^1$ and $y^1(1) = \xi_1^1$, i.e.,
\[ f(\bar\xi + \bar\lambda(\xi - \bar\xi)) < f(\bar\xi). \]
This contradicts the minimality assumption.
ii) Note that
\[ \gamma(\eta) \le \langle \nabla f(\xi), \xi - \xi \rangle + \tfrac12\, \bar d\big((\xi_0, u, \Omega), (\xi_0, u, \Omega)\big) = 0 \quad \forall\, \eta \in S_C, \]
because $\gamma(\eta)$, being a minimum, is no larger than the value of the minimized expression evaluated at $\bar\eta = \eta$.
iii) Suppose that $\bar\eta \in S_C$ is a minimizer of $(P_{rep})$. We have
\[ 0 \ge \gamma(\bar\eta) \ge \min_{\eta \in S_C} \langle \nabla f(\bar\xi), \xi - \bar\xi \rangle \ge 0, \]
where the first and third inequalities use items ii) and i), respectively.
iv) Let $\{\eta_N\}_{N\in\mathbb{N}}$ be a sequence in $S_C$ such that $\eta_N \to_d \eta$, and let $K \subset \mathbb{N}$ be such that
\[ \lim \gamma(\eta_N) = \lim_{N\in K} \gamma(\eta_N). \]
By Proposition 1 part a), there exists
$\bar K \subset K$ such that $y^{\eta_N}(1) \to y^{\eta}(1)$ whenever $N \in \bar K$. Then, by the definition of $\gamma(\cdot)$ and Assumption 1, it follows that $\lim_{N\in\bar K} \gamma(\eta_N) = \gamma(\eta)$. But, as the limit $\lim_{N\in K} \gamma(\eta_N)$ exists, all subsequences must converge to the same point, that is, $\lim \gamma(\eta_N) = \gamma(\eta)$. Part b) is totally analogous to part a).
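The defining properties of the optimality functions in Theorem 5 (nonpositive everywhere, zero at minimizers) are easy to verify numerically in a finite-dimensional analogue. A minimal sketch, assuming a smooth unconstrained toy objective rather than the paper's $(P_{rep})$: for $\gamma(x) = \min_y \big[\langle \nabla f(x), y - x\rangle + \tfrac12 |y - x|^2\big]$ the inner minimizer is $y = x - \nabla f(x)$, so $\gamma(x) = -\tfrac12 |\nabla f(x)|^2$.

```python
import numpy as np

def gamma(grad_f, x):
    """Finite-dimensional analogue of a Polak-style optimality function:
    gamma(x) = min_y <grad_f(x), y - x> + 0.5*|y - x|^2.
    The inner minimizer is y = x - grad_f(x), hence
    gamma(x) = -0.5*|grad_f(x)|^2 <= 0, with equality exactly at
    stationary points of f."""
    g = np.asarray(grad_f(x), dtype=float)
    return -0.5 * float(np.dot(g, g))

# Toy objective (hypothetical): f(x) = |x - a|^2, so grad f(x) = 2(x - a)
a = np.array([1.0, 2.0])
grad_f = lambda x: 2.0 * (x - a)

g_away = gamma(grad_f, np.zeros(2))  # strictly negative away from a
g_at = gamma(grad_f, a)              # zero at the minimizer a
```

This mirrors items ii) and iii) of the theorem in the simplest possible setting; the constrained, infinite-dimensional case of the paper replaces the inner minimization over $\mathbb{R}^n$ by a minimization over $S_C$.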
Theorem 6.
Suppose that Assumption 1 holds. Then $\{(P^{C,N}_{rep}, \gamma_{C,N})\}_{N\in\mathbb{N}}$ is a sequence of consistent approximations to the pair $(P_{rep}, \gamma)$.
Proof. To get this result, we need to show that the problems $(P^{C,N}_{rep})$ epi-converge to $(P_{rep})$ and that $\overline{\lim}\, \gamma_{C,N}(\eta_N) \le \gamma(\eta)$ whenever $\eta_N \to_d \eta$, $N \to \infty$, $N \in \mathbb{N}$.
Epi-convergence:
i) Let $\eta \in S_C$ be arbitrary. By the construction of $S_{C,N}$ itself and by the proof of Theorem 3, there exists $\{\eta_N\}_{N\in\mathbb{N}}$ such that $\eta_N \in S_{C,N}$ for all $N \in \mathbb{N}$ and $\eta_N \to_d \eta$, $N \to \infty$. Let $K \subset \mathbb{N}$ be such that
\[ \lim f_N(\xi_{0,N}, y_N^{\eta_N}(1)) = \lim_{N\in K} f_N(\xi_{0,N}, y_N^{\eta_N}(1)). \]
By Observation 1, there exists $K' \subset K$ such that $y_N^{\eta_N}(1) \to y^{\eta}(1)$, $N \in K'$. Then we have
\[ \lim_{N\in K'} f_N(\xi_{0,N}, y_N^{\eta_N}(1)) = f(\xi_0, y^{\eta}(1)), \]
because of Assumption 1. It follows that
\[ \lim_{N\in K} f_N(\xi_{0,N}, y_N^{\eta_N}(1)) = f(\xi_0, y^{\eta}(1)), \]
because if the limit exists, all subsequences must converge to the same point. This gives us
\[ \lim f_N(\xi_{0,N}, y_N^{\eta_N}(1)) = f(\xi_0, y^{\eta}(1)). \]
ii) Let $\{\eta_N\}_{N\in K}$ be a sequence such that $\eta_N \in S_{C,N}$ for all $N \in K$ and $\eta_N \to_d \eta$, $N \to \infty$. By Theorem 3, we must have $\eta \in S_C$. Take $\bar K \subset K$ such that
\[ \lim_{N\in K} f(\xi_{0,N}, y_N^{\eta_N}(1)) = \lim_{N\in\bar K} f(\xi_{0,N}, y_N^{\eta_N}(1)). \]
As $\eta_N \to_d \eta$, $N \in \bar K$, it follows from Observation 1 that there exists $\bar{\bar K} \subset \bar K$ such that $y_N^{\eta_N}(1) \to y^{\eta}(1)$, $N \in \bar{\bar K}$. By the same arguments used in the proof of item i), it follows that $\lim_{N\in K} f(\xi_{0,N}, y_N^{\eta_N}(1)) = f(\xi_0, y^{\eta}(1))$.
$\therefore\ (P^{C,N}_{rep}) \to^{\mathrm{Epi}} (P_{rep})$.
Now, let $\{\eta_N\}_{N\in\mathbb{N}}$ be a sequence such that $\eta_N \in S_{C,N}$ for all $N \in \mathbb{N}$ that converges to $\eta \in S_C$. We must show that $\overline{\lim}\, \gamma_{C,N}(\eta_N) \le \gamma(\eta)$, $N \to \infty$. By Lemma 4, there exists $\bar\eta \in S_C$ such that
\[ \gamma(\eta) = \langle \nabla f(\xi_0, y^{\eta}(1)), (\bar\xi_0, y^{\bar\eta}(1)) - (\xi_0, y^{\eta}(1)) \rangle + \tfrac12\, \bar d\big((\xi_0, u, \Omega), (\bar\xi_0, \bar u, \bar\Omega)\big). \]
Let
$K \subset \mathbb{N}$ be such that $\lim \gamma_{C,N}(\eta_N) = \lim_{N\in K} \gamma_{C,N}(\eta_N)$. By Lemma 2, there exists $\{\bar\eta_N\}_{N\in K}$ such that $\bar\eta_N \to \bar\eta$, with $\bar\eta_N \in S_{C,N}$ for all $N \in K$. By the definition of $\gamma_{C,N}(\cdot)$,
\[ \gamma_{C,N}(\eta_N) \le \langle \nabla f_N(\xi_{0,N}, y_N^{\eta_N}(1)), (\bar\xi_{0,N}, y_N^{\bar\eta_N}(1)) - (\xi_{0,N}, y_N^{\eta_N}(1)) \rangle + \tfrac12\, \bar d\big((\xi_{0,N}, u_N, \Omega_N), (\bar\xi_{0,N}, \bar u_N, \bar\Omega_N)\big). \]
By Observation 1, there exists
$K_1 \subset K$ such that $y_N^{\eta_N}(1) \to y^{\eta}(1)$ and $y_N^{\bar\eta_N}(1) \to y^{\bar\eta}(1)$, $N \in K_1$; then, passing to the limit as $N \to \infty$, $N \in K_1$, in the last inequality, we get
\[ \lim_{N\in K_1} \gamma_{C,N}(\eta_N) \le \langle \nabla f(\xi_0, y^{\eta}(1)), (\bar\xi_0, y^{\bar\eta}(1)) - (\xi_0, y^{\eta}(1)) \rangle + \tfrac12\, \bar d\big((\xi_0, u, \Omega), (\bar\xi_0, \bar u, \bar\Omega)\big) = \gamma(\eta), \]
which gives us $\overline{\lim}\, \gamma_{C,N}(\eta_N) \le \gamma(\eta)$.
The next theorem shows that a sequence of local (respectively, global) minimizers of $(P^{C,N}_{rep})$ that has a convergent subsequence converges to a local (respectively, global) minimizer of $(P_{rep})$.

Theorem 7.
Let $(P^{C,N}_{rep})$ and $(P_{rep})$ be defined as before. Let $\{\eta_N\}_{N\in\mathbb{N}}$ be a sequence of local (respectively, global) minimizers of $(P^{C,N}_{rep})$ such that $\eta_N \to_d \eta$ as $N \to \infty$, with $\eta \in S_C$. Then $\eta$ is a local (respectively, global) minimizer of $(P_{rep})$ and there exists $K \subset \mathbb{N}$ such that $f_N(\xi_{0,N}, y_N^{\eta_N}(1)) \to f(\xi_0, y^{\eta}(1))$ as $N \to \infty$, $N \in K$.
Proof. Let $\{\eta_N\}_{N\in\mathbb{N}}$ be a sequence of local minimizers of the problems $(P^{C,N}_{rep})$, that is, there exists $\varepsilon > 0$ such that for every $\hat\eta \in S_{C,N}$ satisfying $d(\eta_N, \hat\eta) \le \varepsilon$ we have $f_N(\xi_{0,N}, y_N^{\eta_N}(1)) \le f_N(\hat\xi_0, y^{\hat\eta}(1))$, and suppose $\eta_N \to_d \eta$. By Observation 1 and Assumption 1, there exists $K \subset \mathbb{N}$ such that $y_N^{\eta_N} \to y^{\eta}$ uniformly and $f_N(\xi_{0,N}, y_N^{\eta_N}(1)) \to f(\xi_0, y^{\eta}(1))$ as $N \to \infty$, $N \in K$. We need to show that $\eta$ is a local minimizer of the problem $(P_{rep})$. Suppose it is not; then, given $\varepsilon > 0$, there exist $\bar\eta \in S_C$ with $d(\eta, \bar\eta) < \varepsilon/4$ and $\varepsilon_0 > 0$ such that
\[ f(\bar\xi_0, y^{\bar\eta}(1)) = f(\xi_0, y^{\eta}(1)) - 3\varepsilon_0, \]
where $y^{\bar\eta}(\cdot)$ is the trajectory associated with $\bar\eta$.
As $(P^{C,N}_{rep})$ epi-converges to $(P_{rep})$ and $\eta_N \to_d \eta$ with $\eta_N \in S_{C,N}$, we have
\[ \lim_{N\in K} f_N(\xi_{0,N}, y_N^{\eta_N}(1)) \ge f(\xi_0, y^{\eta}(1)). \tag{10} \]
By the epi-convergence again, there exists a sequence $\{\bar\eta_N\}_{N\in\mathbb{N}}$ in $S_{C,N}$ such that $\bar\eta_N \to_d \bar\eta$ and
\[ \lim_{N\in K} f_N(\bar\xi_{0,N}, y_N^{\bar\eta_N}(1)) \le \overline{\lim}\, f_N(\bar\xi_{0,N}, y_N^{\bar\eta_N}(1)) \le f(\bar\xi_0, y^{\bar\eta}(1)). \tag{11} \]
Let $K' \subset K$ be such that
\[ \lim_{N\in K} f_N(\xi_{0,N}, y^{\eta_N}(1)) = \lim_{N\in K'} f_N(\xi_{0,N}, y^{\eta_N}(1)). \]
As $\eta_N \to_d \eta$, there exists $N_1 \in K'$ such that $d(\eta_N, \eta) \le \varepsilon/4$ for all $N \ge N_1$, $N \in K'$. As $\bar\eta_N \to_d \bar\eta$, there exists $N_2 \in K'$ such that $d(\bar\eta_N, \bar\eta) < \varepsilon/4$ for all $N \ge N_2$, $N \in K'$. Let $N_3 = \max\{N_1, N_2\}$; then, for $N \ge N_3$, $N \in K'$,
\[ d(\bar\eta_N, \eta_N) \le d(\eta_N, \eta) + d(\eta, \bar\eta) + d(\bar\eta, \bar\eta_N) \le \varepsilon/4 + \varepsilon/4 + \varepsilon/4 < \varepsilon, \]
and there exists $N_4 \in K'$ such that, for all $N \ge N_4$, $N \in K'$,
\[ (11) \Rightarrow f_N(\bar\xi_{0,N}, y^{\bar\eta_N}(1)) \le f(\bar\xi_0, y^{\bar\eta}(1)) + \varepsilon_0 = f(\xi_0, y^{\eta}(1)) - 2\varepsilon_0, \]
\[ (10) \Rightarrow f_N(\xi_{0,N}, y_N^{\eta_N}(1)) \ge f(\xi_0, y^{\eta}(1)) - \varepsilon_0. \]
It follows that
\[ f_N(\bar\xi_{0,N}, y_N^{\bar\eta_N}(1)) \le f_N(\xi_{0,N}, y_N^{\eta_N}(1)) - \varepsilon_0 \]
for all $N \ge N_5$, $N \in K'$, where $N_5 = \max\{N_3, N_4\}$. Since $d(\bar\eta_N, \eta_N) < \varepsilon$ for such $N$, this contradicts the local minimality of $\eta_N$.
Suppose now that $\{\eta_N\}_{N\in\mathbb{N}} \subset S_{C,N}$ is a sequence of global minimizers of $(P^{C,N}_{rep})$ converging to $\eta \in S_C$ in the metric $d$. By Theorem 7, $\eta$ is a global minimizer of $(P_{rep})$. Then $y^{\eta}(\cdot)$ given by (4) is the function that minimizes $(P_{rep})$.
Define $\pi : [0,T] \to [0,$
$1]$ by
\[ \pi(t) = \frac{t + |\mu|([0,t])}{T + \|\mu\|}, \]
and $x : [0,T] \to \mathbb{R}^n$ by $x(t) = y^{\eta}(\pi(t))$. By Theorem 1, as $y^{\eta}(\cdot)$ is a solution of system (4), $x(\cdot)$ is a solution of the original system (1) associated with $p = (\mu, |\mu|, \psi_{t_i})$ and $\bar u : [0,T] \to \mathbb{R}^m$ given by $\bar u(t) = u(\pi(t))$. As $y^{\eta}(\cdot)$ minimizes the reparametrized problem $(P_{rep})$ and $f(\xi_0, x(T)) = f(\xi_0, y^{\eta}(1))$, we have that $x(\cdot)$ minimizes $(P)$.
Now, we can show that a subsequence of the discrete-time approximated functions graph-converges to a solution.

Theorem 8.
Suppose $\eta_N \to_d \eta$ and define $\Lambda_N := \{(t_k, y_k) : k = 0, \ldots, N\}$ and $X_\mu := (x(\cdot), \phi(\cdot), \{\mathcal{X}_{t_i}\}_{t_i\in\Theta})$, where $y_k = y_N^{\eta_N}(s_k)$, $t_k = \theta_N(s_k)$, $k = 0, \ldots, N$, $x(\cdot)$ is defined as above and
\[ \operatorname{gr} X_\mu := \{(t, x(t)) : t \in [0,T]\} \cup \{(t_i, y(s)) : s \in I_i,\ i \in \mathcal{I}\}. \]
Then, there exists
$K \subset \mathbb{N}$ such that
\[ \operatorname{dist}_H(\Lambda_N, \operatorname{gr} X_\mu) \to 0 \ \text{ as } N \to \infty,\ N \in K, \]
where the Hausdorff distance between two compact subsets $A, B \subset \mathbb{R}^m$ is given by
\[ \operatorname{dist}_H(A, B) = \min\{\delta \ge 0 : A \subseteq B + \delta B[0,1] \text{ and } B \subseteq A + \delta B[0,1]\}. \]
Proof. For each $N \in \mathbb{N}$, define $\tilde\Lambda_N := \{(s_k, y_k) : k = 0, \ldots, N\}$. Note that, as $\eta_N \to_d \eta$, by Observation 1 there exists $K \subset \mathbb{N}$ such that $y_N^{\eta_N} \to y^{\eta}$ uniformly when $N \in K$; then we have
\[ \operatorname{dist}_H(\operatorname{gr} y_N^{\eta_N}, \operatorname{gr} y^{\eta}) \to 0,\ N \to \infty,\ N \in K. \]
Observe that the second coordinates of $\Lambda_N$ and $\tilde\Lambda_N$ are equal for each $k = 0, \ldots, N$. In the same way, the second coordinates of $\operatorname{gr} X_\mu$ and $\operatorname{gr} y$ are the same for each $t \notin \Theta$, and when $t \in \Theta$ the sets of projections onto the second coordinate coincide. Then we have
\[ \operatorname{dist}_H(\Lambda_N, \operatorname{gr} X_\mu) \le b\, \operatorname{dist}_H(\tilde\Lambda_N, \operatorname{gr} y), \]
where $b$ is the Lipschitz constant of $\theta_N(\cdot)$. We can also get
\[ \operatorname{dist}_H(\tilde\Lambda_N, \operatorname{gr} y) \le \operatorname{dist}_H(\tilde\Lambda_N, \operatorname{gr} y_N^{\eta_N}) + \operatorname{dist}_H(\operatorname{gr} y_N^{\eta_N}, \operatorname{gr} y). \]
Passing to the limit for $N \in K$ in the last inequality, we get the desired result.

In this section we give a bound on the approximation error between the linear interpolation of the sequence of Euler points and a trajectory of the reparametrized problem, and also between the objective functions of the approximated problem and the reparametrized problem.
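For finite samplings such as $\Lambda_N$, the Hausdorff distance of Theorem 8 reduces to a max–min computation over point sets. A minimal sketch with toy data (the sampled graph below is invented for illustration, not one of the paper's trajectories):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets
    (rows of A and B are points in R^m): the larger of the two
    one-sided deviations max_a dist(a, B) and max_b dist(b, A)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

# Coarse and fine samplings (toy data) of the graph of t -> t**2 on [0, 1]
t_coarse = np.linspace(0.0, 1.0, 11)
t_fine = np.linspace(0.0, 1.0, 101)
A = np.column_stack([t_coarse, t_coarse**2])
B = np.column_stack([t_fine, t_fine**2])
d = hausdorff(A, B)  # shrinks as the coarse sampling is refined
```

Refining the coarse sampling drives `d` toward zero, which is the discrete analogue of the graph-convergence $\operatorname{dist}_H(\Lambda_N, \operatorname{gr} X_\mu) \to 0$ asserted in Theorem 8.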
Observation 2.
Note that the Picard Lemma 5.6.3 of [18] holds if we define $h : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^{q+1} \to \mathbb{R}^n$ as
\[ h(x, u, \Omega) = f(x, u)\,\dot\theta(s) + g(x)\,\dot\phi(s) \quad \text{a.e. } s \in [0,1], \]
since $h(\cdot,\cdot,\cdot)$ is Lipschitz with respect to the first variable. Indeed, let $x, y \in \mathbb{R}^n$, $u \in \mathbb{R}^m$ and $\Omega \in \mathbb{R}^{q+1}$; then
\[ |h(x,u,\Omega) - h(y,u,\Omega)| \le |f(x,u) - f(y,u)|\,|\dot\theta(s)| + |g(x) - g(y)|\,|\dot\phi(s)| \le (bK' + rK'')\,|x - y|, \]
since $f$ and $g$ are Lipschitz in the first variable.

Theorem 9.
Suppose that Assumption 1 holds, $N \in \mathbb{N}$, and $S \subset B$ is a bounded set. Then there exists a constant $K_\xi > 0$ such that, for any $\eta = (\xi_0, u, \Omega)$ and $\hat\eta = (\hat\xi_0, u, \Omega) \in S \cap S_N$ which differ only in the initial state,
\[ |y_N^{\hat\eta}(s) - y^{\eta}(s)| \le K_\xi\,(|\xi_0 - \hat\xi_0| + 1/N), \]
where $y_N^{\hat\eta}(\cdot)$ is the linear interpolation of the sequence of points obtained by the Euler discretization of the reparametrized system.
Proof. Suppose that $N \in \mathbb{N}$, $\eta$ and $\hat\eta$ are given and $y^{\eta}(\cdot)$ and $y_N^{\hat\eta}(\cdot)$ are as above. By the Picard Lemma we have
\[ |y_N^{\hat\eta}(s) - y^{\eta}(s)| \le |\xi_0 - \hat\xi_0| + \int_0^1 |\dot y_N^{\hat\eta}(s) - h(y_N^{\hat\eta}(s), u(s), \Omega)|\, ds =: |\xi_0 - \hat\xi_0| + e(y_N^{\hat\eta}, \eta), \]
where $h$ is defined as in Observation 2. Substituting the corresponding expressions, we get
\begin{align*}
e(y_N^{\hat\eta}, \eta) \le{} & \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} |f(y_N^{\hat\eta}(s_k), u(s_k)) - f(y_N^{\hat\eta}(s), u(s_k))|\, \Big|\frac{\theta(s_{k+1}) - \theta(s_k)}{h}\Big|\, ds \\
&+ \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} |f(y_N^{\hat\eta}(s), u(s_k))|\, \Big|\dot\theta(s) - \frac{\theta(s_{k+1}) - \theta(s_k)}{h}\Big|\, ds \\
&+ \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} |g(y_N^{\hat\eta}(s_k)) - g(y_N^{\hat\eta}(s))|\, \Big|\frac{\phi(s_{k+1}) - \phi(s_k)}{h}\Big|\, ds \\
&+ \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} |g(y_N^{\hat\eta}(s))|\, \Big|\dot\phi(s) - \frac{\phi(s_{k+1}) - \phi(s_k)}{h}\Big|\, ds \\
=:{} & I + II + III + IV.
\end{align*}
We need to bound $I$, $II$, $III$ and $IV$.
For $I + III$: as $f(\cdot,\cdot)$ is Lipschitz of rank $K'$, $g(\cdot)$ is Lipschitz of rank $K''$, $\theta(\cdot)$ is Lipschitz of rank $b$, and $\phi(\cdot)$ is Lipschitz of rank $r$, we have
\[ I + III \le \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} (bK' + rK'')\, |y_N^{\hat\eta}(s_k) - y_N^{\hat\eta}(s)|\, ds. \]
Substituting the expression of $y_N^{\hat\eta}(s)$ and defining $\rho := bK' + rK''$,
\[ I + III \le \rho \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} |N(s - s_k)\,(y_N^{\hat\eta}(s_{k+1}) - y_N^{\hat\eta}(s_k))|\, ds, \]
that is,
\[ I + III \le \rho \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} N(s - s_k)\, h\beta\,(|y_N^{\hat\eta}(s_k)| + 1)\, ds, \]
where $\beta := K(b + r)$. In the last inequality we substituted the expression of $y_N^{\hat\eta}(s_{k+1}) - y_N^{\hat\eta}(s_k)$ given by the Euler discretization, and we used the fact that $f(\cdot,\cdot)$ and $g(\cdot)$ have linear growth in the first variable. Integrating, and using $h = 1/N$,
\[ I + III \le \frac{\rho\beta}{2N^2} \sum_{k=0}^{N-1} (|y_N^{\hat\eta}(s_k)| + 1) \le \frac{\rho\beta\, \hat K_\xi}{N}, \]
where $\hat K_\xi := \sup\{|\hat\xi_0| + 1 : (\hat\xi_0, u, \Omega) \in S\}$ and in the last inequality we used Lemma (tese) to bound $|y_N^{\hat\eta}(s_k)| + 1$.
Now, define $\Phi_N : [0,1] \to \mathbb{R}$ by
\[ \Phi_N(s) := \max\left\{ \Big|\dot\theta(s) - \frac{\theta(s_{k+1}) - \theta(s_k)}{h}\Big|,\ \Big|\dot\phi(s) - \frac{\phi(s_{k+1}) - \phi(s_k)}{h}\Big| \right\}, \quad s \in [s_k, s_{k+1}). \]
We know that $\Phi_N(s) \to 0$ as $N \to \infty$. Then, given $\varepsilon = 1/N$, there exists $N_0 \in \mathbb{N}$ such that for all $N \ge N_0$ we have
\[ |\Phi_N(s)| \le 1/N. \tag{12} \]
If $N \ge N_0$, inequality (12) holds. If $N < N_0$, define $M := \max_{n \in \{1,\ldots,N_0\}} \{\Phi_n(s)\}$. We have
\[ |\Phi_N(s)| \le M \le M\,\frac{N_0}{N} =: \frac{M_0}{N}. \tag{13} \]
For $II + IV$: as $f(\cdot,\cdot)$ and $g(\cdot)$ have linear growth in the first variable,
\[ II + IV \le \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} K\,(|y_N^{\hat\eta}(s)| + 1)\,\Phi_N(s)\, ds \le \sum_{k=0}^{N-1} \int_{s_k}^{s_{k+1}} K(\beta+1)\, e^\beta\, (1 + |\hat\xi_0|)\,\Phi_N(s)\, ds, \]
where we substituted the expression of $y_N^{\hat\eta}(s)$ and used Lemma (tese).
Then,
\[ II + IV \le K(\beta+1)\, e^\beta\, (1 + |\hat\xi_0|) \int_0^1 \Phi_N(s)\, ds \le \frac{K(\beta+1)\, e^\beta\, \hat K_\xi\, M_0}{N}, \]
because of inequality (13). Now, define
\[ K_\xi := \max\{1,\ \rho\beta \hat K_\xi + K(\beta+1)\, e^\beta\, \hat K_\xi\, M_0\}; \]
then we have the result
\[ |y_N^{\hat\eta}(s) - y^{\eta}(s)| \le K_\xi\,(|\xi_0 - \hat\xi_0| + 1/N). \]
The following theorem gives the error between the objective functions.
Theorem 10.
Suppose that Assumption 1 holds. Then, for every bounded subset $S \subset B$, there exists a constant $K_S > 0$ such that, for any $\eta \in S \cap S_N$ and $k = 0, \ldots, N$,
\[ |f(\xi_0, y^{\eta}(s_k)) - f(\xi_0, y_N^{\eta}(s_k))| \le K_S/N, \]
where $y^{\eta}(\cdot)$ is the solution of the reparametrized system and $y_N^{\eta}(\cdot)$ is the linear interpolation of the points obtained by the Euler discretization.
Proof. As $f(\cdot,\cdot)$ is Lipschitz of rank $L$, we have
\[ |f(\xi_0, y^{\eta}(s_k)) - f(\xi_0, y_N^{\eta}(s_k))| \le L\,|y^{\eta}(s_k) - y_N^{\eta}(s_k)| \le L K_\xi/N =: K_S/N, \]
where in the last inequality we used the previous theorem.

We studied an impulsive optimal control problem and showed that approximated problems for the reparametrized problem can be obtained by Euler's discretization; if the sequence of solutions of the approximated problems converges, it converges to a solution of the reparametrized problem. We also showed that a subsequence of the discrete-time approximated functions graph-converges to a solution. The results combine ideas from the theory of consistent approximations of [18] with the Euler approximation and graph-convergence results for impulsive differential inclusions of [26]. In this way we contribute to the literature on numerical methods for impulsive optimal control problems.
Grants 2011/14121-9, 2014/05558-2 and 2013/07375-0, São Paulo Research Foundation (FAPESP).
References
[1]
A. V. Arutyunov, D. Y. Karamzin, and F. D. Pereĭra, On impulsive control problems with constraints: control of jumps of systems, no. 65, Matematicheskaya Fizika, Kombinatorika i Optimalnoe Upravlenie, 2009.
[2] A. V. Arutyunov and L. F. Pereĭra, Necessary extremum conditions without a priori normality assumptions, Dokl. Akad. Nauk, 402 (2005), pp. 727–731.
[3] H. Brezis, Analyse fonctionnelle, Collection Mathématiques Appliquées pour la Maîtrise [Collection of Applied Mathematics for the Master's Degree], Masson, Paris, 1983. Théorie et applications [Theory and applications].
[4]
F. Camilli and M. Falcone, Approximation of control problems involving ordinary and impulsive controls, ESAIM Control Optim. Calc. Var., 4 (1999), pp. 159–176 (electronic).
[5] A. L. Dontchev and W. W. Hager, The Euler approximation in state constrained optimal control, Math. Comp., 70 (2001), pp. 173–203.
[6] A. L. Dontchev, W. W. Hager, and K. Malanowski, Error bounds for Euler approximation of a state and control constrained optimal control problem, Numer. Funct. Anal. Optim., 21 (2000), pp. 653–682.
[7] A. L. Dontchev, W. W. Hager, and V. M. Veliov, Second-order Runge-Kutta approximations in control constrained optimal control, SIAM J. Numer. Anal., 38 (2000), pp. 202–226.
[8]
E. B. Dynkin, Markov processes and related problems of analysis, vol. 54 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge-New York.
[9] W. W. Hager, Runge-Kutta discretizations of optimal control problems, in System theory: modeling, analysis and control (Cambridge, MA, 1999), vol. 518 of Kluwer Internat. Ser. Engrg. Comput. Sci., Kluwer Acad. Publ., Boston, MA, 2000, pp. 233–244.
[10] W. W. Hager, Runge-Kutta methods in optimal control and the transformed adjoint system, Numer. Math., 87 (2000), pp. 247–282.
[11]
D. Y. Karamzin, Necessary conditions for the minimum in an impulsive optimal control problem, Sovrem. Mat. Prilozh., (2005), pp. 74–134.
[12] C. Y. Kaya, Inexact restoration for Runge-Kutta discretization of optimal control problems, SIAM J. Numer. Anal., 48 (2010), pp. 1492–1517.
[13] C. Y. Kaya and J. M. Martínez, Euler discretization and inexact restoration for optimal control, J. Optim. Theory Appl., 134 (2007), pp. 191–206.
[14] G. D. Maso and F. Rampazzo, On systems of ordinary differential equations with measures as controls, 4 (1991), pp. 739–765.
[15]
M. Motta and C. Sartori, Generalized solutions to nonlinear stochastic differential equations with vector-valued impulsive controls, Discrete Contin. Dyn. Syst., 29 (2011), pp. 595–613.
[16] F. L. Pereira and G. N. Silva, Necessary conditions of optimality for vector-valued impulsive control problems, Systems Control Lett., 40 (2000), pp. 205–215.
[17] E. Polak, On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems, Math. Programming, 62 (1993), pp. 385–414.
[18] E. Polak, Optimization, vol. 124 of Applied Mathematical Sciences, Springer-Verlag, New York, 1997. Algorithms and consistent approximations.
[19]
W. Rudin , Functional analysis, International Series in Pure and Applied Mathematics,McGraw-Hill, Inc., New York, second ed., 1991.[20]
G. N. Silva, I. S. Litvinchev, M. Rojas-Medar, and A. J. V. Brandão, State constraints in optimal impulsive controls, Comput. Appl. Math., 19 (2000), pp. 179–206.
[21] G. N. Silva and J. D. L. Rowland, On the optimal impulsive control problem, Rev. Mat. Estatíst., 14 (1996), pp. 17–33.
[22] G. N. Silva and R. B. Vinter, Measure driven differential inclusions, J. Math. Anal. Appl., 202 (1996), pp. 727–746.
[23] G. N. Silva and R. B. Vinter, Necessary conditions for optimal impulsive control problems, SIAM J. Control Optim., 35 (1997), pp. 1829–1846.
[24]
R. Vinter, Optimal control, Modern Birkhäuser Classics, Birkhäuser Boston Inc., Boston, MA, 2010. Paperback reprint of the 2000 edition.
[25] P. R. Wolenski and S. Žabić, A differential solution concept for impulsive systems, vol. 13B, 2006.
[26] P. R. Wolenski and S. Žabić, A sampling method and approximation results for impulsive systems, vol. 46, 2007.
[27]