Interior Point Method for Dynamic Constrained Optimization in Continuous Time
Mahyar Fazlyab, Santiago Paternain, Victor M. Preciado, Alejandro Ribeiro
Abstract — This paper considers a class of convex optimization problems where both the objective function and the constraints have a continuously varying dependence on time. Our goal is to develop an algorithm to track the optimal solution as it continuously changes over time, inside or on the boundary of the dynamic feasible set. We develop an interior point method that asymptotically succeeds in tracking this optimal point in nonstationary settings. The method utilizes a time-varying constraint slack and a prediction-correction structure that relies on time derivatives of functions and constraints and on Newton steps in the spatial domain. Error-free tracking is guaranteed under customary assumptions on the optimization problems and time differentiability of objective and constraints. The effectiveness of the method is illustrated in a problem that involves multiple agents tracking multiple targets.
I. INTRODUCTION
In a conventional optimization problem we are given a fixed objective and fixed constraints and are tasked with finding the optimal argument that minimizes the objective among all feasible variables. In a time-varying problem the objective and constraints change continuously in time and we are tasked with tracking the optimal point as it varies over time. These problems arise often in dynamical systems and control because many practical situations involve an objective function or a set of constraints that depend on time [1], [5], [8], [12]. Particular examples include estimation of the path of a stochastic process [9], signal detection with adaptive filters [4], tracking moving targets [20], and scheduling trajectories in an autonomous team of robots [18].

Methods to solve convex optimization problems (say, gradient descent, Newton's method, and interior point) are iterative in nature [3], [6]. When applied to a time-varying nonstationary setting, each iteration moves the argument closer to the optimum while the optimum drifts away because of the changing nature of the objective and the constraints. This process is likely to settle into a steady-state optimality gap that depends on the relative time constants of the dynamical process and the optimization algorithm. That this is indeed true has been observed and proven for gradient descent in unconstrained optimization [13], as well as in constrained optimization problems that arise in the specific contexts of distributed robotics [19], sequential estimation [9], and distributed optimization with a time-varying alternating direction method of multipliers [10].

Alternatively, one can draw inspiration from the prediction-correction structure of Bayesian filters: utilize knowledge of the system's dynamics to predict the drift of the optimal operating point, and utilize the descent step of an optimization algorithm to correct the prediction. Variations of this idea have been developed in discrete [17] and continuous [2] time.
When used in discrete time, the addition of a prediction step has been shown to reduce the tracking error relative to verbatim use of a descent algorithm [16], [17]. When used in continuous time, the use of a prediction step and a Newton correction results in perfect tracking of the optimal argument of an unconstrained optimization problem [2], [15].

This paper develops an interior point method to track the optimal point of a convex time-varying constrained optimization problem (Section II). Important characteristics of this method are: (i) the use of a time-varying logarithmic barrier akin to the barrier used in static interior point methods; (ii) the use of a time-varying constraint slack that is decreased over time and guarantees asymptotic satisfaction of the constraints; (iii) the use of time derivatives that play the role of a prediction step that tries to follow the movement of the optimal argument; and (iv) the use of spatial Newton decrements that play the role of a correction step by pushing towards the current optimum. The main contribution of this paper is to show that this method converges to the time-varying optimum under mild assumptions (Section III). These assumptions correspond to the customary requirements to prove convergence of interior point methods and differentiability of the objective and constraints with respect to time variations (Theorem 1). It is important to emphasize that our convergence result holds for nonstationary systems and, as such, does not rely on a vanishing rate of change. This implies that the proposed system succeeds in tracking the optimum without error after a transient phase. The effectiveness of the method is illustrated in a problem that involves multiple agents tracking multiple targets (Section IV).
Notation and Preliminaries.
Given an n-tuple (x_1, …, x_n), x ∈ R^n is the associated vector. We denote by I_n the n-dimensional identity matrix, by S^n the space of symmetric matrices, and by S^n_{++} and S^n_{+} the spaces of positive definite and positive semidefinite matrices, respectively. For square matrices A and B, we write A ⪰ B if and only if A − B is positive semidefinite. The Euclidean norm of a vector x is ‖x‖. The gradient of the function f(x, t) ∈ R with respect to x ∈ R^n is denoted by ∇_x f(x, t) ∈ R^n. The partial derivatives of ∇_x f(x, t) with respect to x and t are denoted by ∇_{xx} f(x, t) ∈ S^n and ∇_{xt} f(x, t) ∈ R^n, respectively.

II. PROBLEM STATEMENT
Consider the following constrained convex optimization problem:

x* := argmin_{x ∈ R^n} f(x)   s.t.  f_i(x) ≤ 0,  i ∈ {1, …, p}.   (1)

In order to solve (1), we can exploit the interior point method [6], [14], in which we relax the constraints and penalize their violation by logarithmic functions of the form −log(−f_i(x)). More specifically, we solve the relaxed problem

x*(c) := argmin_{x ∈ D} Φ(x, c),   (2)

where the so-called barrier function Φ(x, c) is defined as

Φ(x, c) = f(x) − (1/c) Σ_{i=1}^p log(−f_i(x)),   (3)

and D = {x ∈ R^n | f_i(x) < 0, i = 1, …, p} is the interior of the feasible domain. Furthermore, c is a positive constant such that x*(c) → x* as c → ∞.

To implement the interior point method, the unconstrained optimization problem (2) is solved sequentially for a positive growing sequence {c_k}, each solve starting from the optimal solution of the previous one. The resulting sequence {x*(c_k)} converges to the optimal point as c_k → ∞. For each fixed c_k, x*(c_k) can be found, for instance, by Newton's method as follows:

(d/dt) x(t) = −∇_{xx}^{−1} Φ(x(t), c_k) ∇_x Φ(x(t), c_k),   (4)

with initial condition x(0) = x*(c_{k−1}). In order to decrease the total convergence time to the optimal point, it is appealing to increase c as a continuous function of time, in lieu of discontinuous updates. In this case, problem (2) would involve a time-varying objective function.

In this paper, we consider a more general case in which the objective function and/or the constraints are time varying. More formally, we consider the following problem:

x*(t) := argmin_{x ∈ R^n} f(x, t)   s.t.  f_i(x, t) ≤ 0,  i ∈ {1, …, p},   (5)

for all t ∈ [0, ∞). The ultimate goal is to generate a (not necessarily feasible) solution x(t) such that x(t) → x*(t) as t → ∞, which necessarily enforces asymptotic feasibility.
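The barrier construction (2)-(4) can be sketched on a toy one-dimensional instance; the problem, constants, and update schedule below are illustrative choices, not the paper's data:

```python
import numpy as np

# Sketch of the static barrier method (2)-(4) on a 1-D toy instance:
# minimize (x - 2)^2  s.t.  x - 1 <= 0, whose constrained optimum is x* = 1.
# Barrier with weight 1/c:  Phi(x, c) = (x - 2)^2 - (1/c) * log(1 - x),  x < 1.

def newton_barrier(x0=0.0, c0=1.0, mu=10.0, rounds=8, inner=50):
    x, c = x0, c0
    for _ in range(rounds):              # grow c, warm-starting each solve
        for _ in range(inner):           # Newton iterations (4) at fixed c
            g = 2.0 * (x - 2.0) + 1.0 / (c * (1.0 - x))
            h = 2.0 + 1.0 / (c * (1.0 - x) ** 2)
            step = g / h
            # damp the step so the iterate stays strictly feasible (x < 1)
            while x - step >= 1.0:
                step *= 0.5
            x -= step
        c *= mu                          # central path: x*(c) -> x* as c -> inf
    return x

if __name__ == "__main__":
    print(newton_barrier())   # approaches the constrained optimum x* = 1
```

As `c` grows, the minimizer of the barrier traces the central path toward the constraint boundary, which is exactly the behavior the time-varying method below obtains with a continuous schedule c(t).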
The corresponding time-varying barrier function of (5) becomes

Φ(x, t) = f(x, t) − (1/c(t)) Σ_{i=1}^p log(−f_i(x, t)),  x ∈ D(t),   (6)

where D(t) := {x ∈ R^n | f_i(x, t) < 0, i = 1, …, p} is the interior of the (time-varying) feasible region, and c(t) is a positive-valued function of time. For minimizing (6), we will propose a time-varying Newton differential equation whose solution x(t) converges to x*(t) as t → ∞.

For further analysis, we assume that the objective function is strongly convex in x and that the constraints are convex in x for all times t ∈ [0, ∞). In addition, we assume that the objective f(x, t) and the constraints f_i(x, t) are continuously differentiable with respect to time. We formalize these assumptions next.

Assumption 1
The objective function f(x, t) and the constraint functions f_i(x, t) are twice continuously differentiable with respect to x and continuously differentiable with respect to time for all t ≥ 0. Furthermore, f(x, t) is uniformly strongly convex in x, i.e., ∇_{xx} f(x, t) ⪰ mI for some m > 0, and each f_i(x, t) is convex with respect to x for all t ≥ 0.

Assumption 2
Slater's constraint qualification holds for problem (5) for all t ≥ 0, i.e., there exists x† ∈ R^n such that f_i(x†, t) < 0 for all t ≥ 0 and i ∈ {1, …, p}.

The above assumptions make the convexity-related properties invariant over time. With Assumption 2, the necessary and sufficient conditions for optimality of problem (5) at all times t ≥ 0 are

∇_x f(x*(t), t) + Σ_{i=1}^p λ*_i(t) ∇_x f_i(x*(t), t) = 0,   (7)
λ*_i(t) f_i(x*(t), t) = 0,  λ*_i(t) ≥ 0,  f_i(x*(t), t) ≤ 0,  i ∈ {1, …, p},

where λ*(t) = [λ*_1(t), …, λ*_p(t)]^T ∈ R^p is the vector of optimal dual variables. Finally, we make a further assumption about the time variations of the optimal primal-dual pair.

Assumption 3
For any α > 0, the optimal dual variables satisfy λ*_i(t) exp(−αt) → 0 as t → ∞ for all i ∈ {1, …, p}.

The above assumption excludes the possibility of the optimal dual variables (and hence the optimal primal variables) escaping to infinity exponentially fast. Otherwise, the optimality conditions would become ill-conditioned as t → ∞ and their solution would not be tractable in the implementation phase. Equivalent assumptions to Assumption 3 are made in similar settings; see, e.g., [17].

Prior to solving the general problem (5), we start off with unconstrained dynamic convex optimization, where the optimization space is the domain of the objective function (Section III-A). In Section III-B we take time-varying linear equality constraints into account, and in Section III-C we deal with the case of generic dynamic convex constrained optimization problems. In Section IV two tracking problems are studied that are posed as time-varying optimization problems and can be solved through the techniques developed in Section III.

III. TIME-VARYING INTERIOR POINT METHOD
In this section, we develop a time-varying interior point method that solves (5). We first introduce a time-varying Newton method for unconstrained dynamic optimization problems. Next, we generalize the method to linear equality constraints. Finally, we develop the time-varying interior point method for the case of inequality constraints.

A. Unconstrained Time-Varying Convex Optimization
In unconstrained time-varying convex optimization, the goal is to track, continuously in time, the minimizer of a time-varying convex function. Mathematically speaking, given an objective function f(x, t): R^n × R_+ → R, the goal is to estimate the trajectory x*(t), where

x*(t) := argmin_{x ∈ R^n} f(x, t).   (8)

The optimal trajectory x*(t) is characterized by the points where the gradient of f(x, t) with respect to x is zero, i.e., ∇_x f(x*(t), t) ≡ 0 for all t ∈ [0, ∞). Using the chain rule to differentiate the latter identity with respect to time yields

(d/dt) ∇_x f(x*(t), t) = ∇_{xx} f(x*(t), t) (d/dt) x*(t) + ∇_{xt} f(x*(t), t).   (9)

The left-hand side of the above equation is identically zero for t ∈ [0, ∞). It follows that the optimal solution moves with a velocity given by

(d/dt) x*(t) = −∇_{xx}^{−1} f(x*(t), t) ∇_{xt} f(x*(t), t).   (10)

The above observation suggests that the tracking trajectory should evolve with (approximately) the same velocity as the minimizer trajectory, while at the same time taking a descent direction in order to get closer to the optimal trajectory. If the Newton direction is chosen as the descent direction, the resulting time-varying Newton method takes the form

ẋ(t) = −∇_{xx}^{−1} f(x(t), t) [P ∇_x f(x(t), t) + ∇_{xt} f(x(t), t)],   (11)

where P is a positive definite matrix. The next lemma shows that the solution of the dynamical system (11) converges exponentially to the solution of the unconstrained minimization problem (8).

Lemma 1
Let x (cid:63) ( t ) be defined as in (8) and x ( t ) be thesolution of the differential equation (11) with P ∈ S n ++ satisfying P (cid:23) σ I . Then, the following inequality holds (cid:107) x ( t ) − x (cid:63) ( t ) (cid:107) ≤ C ( x , m ) e − σ t , where x is an arbitrary initial point and ≤ C ( x , m ) < ∞ .Proof: The proof is found in Appendix subsection A. The previous lemma confirms that the trajectory generatedby (11) converges exponentially to the optimal trajectoryand therefore it is possible to solve the unconstrained time-varying optimization problem (8) by discretization of thedynamical system (11). Next, we show that the same dy-namical system allows us to solve dynamic optimizationproblems with equality constraints by augmenting the statespace with the Lagrange multipliers associated with theequality constraints.
B. Equality-Constrained Time-Varying Convex Optimization
Consider the problem of tracking the minimizer of a convex time-varying objective function subject to linear equality constraints. More precisely, we seek to estimate x*(t) for all t ∈ [0, ∞), where

x*(t) := argmin_{x ∈ R^n} f(x, t)   s.t.  A(t)x = b(t),   (12)

with A(t) ∈ R^{p×n} and rank(A(t)) = p < n for all t ≥ 0. The Lagrangian of (12) is

L(x, λ, t) = f(x, t) + λ^T (A(t)x − b(t)),   (13)

where λ ∈ R^p is the vector of Lagrange multipliers. The optimal trajectory (x*(t), λ*(t)) consists of the points where the gradient of the Lagrangian with respect to both x and λ vanishes. Therefore, by extending the state space to z := [x^T λ^T]^T ∈ R^{n+p}, the optimal solution z*(t) is characterized by

∇_z L(z*(t), t) = 0,  t ≥ 0.   (14)

Similar to the unconstrained case, the tracking trajectory should be composed of two directions: a velocity-compensating direction and a descent direction. The next lemma presents the algorithm for tracking x*(t) in (12).

Lemma 2
Denote by z(t) = [x(t)^T λ(t)^T]^T the solution of the following differential equation:

ż(t) = −∇_{zz}^{−1} L(z(t), t) (P ∇_z L(z(t), t) + ∇_{zt} L(z(t), t)),   (15)

where L is defined in (13) and P ∈ S^{n+p}_{++} is a positive definite matrix satisfying P ⪰ σ I_{n+p}. Then, the following inequality holds:

‖x(t) − x*(t)‖ + ‖λ(t) − λ*(t)‖ ≤ C(x_0, λ_0, m) e^{−σt},   (16)

where x_0 ∈ R^n and λ_0 ∈ R^p are arbitrary initial points and 0 ≤ C(x_0, λ_0, m) < ∞.

Proof: See Appendix subsection B.

Notice that the dynamical system (15) need not start from a feasible point, i.e., it is not required that A(0)x_0 = b(0); feasibility is achieved exponentially fast by (16). In the sequel, we consider the most general case with time-varying inequality constraints. Without loss of generality, we omit linear equality constraints, as each affine equality constraint can be expressed as two convex inequality constraints.

C. Time-Varying Interior Point Method
In this section, we return to the general optimization problem (5) with the associated barrier function (6). We will show that the same differential equation developed in Section III-A, using the barrier function Φ(x, t) defined in (6) in lieu of f(x, t) in (11), pushes the generated solution to the optimal trajectory when the barrier parameter c(t) → ∞. Formally, the dynamical system of interest is

ẋ(t) = −∇_{xx}^{−1} Φ(x(t), t) (P ∇_x Φ(x(t), t) + ∇_{xt} Φ(x(t), t)),   (17)

where P ∈ S^n_{++}. Notice, however, that the Newton method in this case needs to start from a strictly feasible point, i.e., x_0 ∈ D(0). This limitation is not desirable, as it is not always straightforward to find such an initial condition. The restriction can be overcome by expanding the feasible region at t = 0 by a slack s_0 > 0 and shrinking it to the true feasible set over time. More precisely, we perturb problem (5) at each time t ∈ [0, ∞) by a slack function s(t): R_+ → R_{++} as follows:

x̃*(t) := argmin_{x ∈ R^n} f(x, t)   s.t.  f_i(x, t) ≤ s(t),  i ∈ {1, …, p}.   (18)

It can be observed that the feasible region is enlarged for s(t) ≥ 0. In particular, at time t = 0, any initial point x_0 ∈ R^n can be made feasible by choosing the initial slack s_0 = s(0) large enough. Another consequence of this perturbation is that the optimal value of the perturbed problem (18) is no larger than the optimal value at x*(t). The next lemma formalizes this observation.

Lemma 3
Let x*(t) be defined as in (5) and x̃*(t) as in (18). Then, the following inequality holds:

0 ≤ f(x*(t), t) − f(x̃*(t), t) ≤ Σ_{i=1}^p λ*_i(t) s(t),   (19)

where the λ*_i(t), i ∈ {1, …, p}, are the optimal dual variables defined in (7).

Proof: See Appendix subsection C.

The above lemma asserts that the sub-optimality of the perturbed solution x̃*(t) is controlled by s(t). More importantly, as a result of Assumption 3, the sub-optimality can be pushed to zero if s(t) = s_0 exp(−αt) for any α > 0. The time-varying barrier function associated with the perturbed problem (18) is

Φ̃(x, t) = f(x, t) − (1/c(t)) Σ_{i=1}^p log(s(t) − f_i(x, t)),  x ∈ D̃(t),   (20)

where D̃(t) := {x ∈ R^n | f_i(x, t) < s(t), i = 1, …, p} is the perturbed domain. For any initial point x_0 ∈ R^n, s_0 can be chosen large enough such that x_0 ∈ D̃(0). More precisely, we set

s_0 = ε if max_i f_i(x_0, 0) ≤ 0, and s_0 = max_i f_i(x_0, 0) + ε if max_i f_i(x_0, 0) > 0,   (21)

for some ε > 0. Denote by z̃*(t) the minimizer of the perturbed barrier function (20), i.e.,

z̃*(t) := argmin_{x ∈ D̃(t)} Φ̃(x, t).   (22)

For solving (22), we can now apply the time-varying Newton method to get the following differential equation:

ż̃(t) = −∇_{xx}^{−1} Φ̃(z̃(t), t) (P ∇_x Φ̃(z̃(t), t) + ∇_{xt} Φ̃(z̃(t), t)).   (23)

By increasing the barrier parameter c(t) over time, the sub-optimality is decreased, as we show in the next lemma.

Lemma 4
With z̃*(t) defined as in (22) and x̃*(t) as in (18), the following inequality holds for all t ≥ 0:

0 ≤ f(z̃*(t), t) − f(x̃*(t), t) ≤ p / c(t).   (24)

Proof:
The proof is provided in Appendix subsection D.

The bound (24) quantifies the sub-optimality of z̃*(t) with respect to the perturbed optimal trajectory x̃*(t). More specifically, as c(t) → ∞, z̃*(t) → x̃*(t). The next lemma establishes this convergence under the dynamics (23).

Lemma 5
Consider problem (18) with the corresponding time-varying barrier function (20). Let z̃(t) be the solution of the differential equation (23) with P ∈ S^n_{++} satisfying P ⪰ σI, and initial condition x_0 ∈ R^n. Finally, let s_0 be chosen as in (21). Then, the following inequality holds:

‖z̃(t) − z̃*(t)‖ ≤ C(x_0, c_0, s_0, m) e^{−σt},   (25)

where 0 ≤ C(x_0, c_0, s_0, m) < ∞.

Proof: The proof is provided in Appendix subsection E.

A few comments are in order. First, Lemma 5 shows that the solution z̃(t) of the dynamical system (23) converges exponentially to the sub-optimal solution z̃*(t) in (22). Second, Lemma 4 guarantees that z̃*(t) converges to the optimal perturbed solution x̃*(t) in (18) if c(t) → ∞. Finally, Lemma 3 confirms that the perturbed solution x̃*(t) converges to the solution x*(t) of the original problem (5) if s(t) → 0. Therefore, convergence of the dynamical system (23) to the desired optimal solution x*(t) is guaranteed if lim_{t→∞} c(t) = ∞ and s(t) → 0 as t → ∞. The next theorem summarizes these observations as the main result of this section.

Theorem 1
Consider problem (5) with optimal trajectory x*(t) and the corresponding barrier function (20). Let z̃(t) be the solution of (23) with arbitrary initial condition x_0 ∈ R^n. Finally, let c(t) → ∞ and s(t) = s_0 exp(−αt) for some α > 0 and s_0 chosen according to (21). Then, z̃(t) → x*(t) as t → ∞.

Proof: The proof follows from inequalities (19), (24) and (25).

We close this section with two remarks.
Remark 1
The logarithmic barrier coefficient c(t) is required to be positive, monotonically increasing, asymptotically convergent to infinity, and bounded in finite time. A convenient choice is c(t) = c_0 exp(αt) for α > 0.
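The exponential schedules of Remark 1 can be combined with the flow (23) in a one-dimensional sketch; the objective, constraint, and all constants below are illustrative choices, not the paper's data:

```python
import numpy as np

# Toy sketch of the perturbed barrier flow (23) in one dimension:
#   minimize (x - y(t))^2  s.t.  x <= 0,
# relaxed to x <= s(t) with s(t) = s0*exp(-a*t) and barrier weight 1/c(t),
# c(t) = c0*exp(g*t).  Barrier:  Phi(x,t) = (x - y)^2 - (1/c) * log(s - x).

def barrier_flow(y, ydot, x0, c0=1.0, g=1.0, a=1.0, eps=0.5,
                 P=5.0, dt=1e-4, T=5.0):
    x = float(x0)
    s0 = max(x0, 0.0) + eps           # slack per (21): x0 strictly feasible
    for k in range(int(T / dt)):
        t = k * dt
        c, s = c0 * np.exp(g * t), s0 * np.exp(-a * t)
        d = s - x                      # distance to the relaxed boundary (> 0)
        grad = 2.0 * (x - y(t)) + 1.0 / (c * d)
        hess = 2.0 + 1.0 / (c * d ** 2)
        # time derivative of grad at fixed x: objective, c(t)-, s(t)-variation
        grad_t = -2.0 * ydot(t) - g / (c * d) + a * s / (c * d ** 2)
        x -= dt * (P * grad + grad_t) / hess
    return x

if __name__ == "__main__":
    y = lambda t: np.sin(t)            # moving unconstrained optimum
    ydot = lambda t: np.cos(t)
    xT = barrier_flow(y, ydot, x0=2.0)   # infeasible start: 2 > 0
    print(xT)   # near sin(5) ~ -0.96, which satisfies x <= 0
```

The iterate starts infeasible, enters the true feasible set as the slack decays, and ends up tracking the moving optimum, mirroring Theorem 1.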
Fig. 1: Trajectory of the agent (in red) and the targets. The agent starts off with the first target and then switches to the second one. The gain matrix is set to P = I.

Fig. 2: Tracking error ‖x(t) − x*(t)‖ as a function of time for the unconstrained problem, for several values of the maximum speed max ‖ẋ‖. The optimal solution x*(t) has been computed at the discrete times of the Euler integration using CVX [7].

Notice that the term ∇_{xt} Φ̃(z̃(t), t) in (23) compensates for the continuous-time variation of both c(t) and s(t).

Remark 2
If the Newton differential equation (23) starts from a strictly feasible initial point, i.e., x_0 ∈ D(0), then s(t) is chosen to be identically zero. From inequality (19), x̃*(t) = x*(t) for t ≥ 0. Therefore, according to inequalities (24) and (25), exponential convergence of the tracking trajectory z̃(t) to the optimal trajectory x*(t) is guaranteed if the barrier parameter c(t) grows exponentially.

IV. NUMERICAL EXPERIMENTS
In this section, we evaluate the performance of the time-varying interior point method developed in the previous section with two numerical examples. In Section IV-A we numerically solve an unconstrained dynamic convex problem, and in Section IV-B we study a constrained dynamic convex problem. In the implementations, we use an Euler integration scheme with variable step size to solve the differential equations that generate the solution.
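One way to realize such a variable-step Euler integrator is step doubling: compare one full step with two half steps and adapt the step accordingly. The tolerance and step cap below are illustrative choices; the paper does not specify its step-size rule:

```python
import numpy as np

# Step-doubling variable-step Euler for xdot = F(x, t): accept a step when
# the discrepancy between one full step and two half steps is below `tol`.

def euler_adaptive(F, x0, t0, tf, dt_max=0.1, tol=1e-4):
    t, x, dt = t0, np.asarray(x0, dtype=float), dt_max
    while t < tf:
        dt = min(dt, dt_max, tf - t)
        full = x + dt * F(x, t)                       # one Euler step
        half = x + 0.5 * dt * F(x, t)                 # two half steps
        two = half + 0.5 * dt * F(half, t + 0.5 * dt)
        err = np.linalg.norm(two - full)              # local error estimate
        if err > tol and dt > 1e-6:
            dt *= 0.5                                 # reject and shrink
            continue
        x, t = two, t + dt                            # accept the finer step
        if err < 0.25 * tol:
            dt *= 2.0                                 # grow when comfortable
    return x

if __name__ == "__main__":
    # sanity check on xdot = -x over [0, 1]; exact solution is exp(-t)
    print(euler_adaptive(lambda x, t: -x, [1.0], 0.0, 1.0))
```

The same routine can drive any of the flows (11), (15), or (23) by passing the corresponding right-hand side as `F`.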
A. Unconstrained optimization
Consider an agent charged with the task of tracking two targets sequentially. That is, the agent is required to track the first target on the time interval [t_0, t_int] and the second target on the time interval [t_int, t_f]. If we denote the position of the i-th target by y_i(t), the objective function takes the form

f(x, t) = S(t) ‖x − y_1(t)‖² + (1 − S(t)) ‖x − y_2(t)‖²,   (26)

where S(t) is a weighting function that determines which target must be tracked. We consider the differentiable switch

S(t) = 1 − 1 / (1 + e^{−γ(t − t_int)}),   (27)

where γ > 0 controls how sharply S(t) transitions from one to zero.

We use a time-parametric representation of the trajectories of the targets. Specifically, let p_j(t) be elements of a polynomial basis; the k-th component of the trajectory of target i is given by

y_ik(t) = Σ_{j=0}^{n_i − 1} y_ikj p_j(t),   (28)

where n_i is the total number of polynomials that parametrize the path traversed by target i and the y_ikj are the corresponding n_i coefficients. To determine the coefficients, we draw at random a total of L points per target, {ỹ_iℓ}_{ℓ=1}^L, independently and uniformly in the unit box [0, 1]². Target i is required to pass through the point ỹ_iℓ at time ℓ t_f / (L + 1). The paths y_i(t) are then chosen such that the path integral of the squared acceleration is minimized subject to the interpolation constraints of each individual path, i.e.,

y_i = argmin ∫_0^{t_f} ‖ÿ_i(t)‖² dt,   s.t.  y_i(ℓ t_f / (L + 1)) = ỹ_iℓ for all ℓ = 1, …, L.   (29)

This problem can be solved by a quadratic program [11].

In the subsequent numerical experiment, we set the number of targets to m = 2, the time interval to [0, t_f], and the intermediate switching time to t_int = t_f / 2. We use the standard polynomial basis p_j(t) = t^j in (28), and the degree of the polynomials is set to n_i = 30 for i = 1, …, m. To generate the target paths, we consider a total of L points per target and set γ = 20. For this data, we solve (11) by Euler integration with variable step size.

The resulting trajectories are illustrated in Figure 1, where we select the gain matrix in (11) to be P = I. A qualitative examination of this behavior shows that the agent (in red) succeeds in tracking the first target up to the switching time t_int and switches to the second target thereafter. The objective function in (26) satisfies Assumption 1 and, therefore, the hypothesis of Lemma 1 holds. Consequently, exponential convergence to the optimal solution is guaranteed, as can be observed in Figure 2. Also, this convergence gets faster as the gain matrix P is increased.

Fig. 3: Trajectories of the agents (in red) and the targets to track. It can be observed that the agents succeed in following both targets while keeping the distance between them as small as possible. The gain matrix is set to P = I.

B. Constrained optimization
We consider two agents charged with the task of staying within a certain distance of two moving targets, while keeping their Euclidean distance as small as possible. Since the positions of both agents are optimization variables, the objective function in this problem is not time varying; the constraints, however, are. Denote by x_i(t) the position vector of agent i and by y_j(t) the position vector of target j. Then, the agents aim to solve the following problem:

min_{x_1, x_2 ∈ R²} ‖x_1 − x_2‖²   s.t.  ‖x_i − y_i(t)‖ − r_i ≤ 0,  i = 1, 2.   (30)

The trajectories of the targets were computed by the same procedure as in Section IV-A, and the maximum allowable distance to target i is r_i. The barrier parameter is chosen as c(t) = e^{γt} and the slack parameter as s(t) = e^{−αt}, with γ = α =
10. For this data, we solve the differential equation (23) by an Euler integration scheme with variable step size.

The resulting trajectories are depicted in Figure 3: both agents succeed in following the corresponding target while keeping their distance small. Figure 4a illustrates the time evolution of the constraint functions; the value of both constraints converges to zero asymptotically, as expected according to Theorem 1. It is also expected that the solution of the dynamical system (23) converges to the optimal solution asymptotically; this convergence is depicted in Figure 4b.

Fig. 4: Evolution of the constraint functions ‖x_i(t) − y_i(t)‖ − r_i, i = 1, 2 (panel a), and of the tracking error ‖[x_1(t) − x_1*(t), x_2(t) − x_2*(t)]‖ (panel b), where x_1*(t) and x_2*(t) are the optimal trajectories. Both the constraint violation and the tracking error converge asymptotically to zero as per Theorem 1.

V. CONCLUSIONS
In this paper, we developed an interior point framework for solving convex optimization problems with time-varying objective function and/or time-varying constraints. We used barrier penalty functions to relax the constraints and developed a prediction-correction Newton method for solving the corresponding time-varying unconstrained problem. Under reasonable assumptions, asymptotic convergence to the optimal solution of the original problem was guaranteed. All time dependences were assumed to be continuous. Numerical examples regarding target tracking applications were considered to illustrate the performance of the developed methods. The numerical results were in accordance with the theoretical findings.

APPENDIX
A. Proof of Lemma 1
By Assumption 1 (strong convexity), ∇_{xx} f(x(t), t)^{−1} is defined for all t ≥ 0. The time variation of the gradient along x(t) can be written as

(d/dt) ∇_x f(x(t), t) = ∇_{xx} f(x(t), t) ẋ(t) + ∇_{xt} f(x(t), t).   (31)

Substituting ẋ(t) from (11), it follows that

(d/dt) ∇_x f(x(t), t) = −P ∇_x f(x(t), t).   (32)

This is a first-order linear differential equation in ∇_x f(x(t), t); therefore, the solution of (32) is

∇_x f(x(t), t) = e^{−Pt} ∇_x f(x_0, 0).   (33)

The norm of ∇_x f(x(t), t) can be upper bounded using the Cauchy-Schwarz inequality:

‖∇_x f(x(t), t)‖ ≤ ‖e^{−Pt}‖ ‖∇_x f(x_0, 0)‖.   (34)

Using the fact that σI_n ⪯ P, we have ‖e^{−Pt}‖ = e^{−σt}, and therefore

‖∇_x f(x(t), t)‖ ≤ e^{−σt} ‖∇_x f(x_0, 0)‖.   (35)

On the other hand, ‖x(t) − x*(t)‖ is bounded above by (2/m)‖∇_x f(x(t), t)‖ due to strong convexity. Hence, it follows that

‖x(t) − x*(t)‖ ≤ (2/m) ‖∇_x f(x_0, 0)‖ e^{−σt}.   (36)

The above inequality completes the proof of the lemma. Notice that C(x_0, m) = (2/m)‖∇_x f(x_0, 0)‖ is a bounded nonnegative constant for any initial condition x_0.

B. Proof of Lemma 2
The Hessian of the Lagrangian (13) with respect to z = [x^T, λ^T]^T is given by

∇_{zz} L(z(t), t) = [ ∇_{xx} f(x(t), t)   A(t)^T ;  A(t)   0_{p×p} ].   (37)

Strong convexity of f(x, t) is sufficient for ∇_{zz} L(z, t) to be invertible. The rest of the proof follows the same steps as the proof of Lemma 1, with the substitutions ∇_x f(x, t) ← ∇_z L(z, t) for the gradient and x ← z for the optimization variables.

C. Proof of Lemma 3
The proof is taken from [3]. Recall that the dual function associated with problem (5) is given by

g(λ(t), t) = min_{x ∈ R^n} f(x, t) + Σ_{i=1}^p λ_i(t) f_i(x, t),   (38)

which is concave in λ = [λ_1, …, λ_p]^T ∈ R^p_+. At the optimal point (x*(t), λ*(t)), it follows by strong duality that

f(x*(t), t) = g(λ*(t), t).   (39)

On the other hand, for any feasible point x of the perturbed problem, i.e., x ∈ D̄_t = {x ∈ R^n | f_i(x, t) ≤ s(t)}, it must be true that

g(λ*(t), t) ≤ f(x, t) + Σ_{i=1}^p λ*_i(t) f_i(x, t) ≤ f(x, t) + Σ_{i=1}^p λ*_i(t) s(t).   (40)

Substituting (39) in (40) and setting x = x̃*(t) ∈ D̄_t yields

f(x*(t), t) ≤ f(x̃*(t), t) + Σ_{i=1}^p λ*_i(t) s(t).   (41)

It remains to prove that f(x̃*(t), t) ≤ f(x*(t), t). This follows from the fact that for 0 ≤ s(t) the feasible set is enlarged, causing a reduction in the optimal value. The proof is complete.

D. Proof of Lemma 4
The optimal trajectory ˜ z (cid:63) ( t ) solution to the problem (22)is characterized by ∇ x Φ ( ˜ z (cid:63) ( t ) , t ) = t ∈ [ , ∞ ) , orequivalently ∇ x f ( ˜ z (cid:63) ( t ) , t ) + c ( t ) p ∑ i = s ( t ) − f i ( ˜ z (cid:63) ( t ) , t ) ∇ f i ( ˜ z (cid:63) ( t ) , t ) = , (42)Define the following functions˜ λ (cid:63) i ( t ) = c ( t ) s ( t ) − f i ( ˜ z (cid:63) ( t ) , t ) , i = , · · · , p . (43)Notice that ˜ λ (cid:63) i ( t ) > t ≥ i ∈ { , . . . p } sincethe optimal point is always in the domain, i.e. f i ( ˜ z (cid:63) ( t ) , t ) < s ( t ) , i ∈ { , . . . , p } . On the other hand, the dual function ofthe perturbed problem (18) is given by˜ g ( λ ( t ) , t ) = min x ∈ R n f ( x , t ) + p ∑ i = λ i ( t )( f i ( x , t ) − s ( t )) , (44)It follows from (42) that the pair ( ˜ z (cid:63) ( t ) , ˜ λ (cid:63) ( t )) satisfies theidentity (44). Therefore, we have that˜ g ( ˜ λ (cid:63) ( t ) , t ) = f ( ˜ z (cid:63) ( t ) , t ) + p ∑ i = c ( t ) f i ( ˜ z (cid:63) ( t ) , t ) − s ( t ) s ( t ) − f i ( ˜ z (cid:63) ( t ) , t )= f ( ˜ z (cid:63) ( t ) , t ) − pc ( t ) (45)On the other hand, the dual function provides a lower boundfor the optimal solution (see e.g. [3])˜ g ( λ , t ) ≤ f ( ˜ x (cid:63) ( t ) , t ) , ∀ λ ∈ R p + , ∀ t ≥ . (46)In particular, the above inequality holds for ˜ λ (cid:63) ( t ) . Substitut-ing (45) in (46) results in f ( ˜ z (cid:63) ( t ) , t ) − pc ( t ) ≤ f ( ˜ x (cid:63) ( t ) , t ) , (47)which is the desired bound (24). . Proof of Lemma 5 The Hessian of ˜ Φ ( x , t ) can be written as ∇ xx ˜ Φ = ∇ xx f + c m ∑ i = ∇ x f i ∇ x f iT ( s − f i ) + ∇ xx f i s − f i (48)Since f ( x , t ) is strongly convex, and c ( t ) is strictly positive,it follows that ∇ xx ˜ Φ is m -strongly convex for all t ∈ [ , ∞ ) .The dynamics of ∇ x ˜ Φ at point ( ˜ z ( t ) , t ) can be written as ddt ∇ x ˜ Φ ( ˜ z ( t ) , t ) = ∇ xx ˜ Φ ( ˜ z ( t ) , t ) ˙˜ z ( t ) + ∇ x t ˜ Φ ( ˜ z ( t ) , t ) . 
Substituting the dynamics (23) into the last result yields

(d/dt) ∇_x Φ̃(z̃(t), t) = −P ∇_x Φ̃(z̃(t), t).   (50)

This implies in turn that

‖∇_x Φ̃(z̃(t), t)‖ ≤ e^{−σt} ‖∇_x Φ̃(x_0, 0)‖,   (51)

where

‖∇_x Φ̃(x_0, 0)‖ = ‖∇_x f_0(x_0, 0) + c(0) Σ_{i=1}^p ∇_x f_i(x_0, 0)/(s(0) − f_i(x_0, 0))‖ < ∞.   (52)

On the other hand, at each time t, Φ̃(·, t) is m-strongly convex with minimizer z̃*(t). Therefore, it follows that

‖z̃(t) − z̃*(t)‖ ≤ (1/m) ‖∇_x Φ̃(z̃(t), t)‖.   (53)

Substituting (51) in (53) gives the desired inequality

‖z̃(t) − z̃*(t)‖ ≤ (1/m) ‖∇_x Φ̃(x_0, 0)‖ e^{−σt}.   (54)

The proof is complete.

REFERENCES

[1] K. J. Åström and B. Wittenmark,
Adaptive Control. Courier Corporation, 2013.
[2] M. Baumann, C. Lageman, and U. Helmke, "Newton-type algorithms for time-varying pose estimation," in Proceedings of the 2004 Intelligent Sensors, Sensor Networks and Information Processing Conference. IEEE, 2004, pp. 155–160.
[3] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[4] R. L. Cavalcante and S. Stanczak, "A distributed subgradient method for dynamic convex optimization problems under noisy information exchange," IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 2, pp. 243–256, 2013.
[5] M. Fazlyab, M. Z. Pedram, H. Salarieh, and A. Alasty, "Parameter estimation and interval type-2 fuzzy sliding mode control of a z-axis MEMS gyroscope," ISA Transactions, vol. 52, no. 6, pp. 900–911, 2013.
[6] A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. SIAM, 1990, vol. 4.
[7] M. Grant, S. Boyd, and Y. Ye, "CVX: Matlab software for disciplined convex programming," 2008.
[8] A. Hofleitner, L. El Ghaoui, and A. Bayen, "Online least-squares estimation of time varying systems with sparse temporal evolution and application to traffic estimation," in 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). IEEE, 2011, pp. 2595–2601.
[9] F. Y. Jakubiec and A. Ribeiro, "D-MAP: Distributed maximum a posteriori probability estimation of dynamic systems," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 450–466, 2013.
[10] Q. Ling and A. Ribeiro, "Decentralized dynamic optimization through the alternating direction method of multipliers," in 2013 IEEE 14th Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2013, pp. 170–174.
[11] D. Mellinger and V. Kumar, "Minimum snap trajectory generation and control for quadrotors," in 2011 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011, pp. 2520–2525.
[12] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models. Springer Science & Business Media, 2001.
[13] A. Y. Popkov, "Gradient methods for nonstationary unconstrained optimization problems," Automation and Remote Control, vol. 66, no. 6, pp. 883–891, Jun. 2005. [Online]. Available: http://link.springer.com/10.1007/s10513-005-0132-z
[14] F. A. Potra and S. J. Wright, "Interior-point methods," Journal of Computational and Applied Mathematics, vol. 124, no. 1, pp. 281–302, 2000.
[15] S. Rahili and W. Ren, "Distributed convex optimization for continuous-time dynamics with time-varying cost functions," arXiv preprint arXiv:1507.04878, 2015.
[16] A. Simonetto, A. Koppel, A. Mokhtari, G. Leus, and A. Ribeiro, "Prediction-correction methods for time-varying convex optimization," in Proceedings of the Asilomar Conference on Signals, Systems, and Computers (submitted), Pacific Grove, USA, 2015.
[17] A. Simonetto, A. Mokhtari, A. Koppel, G. Leus, and A. Ribeiro, "A class of prediction-correction methods for time-varying convex optimization," arXiv preprint arXiv:1509.05196, 2015.
[18] S.-Y. Tu and A. H. Sayed, "Mobile adaptive networks," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 649–664, 2011.
[19] M. M. Zavlanos, A. Ribeiro, and G. J. Pappas, "Network integrity in mobile robotic networks," IEEE Transactions on Automatic Control, vol. 58, no. 1, pp. 3–18, 2013.
[20] K. Zhou, S. Roumeliotis et al., "Multirobot active target tracking with combinations of relative observations,"