Featured Researches

Optimization And Control

Environmental contours and optimal design

Classical environmental contours are used in structural design to obtain upper bounds on the failure probabilities of a large class of designs. Buffered environmental contours serve the same purpose, but with respect to the so-called buffered failure probability. In contrast to classical environmental contours, buffered environmental contours account not just for failure versus functioning, but also for the extent to which the system fails. This matters whenever the consequences of failure are relevant: in a power network, for instance, it is important to know not just that the power supply has failed, but how many consumers are affected by the failure. In this paper, we study the connections between environmental contours, both classical and buffered, and optimal structural design. We connect classical environmental contours to the risk measure value-at-risk; similarly, buffered environmental contours are naturally connected to the convex risk measure conditional value-at-risk. We then study the problem of minimizing the risk of the cost of building a particular design, both for value-at-risk and for conditional value-at-risk. Using the connection between value-at-risk and the classical environmental contour, we derive a representation of the design optimization problem expressed via the environmental contour; a similar representation follows from the connection between conditional value-at-risk and the buffered environmental contour. From these representations, we derive a sufficient condition for an optimal design, in both the classical and the buffered case. Finally, we apply these results to solve a design optimization problem from structural reliability.
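The two risk measures connected to the contours can be illustrated concretely. The sketch below is our own illustration, not code from the paper; the standard-normal cost model and the function names are assumptions. It estimates value-at-risk and conditional value-at-risk empirically from simulated costs:

```python
import numpy as np

def value_at_risk(costs, alpha):
    """Empirical VaR: the alpha-quantile of the cost distribution."""
    return np.quantile(costs, alpha)

def conditional_value_at_risk(costs, alpha):
    """Empirical CVaR: the average cost in the worst (1 - alpha) tail."""
    var = value_at_risk(costs, alpha)
    return costs[costs >= var].mean()

# Hypothetical cost model: standard normal building costs.
rng = np.random.default_rng(0)
costs = rng.normal(size=100_000)

var95 = value_at_risk(costs, 0.95)
cvar95 = conditional_value_at_risk(costs, 0.95)
print(var95, cvar95)
```

Because CVaR averages over the entire tail beyond the VaR threshold, it always dominates VaR; this tail-averaging is what lets the buffered quantities reflect how badly a design fails, not merely whether it fails.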

Read more
Optimization And Control

Exact Linear Convergence Rate Analysis for Low-Rank Symmetric Matrix Completion via Gradient Descent

Factorization-based gradient descent is a scalable and efficient algorithm for solving low-rank matrix completion. Recent progress in structured non-convex optimization has offered global convergence guarantees for gradient descent under certain statistical assumptions on the low-rank matrix and the sampling set. However, while the theory suggests gradient descent enjoys fast linear convergence to a global solution of the problem, the universal nature of the bounding technique prevents it from obtaining an accurate estimate of the rate of convergence. In this paper, we perform a local analysis of the exact linear convergence rate of gradient descent for factorization-based matrix completion for symmetric matrices. Without any additional assumptions on the underlying model, we identify the deterministic condition for local convergence of gradient descent, which only depends on the solution matrix and the sampling set. More crucially, our analysis provides a closed-form expression of the asymptotic rate of convergence that matches exactly with the linear convergence observed in practice. To the best of our knowledge, our result is the first one that offers the exact rate of convergence of gradient descent for matrix factorization in Euclidean space for matrix completion.

Read more
Optimization And Control

Exact Penalty Functions with Multidimensional Penalty Parameter and Adaptive Penalty Updates

We present a general theory of exact penalty functions with vectorial (multidimensional) penalty parameter for optimization problems in infinite dimensional spaces. In comparison with the scalar case, the use of vectorial penalty parameters provides much more flexibility, allows one to adaptively and independently take into account the violation of each constraint during an optimization process, and often leads to a better overall performance of an optimization method using an exact penalty function. We obtain sufficient conditions for the local and global exactness of penalty functions with vectorial penalty parameters and study convergence of global exact penalty methods with several different penalty updating strategies. In particular, we present a new algorithmic approach to an analysis of the global exactness of penalty functions, which contains a novel characterisation of the global exactness property in terms of behaviour of sequences generated by certain optimization methods.
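The advantage of a vectorial penalty parameter, namely adaptive per-constraint updates, can be seen on a toy problem. The sketch below is our own illustration (the problem, the closed-form inner solver, and the update factor 10 are all assumptions): only the penalty parameter of the violated constraint is increased, while the other is left untouched:

```python
import numpy as np

# Toy problem: minimize f(x) = x[0] + x[1] over the box [-10, 10]^2
# subject to g1(x) = 1 - x[0] <= 0 and g2(x) = 2 - x[1] <= 0.
# Exact l1 penalty: f(x) + c[0]*max(0, 1 - x[0]) + c[1]*max(0, 2 - x[1]).
a = np.array([1.0, 2.0])

def minimize_penalty(c):
    """Coordinate-wise closed-form minimizer of the separable penalty term
    x_i + c_i * max(0, a_i - x_i) over [-10, 10]: the penalty is exact
    for coordinate i precisely when c_i > 1."""
    return np.where(c > 1.0, a, -10.0)

c = np.array([2.0, 0.5])  # vectorial penalty parameter, one entry per constraint
for _ in range(10):
    x = minimize_penalty(c)
    violation = np.maximum(0.0, a - x)
    if np.all(violation <= 1e-9):
        break
    # Adaptive update: increase only the parameters of violated constraints.
    c = np.where(violation > 1e-9, 10.0 * c, c)

print(x, c)  # the first parameter is never touched; the second is increased once
```

A scalar penalty parameter would have been forced to grow for both constraints at once; the vectorial update leaves the already-exact parameter alone.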

Read more
Optimization And Control

Exact algorithms for budgeted prize-collecting covering subgraph problems

We introduce a class of budgeted prize-collecting covering subgraph problems. For an input graph with prizes on the vertices and costs on the edges, the aim of these problems is to find a connected subgraph such that the cost of its edges does not exceed a given budget and its collected prize is maximum. A vertex prize is collected when the vertex is visited, but the prize can also be partially collected if the vertex is covered, where an unvisited vertex is covered by a visited one if the latter belongs to the former's neighbourhood. A capacity limit is imposed on the number of vertices that can be covered by the same visited vertex. Potential application areas include network design and intermodal transportation. We develop a branch-and-cut framework and a Benders decomposition for the exact solution of the problems in this class. We validate our algorithmic frameworks for the cases where the subgraph is a tour and a tree, and for these two cases we also identify novel symmetry-breaking inequalities.

Read more
Optimization And Control

Exact and Heuristic Methods with Warm-start for Embedded Mixed-Integer Quadratic Programming Based on Accelerated Dual Gradient Projection

Small-scale Mixed-Integer Quadratic Programming (MIQP) problems often arise in embedded control and estimation applications. Driven by the need for algorithmic simplicity to target computing platforms with limited memory and computing resources, this paper proposes a few approaches to solving MIQPs, either to optimality or suboptimally. We specialize an existing Accelerated Dual Gradient Projection (GPAD) algorithm to effectively solve the Quadratic Programming (QP) relaxations that arise during Branch and Bound (B&B), and propose a generic framework to warm-start the binary variables that reduces the number of QP relaxations. Moreover, in order to find an integer-feasible combination of the binary variables upfront, two heuristic approaches are presented: (i) without using B&B, and (ii) using B&B with a significantly reduced number of QP relaxations. Both heuristic approaches return an integer-feasible solution that may be suboptimal but requires much less computational effort. Such a feasible solution can either be implemented directly or used to set an initial upper bound on the optimal cost in B&B. Through different hybrid control and estimation examples involving binary decision variables, we show that the performance of the proposed methods, although very simple to code, is comparable to that of state-of-the-art MIQP solvers.
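The B&B-with-warm-start structure can be sketched on a toy MIQP. Everything below is our own simplified illustration: the QP relaxation here is separable and has a closed-form solution, whereas the paper solves general QP relaxations with a GPAD-based solver; the rounding warm start supplies the initial upper bound used for pruning:

```python
import heapq
import numpy as np

# Toy MIQP: minimize 0.5 * ||x - t||^2 over x in {0, 1}^4. The QP relaxation
# over a box [lo, hi] is separable with solution clip(t, lo, hi); an embedded
# solver would use a general QP method (e.g. GPAD) at this step instead.
t = np.array([0.3, 0.7, 0.5, 0.9])

def relax(lo, hi):
    x = np.clip(t, lo, hi)
    return x, 0.5 * np.sum((x - t) ** 2)

def branch_and_bound():
    lo, hi = np.zeros_like(t), np.ones_like(t)
    x_root, _ = relax(lo, hi)
    # Heuristic warm start: round the root relaxation to get an upper bound.
    best_x = np.round(x_root)
    best = 0.5 * np.sum((best_x - t) ** 2)
    nodes = [(0.0, 0, lo, hi)]       # (parent bound, tiebreak id, box)
    nqp, uid = 0, 1
    while nodes:
        bound, _, lo, hi = heapq.heappop(nodes)
        if bound >= best:
            continue                 # prune by bound
        x, val = relax(lo, hi)
        nqp += 1
        if val >= best:
            continue
        frac = np.abs(x - np.round(x))
        i = int(np.argmax(frac))
        if frac[i] < 1e-9:           # integer feasible: new incumbent
            best, best_x = val, x.copy()
            continue
        for v in (0.0, 1.0):         # branch on the most fractional variable
            l, h = lo.copy(), hi.copy()
            l[i] = h[i] = v
            heapq.heappush(nodes, (val, uid, l, h))
            uid += 1
    return best_x, best, nqp

xstar, fstar, nqp = branch_and_bound()
print(xstar, fstar, nqp)
```

The warm-start bound prunes nodes whose relaxation value already matches the incumbent, which is exactly the mechanism by which a good upfront feasible solution reduces the number of QP relaxations.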

Read more
Optimization And Control

Explicit continuation methods with L-BFGS updating formulas for linearly constrained optimization problems

This paper considers an explicit continuation method with a trusty time-stepping scheme and the limited-memory BFGS (L-BFGS) updating formula (Eptctr) for linearly constrained optimization problems. At every iteration, Eptctr involves only three inner products of vectors and one matrix-vector product, in contrast to traditional and representative optimization methods such as sequential quadratic programming (SQP), which must solve a quadratic programming subproblem at each iteration, or the recent continuation method Ptctr \cite{LLS2020}, which must solve a linear system of equations. Thus, Eptctr can save considerable computational time compared with SQP or Ptctr. Numerical results show that the time consumed by Eptctr is about one tenth of that of Ptctr, and between one fifteenth and 0.4 percent of that of SQP. Furthermore, Eptctr avoids storing an (n+m) × (n+m) large-scale matrix, in comparison to SQP; the required memory of Eptctr is about one fifth of that of SQP. Finally, we also give a global convergence analysis of the new method under standard assumptions.
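The memory savings attributed to the L-BFGS updating formula come from representing the inverse-Hessian approximation implicitly through stored vector pairs. The sketch below is the standard two-loop recursion, our own illustration rather than the Eptctr algorithm itself: it computes the quasi-Newton product using only inner products and vector updates, and checks the secant condition on a quadratic:

```python
import numpy as np

def lbfgs_direction(grad, pairs):
    """Two-loop recursion: returns H @ grad, where H is the implicit
    inverse-Hessian approximation defined by the stored (s, y) pairs.
    Only inner products and vector updates are needed, so the memory
    is O(m*n) vectors rather than a dense large-scale matrix."""
    q = np.array(grad, dtype=float)
    alphas = []
    for s, y in reversed(pairs):                # first loop: newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho))
        q -= a * y
    s, y = pairs[-1]
    q *= (s @ y) / (y @ y)                      # initial scaling H0 = gamma * I
    for (s, y), (a, rho) in zip(pairs, reversed(alphas)):  # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

# Pairs generated by a quadratic with Hessian A, so that y = A s.
rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 5.0])
pairs = [(s, A @ s) for s in rng.standard_normal((4, 3))]

# The implicit H satisfies the secant condition H y = s for the newest pair.
s_new, y_new = pairs[-1]
print(np.allclose(lbfgs_direction(y_new, pairs), s_new))
```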

Read more
Optimization And Control

Exponential Decay of Sensitivity in Graph-Structured Nonlinear Programs

We study solution sensitivity for nonlinear programs (NLPs) whose structure is induced by a graph G = (V, E). These graph-structured NLPs arise in many applications such as dynamic optimization, stochastic optimization, optimization with partial differential equations, and network optimization. We show that the sensitivity of the primal-dual solution at node i ∈ V against a data perturbation at node j ∈ V is bounded by Υρ^{d_G(i,j)} for constants Υ > 0 and ρ ∈ [0,1), where d_G(i,j) is the distance between i and j on G. In other words, the sensitivity of the solution decays exponentially with the distance to the perturbation point. This result, which we call exponential decay of sensitivity (EDS), holds under fairly standard assumptions used in classical NLP sensitivity theory: the strong second-order sufficiency condition and the linear independence constraint qualification. We also present conditions under which the constants (Υ, ρ) remain uniformly bounded; this allows us to characterize behavior for NLPs defined over subgraphs of infinite graphs (e.g., those arising in problems with unbounded domains). Our results provide new insights on how perturbations propagate through the NLP graph and on how the problem formulation influences such propagation. Specifically, we provide empirical evidence that positive objective curvature and constraint flexibility tend to dampen propagation. The developments are illustrated with numerical examples.
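The EDS phenomenon is easy to observe on a tiny chain-structured problem. The sketch below is our own illustration (an unconstrained quadratic rather than a general NLP, with the chain length and coupling weight chosen arbitrarily): the sensitivity of the solution component x_i to a perturbation of the data at node j decays geometrically with the distance |i - j|:

```python
import numpy as np

# Chain-structured quadratic program (a stand-in for a graph-structured NLP):
#   min_x  sum_i (x_i - d_i)^2 + kappa * sum_i (x_{i+1} - x_i)^2.
# The stationarity condition is (I + kappa * L) x = d with L the path-graph
# Laplacian, so dx/dd_j is column j of (I + kappa * L)^{-1}.
n, kappa, j = 40, 1.0, 20
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
S = np.linalg.inv(np.eye(n) + kappa * L)

sens = np.abs(S[:, j])                  # |dx_i / dd_j|
ratios = sens[j + 1:j + 10] / sens[j:j + 9]
print(ratios)  # roughly constant ratio < 1: geometric (exponential) decay
```

The near-constant ratio is the ρ of the bound Υρ^{d_G(i,j)} for this particular problem; changing kappa changes how quickly the perturbation is damped.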

Read more
Optimization And Control

Exterior Point Method for Completely Positive Factorization

Completely positive factorization (CPF) is a critical task with applications in many fields. This paper proposes a novel method for the CPF. Based on the idea of exterior point iteration, an optimization model is formulated whose aim is to orthogonally transform a symmetric low-rank factor into a nonnegative one. The optimization problem is solved iteratively via a modified nonlinear conjugate gradient method. Before convergence, the iterates generally lie on the exterior of the orthonormal manifold and of the closed set of transforms whose transformed matrices are nonnegative. Convergence analysis is given for the local or global optimum of the objective function, together with the iteration algorithm. Some potential issues that may affect the CPF are explored numerically. The exterior point method performs much better than other algorithms, not only in computational cost and accuracy, but also in its ability to address some hard cases of the CPF.

Read more
Optimization And Control

Extragradient and Extrapolation Methods with Generalized Bregman Distances for Saddle Point Problems

In this work, we introduce two algorithmic frameworks, named the Bregman extragradient method and the Bregman extrapolation method, for solving saddle point problems. The proposed frameworks not only include the well-known extragradient and optimistic gradient methods as special cases, but also generate new variants such as sparse extragradient and extrapolation methods. With the help of the recent concept of relative Lipschitzness and some tools related to Bregman distances, we establish upper bounds in terms of Bregman distances for "regret" measures. Further, we use those bounds to deduce the convergence rate of O(1/k) for the Bregman extragradient and Bregman extrapolation methods applied to smooth convex-concave saddle point problems. Our theory recovers the main discovery made in [Mokhtari et al. (2020), SIAM J. Optim., 30, pp. 3230-3251] for more general algorithmic frameworks with weaker assumptions, via a conceptually different approach.
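The classical extragradient scheme that the Bregman framework generalizes (recovered by taking the Euclidean distance as the Bregman distance) can be demonstrated on the simplest bilinear saddle point problem. The sketch below is our own illustration with an arbitrarily chosen stepsize: plain gradient descent-ascent spirals away from the saddle, while the extragradient step, which evaluates the gradient at an extrapolated midpoint, converges:

```python
import numpy as np

# Simplest bilinear saddle point problem: min_x max_y x * y, saddle at (0, 0).
eta = 0.5

def gda_step(x, y):
    """Plain (simultaneous) gradient descent-ascent."""
    return x - eta * y, y + eta * x

def eg_step(x, y):
    """Extragradient: take a trial step, then update with the gradient
    evaluated at the extrapolated point."""
    xm, ym = x - eta * y, y + eta * x
    return x - eta * ym, y + eta * xm

gda = eg = (1.0, 1.0)
for _ in range(50):
    gda = gda_step(*gda)
    eg = eg_step(*eg)

print(np.hypot(*gda), np.hypot(*eg))
# GDA spirals away from the saddle; extragradient contracts towards it.
```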

Read more
Optimization And Control

Factor-√2 Acceleration of Accelerated Gradient Methods

The optimized gradient method (OGM) provides a factor-√2 speedup upon Nesterov's celebrated accelerated gradient method in the convex (but non-strongly convex) setup. However, this improved acceleration mechanism has not been well understood; prior analyses of OGM relied on a computer-assisted proof methodology, so the proofs were opaque for humans despite being verifiable and correct. In this work, we present a new analysis of OGM based on a Lyapunov function and linear coupling. These analyses are developed and presented without the assistance of computers and are understandable by humans. Furthermore, we generalize OGM's acceleration mechanism and obtain a factor-√2 speedup in other setups: acceleration with a simpler rational stepsize, the strongly convex setup, and the mirror descent setup.
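The mechanism can be seen numerically by comparing Nesterov's method with OGM side by side. The sketch below is our own simplified illustration (a small quadratic with L = 1; for brevity we omit OGM's modified final-step parameter, so this is only the intermediate iteration): the two methods share the gradient step and the usual momentum term, and differ only in OGM's extra momentum term (theta_k / theta_{k+1}) (y_{k+1} - x_k):

```python
import numpy as np

# Convex quadratic f(x) = 0.5 * x' A x with smoothness constant L = 1.
A = np.diag(np.linspace(0.01, 1.0, 20))
grad = lambda x: A @ x
f = lambda x: 0.5 * x @ A @ x
x0 = np.ones(20)

def run(ogm, n_iters=100):
    """Nesterov's accelerated gradient method (ogm=False) or OGM (ogm=True).
    The two differ only in OGM's extra momentum term."""
    x = y = x0.copy()
    theta = 1.0
    for _ in range(n_iters):
        y_new = x - grad(x)                    # gradient step, stepsize 1/L = 1
        theta_new = (1.0 + np.sqrt(1.0 + 4.0 * theta**2)) / 2.0
        x_new = y_new + (theta - 1.0) / theta_new * (y_new - y)
        if ogm:
            x_new = x_new + theta / theta_new * (y_new - x)  # OGM's extra term
        x, y, theta = x_new, y_new, theta_new
    return f(y)

print(run(ogm=False), run(ogm=True))
```

Both objective gaps fall well below the O(L ||x0 - x*||^2 / k^2) worst-case bound; the factor-√2 claim concerns the constants in these worst-case guarantees, not any particular instance.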

Read more
