Featured Research

Optimization And Control

Convex Synthesis of Accelerated Gradient Algorithms

We present a convex solution for the design of generalized accelerated gradient algorithms for strongly convex objective functions with Lipschitz continuous gradients. We utilize integral quadratic constraints and the Youla parameterization from robust control theory to formulate a solution of the algorithm design problem as a convex semi-definite program. We establish explicit formulas for the optimal convergence rates and extend the proposed synthesis solution to extremum control problems.
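As a concrete point of reference, Nesterov's accelerated gradient method is one member of the family of algorithms that such synthesis procedures generalize. A minimal sketch on a strongly convex quadratic (all problem data below is illustrative, not from the paper):

```python
import numpy as np

# Strongly convex quadratic: f(x) = 0.5 x^T Q x - b^T x
Q = np.diag([1.0, 10.0, 100.0])          # strong convexity m = 1, smoothness L = 100
b = np.array([1.0, 2.0, 3.0])
m, L = 1.0, 100.0
kappa = L / m                            # condition number

grad = lambda x: Q @ x - b
x_star = np.linalg.solve(Q, b)           # exact minimizer, for reference

# Nesterov's method for strongly convex f: constant momentum coefficient
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
x = np.zeros(3)
y = np.zeros(3)
for _ in range(300):
    x_next = y - (1.0 / L) * grad(y)     # gradient step from the extrapolated point
    y = x_next + beta * (x_next - x)     # momentum extrapolation
    x = x_next

print(np.linalg.norm(x - x_star))        # essentially zero
```

The paper's contribution is to search over generalizations of this template (and certify their convergence rates) via a convex semi-definite program, rather than fixing the coefficients in advance.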

Optimization And Control

Copositive Duality for Discrete Markets and Games

Optimization problems with discrete decisions are nonconvex and thus lack strong duality, which limits the usefulness of tools such as shadow prices and the KKT conditions. It was shown in Burer (2009) that mixed-binary quadratic programs can be written as completely positive programs, which are convex. Completely positive reformulations of discrete optimization problems therefore have strong duality if a constraint qualification is satisfied. We apply this perspective in two ways. First, we write unit commitment in power systems as a completely positive program, and use the dual copositive program to design a new pricing mechanism. Second, we reformulate integer programming games in terms of completely positive programming, and use the KKT conditions to solve for pure strategy Nash equilibria. To facilitate implementation, we also design a cutting plane algorithm for solving copositive programs exactly.
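Roughly, for a mixed-binary quadratic program \(\min\{x^\top Q x + 2c^\top x : a_i^\top x = b_i,\ x \ge 0,\ x_j \in \{0,1\}\ (j \in B)\}\), Burer's completely positive reformulation reads (under a mild boundedness condition on the binary variables; \(\mathcal{C}^\ast\) denotes the cone of completely positive matrices):

```latex
\begin{aligned}
\min_{x,\,X} \quad & \langle Q, X \rangle + 2 c^\top x \\
\text{s.t.} \quad & a_i^\top x = b_i, \qquad \langle a_i a_i^\top, X \rangle = b_i^2, \quad i = 1,\dots,m, \\
& X_{jj} = x_j \quad (j \in B), \qquad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \in \mathcal{C}^\ast .
\end{aligned}
```

The dual of this convex program lives in the copositive cone, which is what enables the shadow-price and KKT arguments described above.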

Optimization And Control

Core Imaging Library -- Part I: a versatile Python framework for tomographic imaging

We present the Core Imaging Library (CIL), an open-source Python framework for tomographic imaging with particular emphasis on reconstruction of challenging datasets. Conventional filtered back-projection reconstruction tends to be insufficient for highly noisy, incomplete, non-standard or multi-channel data arising for example in dynamic, spectral and in situ tomography. CIL provides an extensive modular optimisation framework for prototyping reconstruction methods including sparsity and total variation regularisation, as well as tools for loading, preprocessing and visualising tomographic data. The capabilities of CIL are demonstrated on a synchrotron example dataset and three challenging cases spanning golden-ratio neutron tomography, cone-beam X-ray laminography and positron emission tomography.
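As a generic illustration of the kind of regularised reconstruction CIL's optimisation framework supports (this is not CIL's API, just a plain NumPy sketch with illustrative parameters): smoothed total-variation denoising of a 1D piecewise-constant signal by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 30)        # piecewise-constant ground truth
noisy = clean + 0.1 * rng.standard_normal(clean.size)

# Smoothed total variation: TV_eps(x) = sum_i sqrt((x_{i+1} - x_i)^2 + eps)
lam, eps, step = 0.2, 1e-3, 0.05
x = noisy.copy()
for _ in range(2000):
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)               # derivative of the smoothed |.|
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    x -= step * ((x - noisy) + lam * tv_grad) # step on 0.5||x - noisy||^2 + lam*TV_eps

# The regularised reconstruction is closer to the truth than the raw data
print(np.linalg.norm(x - clean) < np.linalg.norm(noisy - clean))   # True
```

In practice CIL composes such data-fidelity and regularisation terms with proximal algorithms rather than smoothing, but the variational principle is the same.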

Optimization And Control

Correlated Bandits for Dynamic Pricing via the ARC algorithm

The Asymptotic Randomised Control (ARC) algorithm provides a rigorous approximation to the optimal strategy for a wide class of Bayesian bandits, while retaining reasonable computational complexity. In particular, it allows a decision maker to observe signals in addition to their rewards, to incorporate correlations between the outcomes of different choices, and to have nontrivial dynamics for their estimates. The algorithm is guaranteed to asymptotically optimise the expected discounted payoff, with error depending on the initial uncertainty of the bandit. In this paper, we consider a batched bandit problem where observations arrive from a generalised linear model; we extend the ARC algorithm to this setting. We apply this to a classic dynamic pricing problem based on a Bayesian hierarchical model and demonstrate that the ARC algorithm outperforms alternative approaches.
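The ARC update itself is beyond a short sketch, but one ingredient it builds on, correlated Bayesian beliefs across arms updated in closed form, can be illustrated with a simple Thompson-sampling scheme (a hypothetical two-arm Gaussian setup, not the ARC algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.8])            # unknown to the decision maker
sigma = 0.5                                  # known reward noise std

# Correlated Gaussian prior over the two arm means: pulling one arm
# also sharpens the belief about the other.
m = np.zeros(2)
S = np.array([[1.0, 0.6],
              [0.6, 1.0]])

pulls = np.zeros(2, dtype=int)
for _ in range(1000):
    theta = rng.multivariate_normal(m, S)    # Thompson sampling draw
    a = int(np.argmax(theta))
    r = true_means[a] + sigma * rng.standard_normal()
    # Conjugate (Kalman-style) update of the joint belief from one observation
    k = S[:, a] / (S[a, a] + sigma**2)
    m = m + k * (r - m[a])
    S = S - np.outer(k, S[a])
    S = (S + S.T) / 2                        # keep the covariance symmetric
    pulls[a] += 1

print(pulls)                                 # the better arm (index 1) dominates
```

ARC replaces the random sampling step with an asymptotic approximation to the Bayes-optimal index, while keeping this kind of correlated belief dynamics.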

Optimization And Control

Cyclic Coordinate Dual Averaging with Extrapolation for Generalized Variational Inequalities

We propose the \emph{Cyclic cOordinate Dual avEraging with extRapolation (CODER)} method for generalized variational inequality problems. Such problems are fairly general and include composite convex minimization and min-max optimization as special cases. CODER is the first cyclic block coordinate method whose convergence rate is independent of the number of blocks (under a suitable Lipschitz definition), which fills the significant gap between cyclic coordinate methods and randomized ones that remained open for many years. Moreover, CODER provides the first theoretical guarantee for cyclic coordinate methods in solving generalized variational inequality problems under only monotonicity and Lipschitz continuity assumptions. To remove the dependence on the number of blocks, the analysis of CODER is based on a novel Lipschitz condition with respect to a Mahalanobis norm rather than the commonly used coordinate-wise Lipschitz condition; to be applicable to general variational inequalities, CODER leverages an extrapolation strategy inspired by the recent developments in primal-dual methods. Our theoretical results are complemented by numerical experiments, which demonstrate competitive performance of CODER compared to other coordinate methods.
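A minimal sketch of the underlying cyclic block coordinate idea (here with scalar blocks and exact coordinate minimization on a quadratic, i.e. Gauss-Seidel, not CODER's extrapolated dual-averaging update):

```python
import numpy as np

# Cyclic exact coordinate minimization of f(x) = 0.5 x^T Q x - b^T x
Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])             # symmetric positive definite
b = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)

for _ in range(100):                        # sweeps in a fixed cyclic order
    for i in range(3):                      # scalar "blocks"
        # minimize f over coordinate i with the others held fixed
        x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]

print(np.allclose(x, np.linalg.solve(Q, b)))   # True
```

Classical analyses of such cyclic sweeps pay a factor in the number of blocks relative to randomized selection; CODER's Mahalanobis-norm Lipschitz condition is what removes that factor.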

Optimization And Control

Cycling problems in linear programming

This paper provides a set of cycling problems in linear programming. These problems should be useful for researchers developing and testing new simplex algorithms. In fact, this set of problems is used to test a recently proposed double pivot simplex algorithm for linear programming.
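For context, cycling is a risk only at degenerate vertices, where more constraints are active than are needed to define the vertex. A toy degenerate LP is sketched below (problem data is illustrative, not from the paper's test set; modern solvers such as HiGHS employ anti-cycling safeguards):

```python
from scipy.optimize import linprog

# Maximize x1 + x2 subject to x1 <= 1, x2 <= 1, x1 + x2 <= 2, x >= 0.
# At the optimal vertex (1, 1) all three constraints are active although
# two suffice to define it -- the degeneracy under which naive pivot
# rules can stall or cycle.
c = [-1.0, -1.0]                            # linprog minimizes, so negate
A_ub = [[1.0, 0.0],
        [0.0, 1.0],
        [1.0, 1.0]]
b_ub = [1.0, 1.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, res.fun)                       # [1. 1.] -2.0
```

Dedicated cycling instances such as those in this paper exercise pivot rules far more severely than this toy example.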

Optimization And Control

DC Semidefinite Programming and Cone Constrained DC Optimization

In the first part of this paper we discuss possible extensions of the main ideas and results of constrained DC optimization to the case of nonlinear semidefinite programming problems (i.e. problems with matrix constraints). To this end, we analyse two different approaches to the definition of DC matrix-valued functions (namely, order-theoretic and componentwise), study some properties of convex and DC matrix-valued functions and demonstrate how to compute DC decompositions of some nonlinear semidefinite constraints appearing in applications. We also compute a DC decomposition of the maximal eigenvalue of a DC matrix-valued function, which can be used to reformulate DC semidefinite constraints as DC inequality constraints. In the second part of the paper, we develop a general theory of cone constrained DC optimization problems. Namely, we obtain local optimality conditions for such problems and study an extension of the DC algorithm (the convex-concave procedure) to the case of general cone constrained DC optimization problems. We analyse the global convergence of this method and present a detailed study of a version of the DCA utilising exact penalty functions. In particular, we provide two types of sufficient conditions for the convergence of this method to a feasible and critical point of a cone constrained DC optimization problem from an infeasible starting point.
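The convex-concave procedure that the paper extends can be sketched on a one-dimensional toy DC function f(x) = g(x) - h(x): each step linearizes the concave part -h at the current iterate and solves the resulting convex subproblem, which here has a closed form (illustrative example, not from the paper):

```python
# DCA / convex-concave procedure for f(x) = g(x) - h(x)
# with g(x) = x**4 and h(x) = x**2, both convex.
# Step k solves min_x g(x) - h'(x_k) * x, i.e. 4 x^3 = 2 x_k,
# whose solution is x = (x_k / 2) ** (1/3).
x = 2.0                                  # illustrative starting point
for _ in range(100):
    x = (x / 2.0) ** (1.0 / 3.0)

print(x)   # -> 0.7071..., the critical point x* = 1/sqrt(2) of f
```

Each iterate decreases f, and the limit is a critical point rather than a guaranteed global minimizer, which is exactly the convergence notion studied in the paper's cone constrained setting.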

Optimization And Control

Data Privacy in Bid-Price Control for Network Revenue Management

We study a network revenue management problem where multiple parties agree to share some of the capacities of the network. This collaboration is performed by constructing a large mathematical programming model available to all parties. The parties then use the solution of this model in their own bid-price control systems. In this setting, the major concern for the parties is the privacy of their input data and the optimal solutions containing their individual decisions. To address this concern, we propose an approach based on solving an alternative data-private model constructed with input masking via random transformations. Our main result shows that each party can safely recover only its own optimal decisions after the same data-private model is solved by all the parties. We also discuss the security of the transformed problem and consider several special cases where possible privacy leakage would require attention. Observing that the dense data-private model may take more time to solve than the sparse original non-private model, we further propose a modeling approach that introduces sparsity into the data-private model. Finally, we conduct simulation experiments to support our results.
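A much-simplified illustration of input masking via a random transformation (not the paper's scheme, which also protects the objective and the parties' individual decisions): multiplying the equality constraints by a random invertible matrix leaves the feasible set, and hence the optimal solution, unchanged while hiding the raw constraint data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Original (private) LP: min c^T x  s.t.  A x = b, x >= 0
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

# Mask the constraint data with a random invertible matrix M:
# {x : (M A) x = M b} equals {x : A x = b}, but M A and M b hide A and b.
M = rng.standard_normal((2, 2))
assert abs(np.linalg.det(M)) > 1e-9      # invertible with probability 1

orig = linprog(c, A_eq=A, b_eq=b, method="highs")
masked = linprog(c, A_eq=M @ A, b_eq=M @ b, method="highs")
print(np.allclose(orig.x, masked.x))     # True
```

Note that such masking can destroy sparsity: M A is generally dense even when A is sparse, which is the computational concern the paper's sparsity-preserving modeling approach addresses.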

Optimization And Control

Decentralized Distributed Optimization for Saddle Point Problems

We consider distributed convex-concave saddle point problems over arbitrary connected undirected networks and propose a decentralized distributed algorithm for their solution. The local functions distributed across the nodes are assumed to have global and local groups of variables. For the proposed algorithm we prove non-asymptotic convergence rate estimates with explicit dependence on the network characteristics. To supplement the convergence rate analysis, we propose lower bounds for strongly-convex-strongly-concave and convex-concave saddle-point problems over arbitrary connected undirected networks. We illustrate the considered problem setting by a particular application to distributed calculation of non-regularized Wasserstein barycenters.
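A toy decentralized gradient descent-ascent scheme on a separable saddle problem illustrates the setting (this is not the paper's algorithm; all data is illustrative). Note the small consensus error left by the constant stepsize, the kind of inexactness that motivates more refined decentralized methods:

```python
import numpy as np

# Node i holds f_i(x, y) = 0.5 (x - a_i)^2 - 0.5 (y - b_i)^2;
# the saddle point of sum_i f_i is (mean(a), mean(b)) = (3, 2).
a = np.array([1.0, 2.0, 6.0])
b = np.array([0.0, 3.0, 3.0])

# Doubly stochastic mixing matrix of a fully connected 3-node network
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros(3)                 # local copies of the min variable
y = np.zeros(3)                 # local copies of the max variable
step = 0.05
for _ in range(300):
    x = W @ x - step * (x - a)  # mixing followed by a local descent step
    y = W @ y - step * (y - b)  # mixing followed by a local ascent step

# Constant stepsize leaves a small residual consensus error around (3, 2)
print(x, y)
```

The network's mixing matrix W is exactly where the "network characteristics" enter the convergence rate estimates mentioned above.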

Optimization And Control

Decentralized Riemannian Gradient Descent on the Stiefel Manifold

We consider a distributed non-convex optimization problem in which a network of agents aims to minimize a global function over the Stiefel manifold. The global function is represented as a finite sum of smooth local functions, where each local function is associated with one agent and agents communicate with each other over an undirected connected graph. The problem is non-convex because the local functions are possibly non-convex (but smooth) and the Stiefel manifold is a non-convex set. We present a decentralized Riemannian stochastic gradient method (DRSGD) with a convergence rate of O(1/√K) to a stationary point. To achieve exact convergence with a constant stepsize, we also propose a decentralized Riemannian gradient tracking algorithm (DRGTA) with a convergence rate of O(1/K) to a stationary point. We use multi-step consensus to keep the iterates in a local (consensus) region. DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold.
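A minimal sketch of Riemannian gradient ascent on the Stiefel manifold (centralized and deterministic, so not DRSGD/DRGTA themselves): maximizing tr(X^T A X), whose optimum is the sum of the top-p eigenvalues of A, using tangent-space projection and a QR retraction. All data is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2
# Symmetric matrix with known spectrum; the top-2 eigenvalues are 6 and 5
eigvals = np.array([6.0, 5.0, 3.0, 2.0, 1.0, 0.5])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(eigvals) @ U.T

def retract(Y):
    """QR retraction of an n x p matrix back onto the Stiefel manifold."""
    Qf, R = np.linalg.qr(Y)
    return Qf * np.sign(np.diag(R))          # fix column signs

X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random feasible start
step = 0.05
for _ in range(1000):
    G = 2.0 * A @ X                          # Euclidean gradient of tr(X^T A X)
    xi = G - X @ (X.T @ G + G.T @ X) / 2.0   # project onto the tangent space at X
    X = retract(X + step * xi)               # ascent step followed by retraction

print(np.trace(X.T @ A @ X))                 # approaches 6 + 5 = 11
```

The decentralized algorithms in the paper interleave such Riemannian steps with consensus averaging across agents, which is where the multi-step consensus scheme becomes necessary.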
