Featured Research

Optimization And Control

Direct-Search for a Class of Stochastic Min-Max Problems

Recent applications in machine learning have renewed the interest of the community in min-max optimization problems. While gradient-based optimization methods are widely used to solve such problems, there are many scenarios where these techniques are not well suited, or not applicable at all because the gradient is not accessible. We investigate the use of direct-search methods, a class of derivative-free techniques that access the objective function only through an oracle. In this work, we design a novel algorithm in the context of min-max saddle point games, where the min and the max player are updated sequentially. We prove convergence of this algorithm under mild assumptions: the objective of the max player satisfies the Polyak-Łojasiewicz (PL) condition, while the min player has a nonconvex objective. Our method only requires oracle estimates whose accuracy is dynamically adjusted and which hold with a fixed probability. To the best of our knowledge, our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
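For intuition only, the sketch below illustrates the sequential direct-search idea described in the abstract: alternately polling around the min player x and the max player y, accepting a trial point only if the (possibly noisy) oracle value improves by a forcing term, and expanding or shrinking the step sizes accordingly. The polling set, step-size rules, and the oracle F are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def poll_directions(n):
    """Coordinate directions +/- e_i, a simple positive spanning set."""
    eye = np.eye(n)
    return np.vstack([eye, -eye])

def direct_search_minmax(F, x, y, alpha=1.0, beta=1.0, gamma=1e-4,
                         expand=2.0, shrink=0.5, iters=100):
    """Sequential direct search: x tries to decrease F, y tries to increase it.

    F(x, y) is a zeroth-order (possibly noisy) oracle; alpha and beta are the
    step sizes of the min and max players; gamma scales the forcing term.
    """
    for _ in range(iters):
        # min player: poll around x, accept only sufficient decrease
        base = F(x, y)
        for d in poll_directions(len(x)):
            trial = x + alpha * d
            if F(trial, y) < base - gamma * alpha**2:
                x, alpha = trial, alpha * expand
                break
        else:
            alpha *= shrink  # unsuccessful poll: shrink the step

        # max player: poll around y, accept only sufficient increase
        base = F(x, y)
        for d in poll_directions(len(y)):
            trial = y + beta * d
            if F(x, trial) > base + gamma * beta**2:
                y, beta = trial, beta * expand
                break
        else:
            beta *= shrink
    return x, y
```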

Read more
Optimization And Control

Disordered high-dimensional optimal control

Mean field optimal control problems are a class of optimization problems that arise from optimal control when applied to the many-body setting. In the noisy case one has a set of controllable stochastic processes and a cost function that is a functional of their trajectories. The goal of the optimization is to minimize this cost over the control variables. Here we consider the case in which we have N stochastic processes, or agents, with the associated control variables, which interact in a disordered way so that the resulting cost function is random. The goal is to find the average minimal cost as N → ∞, when a typical realization of the quenched random interactions is considered. We introduce a simple model and show how to perform a dimensional reduction from the infinite-dimensional case to a set of one-dimensional stochastic partial differential equations of the Hamilton-Jacobi-Bellman and Fokker-Planck type. The statistical properties of the corresponding stochastic terms must be computed self-consistently, as we show explicitly.
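For orientation, a generic one-dimensional stochastic control problem couples a Hamilton-Jacobi-Bellman equation for the value function V with a Fokker-Planck equation for the density ρ of the controlled process; the sketch below shows this standard pair for a controlled diffusion with running cost L(x,u), not the paper's disordered model, whose drift, cost, and self-consistent noise terms are specific to the paper.

```latex
% Generic HJB / Fokker-Planck pair for dX_t = u_t\,dt + \sqrt{2\nu}\,dW_t:
-\partial_t V(x,t) = \min_u \big\{ L(x,u) + u\,\partial_x V(x,t) \big\}
                     + \nu\,\partial_{xx} V(x,t),
\qquad
\partial_t \rho(x,t) = -\partial_x\!\big( u^*(x,t)\,\rho(x,t) \big)
                       + \nu\,\partial_{xx}\rho(x,t),
\quad
u^*(x,t) = \arg\min_u \big\{ L(x,u) + u\,\partial_x V(x,t) \big\}.
```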

Read more
Optimization And Control

Dissipativity and optimal control

The close link between dissipativity and optimal control is already apparent in Jan C. Willems' first papers on the subject. In recent years, research on this link has been revived with a particular focus on nonlinear problems and applications in model predictive control (MPC). This paper surveys these recent developments and some of Willems' and other authors' earlier results.

Read more
Optimization And Control

Dissipativity, reciprocity and passive network synthesis: from Jan Willems' seminal Dissipative Dynamical Systems papers to the present day

The dissipativity concept sits at the intersection of physics, systems theory and control engineering, as a natural generalisation of passive systems that dissipate energy. It relates properties of the external behavior of systems to their internal state, and connects such wide-ranging subjects as optimal control, algebraic Riccati equations, linear matrix inequalities, complex functions, and spectral factorization. Its applications include the analysis and design of interconnected systems (such as cyber-physical systems), robustness and the absolute stability problem (the passivity, small-gain, circle and Popov theorems and the theory of integral quadratic constraints), and network synthesis (of electrical, mechanical and multi-physics systems). In this article, we detail recent developments in the treatment of dissipativity theory for systems that are not necessarily controllable and that need not lend themselves naturally to an input-state-output perspective, drawing inspiration from the behavioral theory of Jan Willems and collaborators. Such systems are prevalent among physical systems, and we will illustrate the considered concepts using simple electric circuit examples.
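The basic notion referred to throughout is Willems' dissipation inequality: a system with supply rate s is dissipative if it admits a nonnegative storage function S such that, along all trajectories,

```latex
S\big(x(t_1)\big) \;\le\; S\big(x(t_0)\big) + \int_{t_0}^{t_1} s\big(u(t), y(t)\big)\,dt
\qquad \text{for all } t_1 \ge t_0,
```

with passivity corresponding to the supply rate s(u, y) = u^\top y.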

Read more
Optimization And Control

Distributed Multi-Building Coordination for Demand Response

This paper presents a distributed optimization algorithm tailored for solving optimal control problems arising in multi-building coordination. The buildings, coordinated by a grid operator, join a demand response program to balance voltage surges using an energy-cost-based criterion. In order to model the hierarchical structure of the building network, we formulate a distributed convex optimization problem with separable objectives and coupled affine equality constraints. A variant of the Augmented Lagrangian based Alternating Direction Inexact Newton (ALADIN) method for solving this class of problems is then presented along with a convergence guarantee. To illustrate the effectiveness of the proposed method, we compare it to the Alternating Direction Method of Multipliers (ADMM) by running both an ALADIN-based and an ADMM-based model predictive controller on a benchmark case study.
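In generic notation (not necessarily the paper's exact symbols), the problem class on which the ALADIN/ADMM comparison is run has the standard separable form with a coupling affine equality constraint:

```latex
\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{subject to} \quad \sum_{i=1}^{N} A_i x_i = b,
```

where f_i is the convex cost of building i and the affine constraint encodes the grid-level coupling enforced by the operator.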

Read more
Optimization And Control

Distributed Networked Real-time Learning

Many machine learning algorithms have been developed under the assumption that data sets are already available in batch form. Yet in many application domains data only becomes available sequentially over time, via compute nodes in different geographic locations. In this paper, we consider the problem of learning a model when streaming data cannot be transferred to a single location in a timely fashion. In such cases, a distributed learning architecture relying on a network of interconnected "local" nodes is required. We propose a distributed scheme in which every local node implements stochastic gradient updates based on a local data stream. To ensure robust estimation, a network regularization penalty is used to maintain a measure of cohesion in the ensemble of models. We show that the ensemble average approximates a stationary point and characterize the degree to which individual models differ from the ensemble average. We compare the results with federated learning and conclude that the proposed approach is more robust to heterogeneity in data streams (data rates and estimation quality). We illustrate the results with an application to image classification using a deep learning model based on convolutional neural networks.
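A minimal sketch of the kind of update described, assuming a synchronous round in which each node takes a stochastic gradient step on its local stream plus a network regularization term pulling it toward its neighbours; the penalty weight lam, the graph representation, and grad_f are placeholders, not the paper's exact scheme.

```python
import numpy as np

def distributed_sgd_step(models, neighbors, grad_f, batches, eta=0.01, lam=0.1):
    """One synchronous round of network-regularized local SGD.

    models[i]    : parameter vector of node i (numpy array)
    neighbors[i] : list of indices of nodes connected to node i
    grad_f(w, b) : stochastic gradient of the local loss at w on mini-batch b
    lam          : network regularization weight enforcing cohesion
    """
    new_models = []
    for i, w in enumerate(models):
        g = grad_f(w, batches[i])                            # local stochastic gradient
        cohesion = sum(w - models[j] for j in neighbors[i])  # pull toward neighbours
        new_models.append(w - eta * (g + lam * cohesion))
    return new_models
```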

Read more
Optimization And Control

Distributed Newton Optimization with Maximized Convergence Rate

The distributed optimization problem is set up in a collection of nodes interconnected via a communication network. The goal is to find the minimizer of a global objective function formed by the sum of partial functions locally known at each node. A number of methods are available for addressing this problem, each with different advantages. The goal of this work is to achieve the maximum possible convergence rate. As a first step towards this end, we propose a new method which we show converges faster than the other available options. As with most distributed optimization methods, the convergence rate depends on a step-size parameter. As a second step towards our goal, we complement the proposed method with a fully distributed method for estimating the optimal step size, i.e., the one that maximizes convergence speed. We provide theoretical guarantees for the convergence of the resulting method in a neighborhood of the solution. Also, for the case in which the global objective function has a single local minimum, we provide a different step-size selection criterion together with theoretical guarantees for convergence. We present numerical experiments showing that, when using the same step size, our method converges significantly faster than its rivals. Experiments also show that the distributed step-size estimation method achieves an asymptotic convergence rate very close to the theoretical maximum.
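The abstract does not spell out the update rule, so purely as a generic illustration of a distributed Newton-type iteration (a damped local Newton step followed by neighbour averaging through a mixing matrix, with alpha playing the role of the tuned step size), one might write something like the following; all names here are hypothetical and this is not the paper's method.

```python
import numpy as np

def distributed_newton_step(x, grads, hessians, weights, alpha=1.0):
    """Generic distributed Newton-type update (illustrative only).

    x[i]        : current iterate at node i
    grads[i]    : gradient of the local objective f_i at x[i]
    hessians[i] : Hessian of f_i at x[i]
    weights     : doubly-stochastic mixing matrix of the communication graph
    alpha       : step size (the quantity whose choice governs the rate)
    """
    n = len(x)
    # local Newton direction at each node
    d = [np.linalg.solve(hessians[i], grads[i]) for i in range(n)]
    # damped Newton step, then averaging with neighbours
    local = [x[i] - alpha * d[i] for i in range(n)]
    return [sum(weights[i][j] * local[j] for j in range(n)) for i in range(n)]
```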

Read more
Optimization And Control

Distributed Optimization with Coupling Constraints

In this paper, we develop a novel distributed algorithm for addressing convex optimization with both nonlinear inequality and linear equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be fully coupled. Specifically, we first separate the constraints into three groups, and design two primal-dual methods and utilize a virtual-queue-based method to handle each group of the constraints independently. Then, we integrate these three methods in a strategic way, leading to an integrated primal-dual proximal (IPLUX) algorithm, and enable the distributed implementation of IPLUX. We show that IPLUX achieves an O(1/k) rate of convergence in terms of optimality and feasibility, which is stronger than the convergence results of the state-of-the-art distributed algorithms for convex optimization with coupling nonlinear constraints. Finally, IPLUX exhibits competitive practical performance in the simulations.
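In generic notation (not necessarily the paper's exact formulation), the problem class addressed is nonsmooth convex minimization with both coupling nonlinear inequality and coupling linear equality constraints:

```latex
\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} g_i(x_i) \le 0,
\qquad \sum_{i=1}^{N} A_i x_i = b,
```

with each f_i convex and possibly nonsmooth and each g_i convex, so that both constraint groups couple all agents.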

Read more
Optimization And Control

Distributed Optimization with Coupling Constraints via Dual Proximal Gradient Method with Applications to Asynchronous Networks

In this paper, we consider solving a distributed optimization problem (DOP) with coupling constraints in a multi-agent network based on the proximal gradient method. In this problem, each agent aims to minimize an individual cost function composed of both smooth and non-smooth parts. To this end, we derive the dual problem via the Fenchel conjugate, which leads to two kinds of dual problems: a consensus-based constrained problem and an augmented unconstrained problem. In the first scenario, we propose a fully distributed dual proximal gradient (D-DPG) algorithm, where the agents make updates using only the dual information of their neighbours and local step sizes. Moreover, if the non-smooth parts of the objective functions have certain simple structures, the agents only need to update the dual variables with some simple operations, which reduces the overall computational complexity. In the second scenario, an augmented dual proximal gradient (A-DPG) algorithm is proposed, which allows for asymmetric interpretations of the global constraints by the agents and can be more efficient than the D-DPG algorithm for some specially structured DOPs. Based on the A-DPG algorithm, an asynchronous dual proximal gradient (Asyn-DPG) algorithm is proposed for asynchronous networks, where each agent updates its strategy with a heterogeneous step size and possibly outdated dual information from others. In all the discussed scenarios, analytical (ergodic) convergence rates are derived. The effectiveness of the proposed algorithms is verified by solving a social welfare optimization problem in the electricity market.
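As a reminder of the building block applied on the dual, a proximal gradient step combines a gradient step on the smooth part with the proximal operator of the nonsmooth part. The sketch below shows the generic, centralized update with an l1 prox as an example of a "simple structure"; it is not the distributed D-DPG/A-DPG iteration itself, and the names are placeholders.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, one example of a 'simple' nonsmooth part."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(lam, grad_smooth, step, prox=soft_threshold):
    """One generic (dual) proximal gradient update.

    lam          : current dual variable
    grad_smooth  : callable returning the gradient of the smooth part at lam
    step         : step size
    prox(v, t)   : proximal operator of the nonsmooth part scaled by t
    """
    return prox(lam - step * grad_smooth(lam), step)
```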

Read more
Optimization And Control

Distributed Zero-Order Optimization under Adversarial Noise

We study the problem of distributed zero-order optimization for a class of strongly convex functions formed by the average of local objectives associated with different nodes in a prescribed network of connections. We propose a distributed zero-order projected gradient descent algorithm to solve this problem. Exchange of information within the network is permitted only between neighbouring nodes. A key feature of the algorithm is that it queries only function values, subject to a general noise model that does not require zero-mean or independent errors. We derive upper bounds for the average cumulative regret and the optimization error of the algorithm, which highlight the roles played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter of the global objective, and certain smoothness properties of the local objectives. When the bound is specialized to the standard non-distributed setting, we obtain an improvement over the state-of-the-art bounds, due to the novel gradient estimation procedure proposed here. We also comment on lower bounds and observe that the dependence of the bound on certain function parameters is nearly optimal.
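For intuition, a common randomized two-point zero-order gradient estimator and a projected descent step look as follows. The paper's own estimator and its treatment of adversarial (non-zero-mean, dependent) noise differ, so this is only a generic sketch under the usual assumption of a noisy value oracle f and a projection onto the constraint set.

```python
import numpy as np

def zero_order_gradient(f, x, h=1e-2):
    """Two-point randomized gradient estimate from (noisy) function values."""
    u = np.random.randn(*x.shape)
    u /= np.linalg.norm(u)                      # direction uniform on the unit sphere
    return (f(x + h * u) - f(x - h * u)) / (2 * h) * len(x) * u

def projected_zo_step(f, x, project, eta=0.1, h=1e-2):
    """One projected zero-order gradient descent step on a local objective."""
    g = zero_order_gradient(f, x, h)
    return project(x - eta * g)
```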

Read more
