Featured Research

Optimization And Control

Accelerated Proximal Envelopes: Application to the Coordinate Descent Method

This article is devoted to one particular case of using universal accelerated proximal envelopes to obtain computationally efficient accelerated versions of methods used to solve various optimization problem setups. In this paper, we propose a proximally accelerated coordinate descent method that achieves an efficient per-iteration algorithmic complexity and allows one to take advantage of the problem's sparseness. An example of applying the proposed approach to the optimization of a SoftMax-like function is considered, for which the described method allows one to weaken the dependence of the computational complexity on the problem dimension n by a factor of O(√n), and in practice it demonstrates faster convergence in comparison with standard methods.
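
For a concrete picture of why coordinate updates pair well with sparsity, the fragment below is a minimal sketch of plain randomized coordinate descent on a log-sum-exp (SoftMax-like) objective f(x) = log Σ_i exp((Ax)_i), maintaining the product Ax incrementally so that each update touches only one column of A. It is not the paper's accelerated proximal-envelope method; all names and constants are illustrative.

import numpy as np

def logsumexp_obj(z):
    # Objective value computed from the maintained vector z = A @ x.
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def coordinate_descent(A, x0, iters=1000, seed=0):
    # Plain (non-accelerated) randomized coordinate descent on f(x) = logsumexp(A @ x).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    z = A @ x                              # maintained incrementally below
    L = (A ** 2).sum(axis=0) + 1e-12       # crude per-coordinate curvature bounds
    for _ in range(iters):
        j = rng.integers(x.size)
        p = np.exp(z - z.max())
        p /= p.sum()                       # softmax(A @ x)
        g_j = A[:, j] @ p                  # partial derivative of f with respect to x_j
        delta = -g_j / L[j]
        x[j] += delta
        z += delta * A[:, j]               # costs O(nnz of column j), not a full matvec
    return x, logsumexp_obj(z)

The incremental update of z is what lets a sparse column translate directly into a cheap iteration.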

Read more
Optimization And Control

Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization

We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated setting. To do so, we propose specific model-based algorithms and an acceleration scheme for which we provide non-asymptotic convergence guarantees, which are order-optimal in all problem-dependent constants and provide linear speedup in minibatch size, while maintaining the desirable robustness traits (e.g., to stepsize) of the aProx family. Additionally, we show improved convergence rates and matching lower bounds identifying new fundamental constants for "interpolation" problems, whose importance in statistical machine learning is growing; this, for example, gives a parallelization strategy for alternating projections. We corroborate our theoretical results with empirical testing to demonstrate the gains that accurate modeling, acceleration, and minibatching provide.
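
As a rough illustration of the model-based viewpoint (not the paper's minibatch or accelerated schemes), the sketch below implements a truncated-model update for a single nonnegative loss: the linear model of the loss is clipped at a known lower bound (zero here), which caps the step length and gives the robustness to stepsize choice mentioned above. Function names are hypothetical.

import numpy as np

def truncated_model_step(x, loss_val, grad, alpha, lower_bound=0.0):
    # One aProx-style truncated-model step for a single sample. Minimizing
    # max(loss_val + <grad, y - x>, lower_bound) + ||y - x||^2 / (2*alpha)
    # gives a gradient step whose length is capped so the linearization
    # never drops below lower_bound.
    g = np.asarray(grad, dtype=float)
    gnorm2 = float(g @ g)
    if gnorm2 == 0.0:
        return np.asarray(x, dtype=float)
    t = min(alpha, (loss_val - lower_bound) / gnorm2)
    return np.asarray(x, dtype=float) - t * g

# Example on a hinge-type loss l(x) = max(0, 1 - b * <a, x>), which is nonnegative.
a, b = np.array([1.0, -2.0]), 1.0
x = np.zeros(2)
loss = max(0.0, 1.0 - b * (a @ x))
grad = -b * a if loss > 0 else np.zeros_like(a)
x_next = truncated_model_step(x, loss, grad, alpha=10.0)

In this example the truncated step lands exactly on the zero-loss set even though the nominal stepsize alpha is far too large, which is the robustness the truncation buys.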

Read more
Optimization And Control

Accelerating Derivative-Free Optimization with Dimension Reduction and Hyperparameter Learning

We consider convex, black-box objective functions with additive or multiplicative noise with a high-dimensional parameter space and a data space of lower dimension, where gradients of the map exist but may be inaccessible. We investigate Derivative-Free Optimization (DFO) in this setting and propose a novel method, Active STARS (ASTARS), based on STARS (Chen and Wild, 2015) and dimension reduction in parameter space via Active Subspace (AS) methods (Constantine, 2015). STARS hyperparameters are inversely proportional to the known dimension of parameter space, resulting in heavy smoothing and small step sizes for large dimensions. When possible, ASTARS leverages a lower-dimensional AS, defining a set of directions in parameter space that cause the majority of the variance in function values. ASTARS iterates are updated with steps taken only in the AS, reducing the value of the objective function more efficiently than STARS, which updates iterates in the full parameter space. Computational costs may be reduced further by learning ASTARS hyperparameters and the AS, reducing the total evaluations of the objective function and eliminating the requirement that the user specify hyperparameters, which may be unknown in our setting. We call this method Fully Automated ASTARS (FAASTARS). We show that STARS and ASTARS both converge, with a certain complexity, even with inexact, estimated hyperparameters. We also find that FAASTARS converges with the use of estimated ASs and hyperparameters. We explore the effectiveness of ASTARS and FAASTARS in numerical examples comparing both methods to STARS.
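
The snippet below is a toy version of a single smoothing step restricted to an estimated active subspace, in the spirit of ASTARS; it omits STARS's noise-aware choice of the smoothing and step parameters, and the active-subspace basis is simply assumed to be given. All names are illustrative.

import numpy as np

def active_subspace_step(f, x, basis, h=1e-3, step=1e-2, rng=None):
    # One finite-difference smoothing step taken only within the active subspace.
    # `basis` is a (dim, k) matrix whose columns span the estimated active subspace.
    rng = rng or np.random.default_rng()
    u = basis @ rng.standard_normal(basis.shape[1])   # random direction inside the subspace
    u /= np.linalg.norm(u)
    d = (f(x + h * u) - f(x)) / h                     # forward-difference directional derivative
    return x - step * d * u

# Toy objective that only varies along the first coordinate; its active subspace is e_1.
f = lambda x: (x[0] - 3.0) ** 2
x = np.zeros(5)
basis = np.eye(5)[:, :1]
for _ in range(200):
    x = active_subspace_step(f, x, basis)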

Read more
Optimization And Control

Acceleration Methods

This monograph covers some recent advances on a range of acceleration techniques frequently used in convex optimization. We first use quadratic optimization problems to introduce two key families of methods, momentum and nested optimization schemes, which coincide in the quadratic case to form the Chebyshev method, whose complexity is analyzed using Chebyshev polynomials. We discuss momentum methods in detail, starting with the seminal work of Nesterov (1983), and structure convergence proofs using a few master templates, such as that of optimized gradient methods, which have the key benefit of showing how momentum methods maximize convergence rates. We further cover proximal acceleration techniques, at the heart of the Catalyst and Accelerated Hybrid Proximal Extragradient frameworks, using similar algorithmic patterns. Common acceleration techniques directly rely on the knowledge of some regularity parameters of the problem at hand, and we conclude by discussing restart schemes, a set of simple techniques to reach nearly optimal convergence rates while adapting to unobserved regularity parameters.
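
To make the momentum idea concrete, here is the textbook Nesterov accelerated gradient iteration for an L-smooth convex function, the simplest member of the family analyzed above; the monograph itself covers far more general templates, restarts, and proximal variants.

import numpy as np

def nesterov_agd(grad, x0, L, iters=200):
    # Standard accelerated gradient descent for an L-smooth convex objective.
    x = np.asarray(x0, dtype=float).copy()
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                          # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Quadratic example: f(x) = 0.5 * x^T A x - b^T x, so grad(x) = A x - b and L = lambda_max(A).
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = nesterov_agd(lambda x: A @ x - b, np.zeros(3), L=100.0)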

Read more
Optimization And Control

Actor-Critic Method for High Dimensional Static Hamilton--Jacobi--Bellman Partial Differential Equations based on Neural Networks

We propose a novel numerical method for high-dimensional Hamilton-Jacobi-Bellman (HJB) type elliptic partial differential equations (PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled by an actor-critic framework inspired by reinforcement learning, based on neural network parametrization of the value and control functions. Within the actor-critic framework, we employ a policy gradient approach to improve the control, while for the value function, we derive a variance-reduced least-squares temporal difference method (VR-LSTD) using stochastic calculus. To numerically discretize the stochastic control problem, we employ an adaptive stepsize scheme to improve the accuracy near the domain boundary. Numerical examples up to 20 spatial dimensions, including linear quadratic regulators, stochastic Van der Pol oscillators, and diffusive Eikonal equations, are presented to validate the effectiveness of our proposed method.
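
The structure of the method can be summarized as an alternation between a critic fit and an actor improvement along simulated controlled trajectories. The skeleton below only fixes that control flow; the networks, the SDE discretization with adaptive stepsize, the VR-LSTD critic loss, and the policy-gradient actor update are left as user-supplied callables, so every name here is a placeholder rather than the paper's implementation.

def actor_critic_hjb(value, policy, simulate, critic_update, actor_update, rounds=100):
    # Schematic actor-critic alternation for an HJB-type control problem.
    #   simulate(policy)                    -> batch of discretized controlled trajectories
    #   critic_update(value, batch)         -> new value approximation (e.g. an LSTD-style fit)
    #   actor_update(policy, value, batch)  -> improved policy (e.g. a policy-gradient step)
    for _ in range(rounds):
        batch = simulate(policy)
        value = critic_update(value, batch)
        policy = actor_update(policy, value, batch)
    return value, policy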

Read more
Optimization And Control

Admission Control for Double-ended Queues

We consider a controlled double-ended queue consisting of two classes of customers, labeled sellers and buyers. The sellers and buyers arrive in a trading market according to two independent renewal processes. Whenever there is a seller and buyer pair, they are matched and leave the system instantaneously. The matching follows the first-come-first-match service discipline. Customers who cannot be matched immediately must wait in their designated queue, and they are assumed to be impatient, with generally distributed patience times. The control problem is concerned with the tradeoff between blocking and abandonment, and its objective is to choose optimal queue capacities (buffer lengths) for sellers and buyers to minimize an infinite-horizon discounted linear cost functional consisting of holding costs and penalty costs for blocking and abandonment. When the arrival intensities of both customer classes tend to infinity in concert, we use a heavy traffic approximation to formulate an approximate diffusion control problem (DCP) and derive an optimal threshold policy for the DCP. Finally, we employ the DCP solution to establish an easy-to-implement, asymptotically optimal threshold policy for the original queueing control problem.
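
A threshold (finite-buffer) policy of the kind obtained here is easy to simulate. The sketch below uses exponential interarrival and patience times for simplicity (the model above allows general renewal arrivals and generally distributed patience) and encodes the state as a single integer: positive values count waiting sellers, negative values count waiting buyers. All rates and thresholds are illustrative.

import numpy as np

def simulate_double_ended_queue(lam_s, lam_b, theta_s, theta_b, B_s, B_b, T, seed=0):
    # q > 0: q sellers waiting; q < 0: -q buyers waiting (matching is instantaneous,
    # so both sides are never queued at the same time).
    rng = np.random.default_rng(seed)
    t, q = 0.0, 0
    blocked = abandoned = matched = 0
    while t < T:
        rate_s_ab = theta_s * max(q, 0)        # abandonment rate of waiting sellers
        rate_b_ab = theta_b * max(-q, 0)       # abandonment rate of waiting buyers
        total = lam_s + lam_b + rate_s_ab + rate_b_ab
        t += rng.exponential(1.0 / total)
        u = rng.uniform(0.0, total)
        if u < lam_s:                          # seller arrival
            if q >= B_s:
                blocked += 1                   # seller buffer full: admission refused
            elif q < 0:
                q += 1; matched += 1           # matched with a waiting buyer
            else:
                q += 1
        elif u < lam_s + lam_b:                # buyer arrival
            if -q >= B_b:
                blocked += 1                   # buyer buffer full: admission refused
            elif q > 0:
                q -= 1; matched += 1           # matched with a waiting seller
            else:
                q -= 1
        elif u < lam_s + lam_b + rate_s_ab:
            q -= 1; abandoned += 1             # a waiting seller loses patience
        else:
            q += 1; abandoned += 1             # a waiting buyer loses patience
    return dict(blocked=blocked, abandoned=abandoned, matched=matched)

stats = simulate_double_ended_queue(lam_s=1.0, lam_b=0.9, theta_s=0.2, theta_b=0.2,
                                    B_s=5, B_b=5, T=10_000.0)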

Read more
Optimization And Control

Adversarial Resilience for Sampled-Data Systems under High-Relative-Degree Safety Constraints

Control barrier functions (CBFs) have recently become a powerful method for rendering desired safe sets forward invariant in single- and multi-agent systems. In the multi-agent case, prior literature has considered scenarios where all agents cooperate to ensure that the corresponding set remains invariant. However, these works do not consider scenarios where a subset of the agents behave adversarially with the intent to violate safety bounds. In addition, prior results on multi-agent CBFs typically assume that control inputs are continuous and do not consider sampled-data dynamics. This paper presents a framework for normally-behaving agents in a multi-agent system with heterogeneous control-affine, sampled-data dynamics to render a safe set forward invariant in the presence of adversarial agents. The proposed approach considers several aspects of practical control systems, including input constraints, clock asynchrony and disturbances, and distributed calculation of control inputs. Our approach also considers functions describing safe sets that have high relative degree with respect to the system dynamics. The efficacy of these results is demonstrated through simulations.
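
For intuition about the basic CBF mechanism, in its simplest continuous-time, single-agent, relative-degree-one form (much simpler than the sampled-data, adversarial, high-relative-degree setting treated above), the snippet below filters a nominal control for a scalar single integrator so that the safe set {x >= 0}, encoded by h(x) = x, stays forward invariant. Names and gains are illustrative.

import numpy as np

def cbf_filter(x, u_nom, alpha=1.0):
    # Safe set: h(x) = x >= 0 for the single integrator x_dot = u.
    # CBF condition: h_dot + alpha * h = u + alpha * x >= 0, i.e. u >= -alpha * x.
    # The closest admissible input to u_nom is a one-sided clip.
    return max(u_nom, -alpha * x)

def simulate(x0=1.0, dt=0.01, steps=500):
    x, traj = x0, []
    for _ in range(steps):
        u_nom = -2.0            # nominal controller tries to drive x negative
        u = cbf_filter(x, u_nom)
        x = x + dt * u          # explicit Euler step (the guarantee here is continuous-time)
        traj.append(x)
    return np.array(traj)

traj = simulate()
assert traj.min() > -1e-6       # x stays (numerically) nonnegative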

Read more
Optimization And Control

Aggregation functions on n-dimensional ordered vectors equipped with an admissible order and an application in multi-criteria group decision-making

n-Dimensional fuzzy sets are an extension of fuzzy sets in which the membership values are increasingly ordered n-tuples of real numbers in the unit interval [0,1], called n-dimensional intervals. The set of n-dimensional intervals is denoted by L_n([0,1]). This paper investigates semi-vector spaces over a weak semifield, aggregation functions with respect to an admissible order on the set of n-dimensional intervals, and the construction of aggregation functions on L_n([0,1]) based on the operations of the semi-vector spaces. In particular, extensions of the family of OWA and weighted average aggregation functions are investigated. Finally, we develop a multi-criteria group decision-making method based on n-dimensional aggregation functions with respect to an admissible order and give an illustrative example.
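
As a point of reference for the aggregation functions being extended, the snippet below implements the ordinary OWA operator on numbers in [0,1] and then applies it componentwise to n-dimensional intervals. The paper's construction is more delicate, since it orders the intervals themselves with respect to an admissible order on L_n([0,1]) rather than componentwise; all names here are illustrative.

import numpy as np

def owa(values, weights):
    # Ordered weighted average: the i-th weight multiplies the i-th largest input.
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and v.shape == w.shape
    return float(v @ w)

def owa_componentwise(intervals, weights):
    # Naive componentwise extension to n-dimensional intervals (rows are intervals,
    # columns are the increasingly ordered components). Componentwise aggregation
    # keeps the output components ordered, so the result is again an n-dimensional interval.
    X = np.asarray(intervals, dtype=float)
    return np.array([owa(X[:, k], weights) for k in range(X.shape[1])])

# Two 3-dimensional intervals aggregated with weights (0.6, 0.4).
A = np.array([[0.1, 0.4, 0.9],
              [0.2, 0.3, 0.7]])
print(owa_componentwise(A, [0.6, 0.4]))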

Read more
Optimization And Control

Algorithms for optimal control of hybrid systems with sliding motion

This paper concerns two algorithms for solving optimal control problems with hybrid systems. The first algorithm is aimed at hybrid systems exhibiting sliding modes and has several features that distinguish it from other algorithms for problems described by hybrid systems. First, it can cope with hybrid systems that exhibit sliding modes. Second, the system's motion on the switching surface is described by index-2 differential-algebraic equations, which guarantees accurate tracking of the sliding motion surface. Third, the gradients of the problem's functionals are evaluated with the help of adjoint equations. The adjoint equations presented in the paper take into account sliding motion and exhibit jump conditions at transition times. We state optimality conditions in the form of a weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise differentiable controls. The second algorithm is for optimal control problems with hybrid systems that do not exhibit sliding motion. For this algorithm, we assume that the control functions are measurable functions. For each algorithm, we show that every accumulation point of the sequence generated by the algorithm satisfies the weak maximum principle.
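
The sketch below illustrates only the first ingredient such algorithms need: integrating one mode of a hybrid system with event detection so the trajectory stops exactly on the switching surface (here s(x) = x[1] = 0, with a made-up vector field). It does not implement the index-2 DAE sliding dynamics or the adjoint equations described above.

import numpy as np
from scipy.integrate import solve_ivp

def mode_dynamics(t, x):
    # Hypothetical smooth dynamics of one discrete mode.
    return [x[1], -1.0 - 0.1 * x[0]]

def switching_surface(t, x):
    # s(x) = x[1]; integration stops when the trajectory reaches s(x) = 0.
    return x[1]
switching_surface.terminal = True

def integrate_until_switch(x0, t_end=10.0):
    sol = solve_ivp(mode_dynamics, (0.0, t_end), x0, events=switching_surface,
                    rtol=1e-8, atol=1e-10, max_step=0.01)
    hit = sol.t_events[0]
    return sol, (hit[0] if hit.size else None)   # transition time, if the surface was reached

sol, t_switch = integrate_until_switch([0.0, 1.0])
# At t_switch, the paper's first algorithm would continue on the surface with the
# index-2 DAE describing the sliding motion (when the sliding condition holds).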

Read more
Optimization And Control

Alternating Direction Method of Multipliers for Quantization

Quantization of the parameters of machine learning models, such as deep neural networks, requires solving constrained optimization problems, where the constraint set is formed by the Cartesian product of many simple discrete sets. For such optimization problems, we study the performance of the Alternating Direction Method of Multipliers for Quantization (ADMM-Q) algorithm, which is a variant of the widely used ADMM method applied to our discrete optimization problem. We establish the convergence of the iterates of ADMM-Q to certain stationary points. To the best of our knowledge, this is the first analysis of an ADMM-type method for problems with discrete variables/constraints. Based on our theoretical insights, we develop a few variants of ADMM-Q that can handle inexact update rules and have improved performance via the use of "soft projection" and "injecting randomness into the algorithm". We empirically evaluate the efficacy of our proposed approaches.
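
To show the algorithmic skeleton concretely, here is a minimal ADMM-style quantization loop for a least-squares objective with a fixed scalar codebook: the x-update is an unconstrained ridge-type solve, the z-update projects onto the discrete set (nearest codeword), and the dual variable accumulates the mismatch. This is a plain illustration of the iteration pattern, not the paper's ADMM-Q variants with soft projection or injected randomness; names and parameters are illustrative.

import numpy as np

def project_to_codebook(v, codebook):
    # Nearest-codeword projection, applied elementwise.
    idx = np.abs(v[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook[idx]

def admm_quantize(A, b, codebook, rho=1.0, iters=200):
    # min 0.5 * ||A x - b||^2  subject to  x in codebook^n (componentwise).
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)                     # quantized copy of x
    u = np.zeros(n)                     # scaled dual variable
    M = A.T @ A + rho * np.eye(n)       # system matrix reused every iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))    # x-update: ridge-type solve
        z = project_to_codebook(x + u, codebook)       # z-update: projection onto the discrete set
        u = u + x - z                                  # dual update
    return z

codebook = np.array([-1.0, 0.0, 1.0])
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_q = admm_quantize(A, A @ rng.choice(codebook, 10), codebook)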

Read more
