Featured Researches

Optimization And Control

A Sequential Learning Algorithm for Probabilistically Robust Controller Tuning

In this paper, we introduce a sequential learning algorithm to address a probabilistically robust controller tuning problem. The algorithm leverages ideas from the areas of randomised algorithms and ordinal optimisation, which have both been proposed to find approximate solutions for difficult design problems in control. We formally prove that our algorithm yields a controller which meets a specified probabilistic performance specification, assuming a Gaussian or near-Gaussian copula model for the controller performances. Additionally, we characterise the computational requirements of the algorithm via a lower bound on the distribution function of the algorithm's stopping time. To validate our work, the algorithm is then demonstrated for the purpose of tuning model predictive controllers on a diesel engine air-path. It is shown that the algorithm can successfully tune a single controller to meet a desired performance threshold, even in the presence of the uncertainty in the diesel engine model that is inherent when a single representation is used across a fleet of vehicles.
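
To make the randomised-algorithms flavour of this approach concrete, here is a minimal Monte Carlo sketch. The `performance` function, the candidate set, and the naive empirical stopping rule are all hypothetical stand-ins, not the paper's copula-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(theta, scenario):
    # Hypothetical closed-loop cost of controller parameters `theta`
    # on a sampled plant `scenario` (e.g. one engine-model draw).
    return float(np.sum((theta - scenario) ** 2))

def sequential_tune(candidates, sample_scenario, threshold,
                    target_prob=0.9, batch=50, max_rounds=100):
    # Draw scenarios in batches until some candidate empirically meets
    # `performance <= threshold` with probability >= target_prob.
    successes = np.zeros(len(candidates))
    trials = 0
    for _ in range(max_rounds):
        for _ in range(batch):
            s = sample_scenario()
            trials += 1
            for i, theta in enumerate(candidates):
                successes[i] += performance(theta, s) <= threshold
        probs = successes / trials
        best = int(np.argmax(probs))
        if probs[best] >= target_prob:
            break
    return candidates[best], probs[best], trials

candidates = [rng.normal(size=2) for _ in range(20)]
theta, p_hat, n = sequential_tune(
    candidates, lambda: rng.normal(scale=0.3, size=2), threshold=1.0)
print(p_hat, n)
```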

Optimization And Control

A Solution for Large Scale Optimization Problems Based on Gravitational Search Algorithm

One of the challenges in optimizing high-dimensional problems is finding solutions that are as close as possible to the global optimum. A common obstacle in this regard is the curse of dimensionality, in which a large-scale feature space generates more parameters that need to be estimated. Heuristic algorithms, such as the Gravitational Search Algorithm (GSA), are among the tools proposed for optimizing large-scale problems, but on their own they cannot solve such problems effectively. This paper proposes a novel method for optimizing large-scale problems by improving the gravitational search algorithm's performance. To increase the efficiency of the gravitational search algorithm on large-scale problems, the proposed method combines it with cooperative co-evolution methods. We evaluate the performance of the proposed algorithm in two ways. In the first, the proposed algorithm is compared with the original gravitational search algorithm; in the second, it is compared with some of the most significant work in this field. Under the first comparison, our method improves the performance of the original gravitational search algorithm on large-scale problems; under the second, the results indicate more favorable performance on some benchmark functions compared with other cooperative methods.
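
The cooperative co-evolution ingredient can be sketched as follows: split the decision vector into low-dimensional blocks and optimize each block against a shared context vector. In the sketch below a crude random perturbation search stands in for GSA's gravity-based update, and the objective and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Simple benchmark objective; work in this area typically uses
    # CEC-style large-scale test functions instead.
    return float(np.sum(x ** 2))

def subcomponent_search(f, context, idx, iters=30, agents=10):
    # Stand-in for one GSA run restricted to dimensions `idx`,
    # evaluated by plugging the subcomponent into the full context
    # vector (the core cooperative co-evolution trick).
    pop = rng.normal(size=(agents, len(idx)))
    best, best_val = pop[0], np.inf
    for _ in range(iters):
        for a in pop:
            trial = context.copy()
            trial[idx] = a
            v = f(trial)
            if v < best_val:
                best, best_val = a.copy(), v
        # Crude random perturbation in place of GSA's gravity-based moves.
        pop = best + rng.normal(scale=0.1, size=pop.shape)
    return best

dim, groups = 100, 10
context = rng.normal(size=dim)
blocks = np.array_split(np.arange(dim), groups)
for _ in range(20):              # cooperative rounds
    for idx in blocks:           # optimize each block in turn
        context[idx] = subcomponent_search(sphere, context, idx)
print(sphere(context))
```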

Optimization And Control

A Stochastic Multi-Agent Optimization Framework for Interdependent Transportation and Power System Analyses

We study the interdependence between transportation and power systems considering decentralized renewable generators and electric vehicles (EVs). We formulate the problem in a stochastic multi-agent optimization framework considering the complex interactions between EV/conventional vehicle drivers, renewable/conventional generators, and independent system operators, with locational electricity and charging prices endogenously determined by markets. We show that the multi-agent optimization problems can be reformulated as a single convex optimization problem and prove the existence and uniqueness of the equilibrium. To cope with the curse of dimensionality, we propose an ADMM-based decomposition algorithm to facilitate parallel computing. Numerical insights are generated using standard test systems from the transportation and power systems literature.
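
The decomposition pattern can be illustrated with a minimal consensus-ADMM loop on a toy problem, with scalar quadratic agent objectives standing in for the paper's market equilibrium model; the point is that the local updates decouple across agents and can run in parallel:

```python
import numpy as np

# Minimal consensus-ADMM sketch: each "agent" i minimizes
# f_i(x) = 0.5 * (x - a_i)^2 subject to all agents agreeing on x.
# This is only a toy stand-in for the paper's multi-agent problem.
a = np.array([1.0, 3.0, 5.0])   # hypothetical agent data
rho = 1.0                        # ADMM penalty parameter
x = np.zeros(3)                  # local copies
z = 0.0                          # consensus variable
u = np.zeros(3)                  # scaled dual variables

for _ in range(100):
    # Local updates decouple across agents (parallelizable).
    x = (a + rho * (z - u)) / (1.0 + rho)
    z = np.mean(x + u)           # consensus (averaging) step
    u = u + x - z                # dual ascent step

print(z)  # converges to mean(a) = 3.0
```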

Optimization And Control

A Sublevel Moment-SOS Hierarchy for Polynomial Optimization

We introduce a sublevel Moment-SOS hierarchy where each SDP relaxation can be viewed as an intermediate (or interpolation) between the d-th and (d+1)-th order SDP relaxations of the Moment-SOS hierarchy (dense or sparse version). By flexibly choosing the size (level) and number (depth) of the subsets in the SDP relaxation, one can obtain different improvements over the d-th order relaxation, depending on the available machine memory. In particular, we provide numerical experiments for d=1 and various types of problems both in combinatorial optimization (Max-Cut, Mixed Integer Programming) and deep learning (robustness certification, Lipschitz constant of neural networks), where the standard Lasserre relaxation (or its sparse variant) is computationally intractable. In our numerical results, the lower bounds from the sublevel relaxations improve on the bound from Shor's relaxation (the first-order Lasserre relaxation) and are significantly closer to the optimal value or to the best-known lower/upper bounds.
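
As a point of reference for what the sublevel relaxations tighten, the sketch below solves the Shor relaxation of Max-Cut on a toy graph. It uses cvxpy as an assumed modeling tool (the paper does not prescribe one), and the weight matrix is made up for illustration:

```python
import cvxpy as cp
import numpy as np

# Shor / first-order Lasserre relaxation of Max-Cut on a toy graph.
# The sublevel hierarchy adds partial higher-order moment
# constraints on top of exactly this kind of SDP.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # edge weights
n = W.shape[0]

X = cp.Variable((n, n), PSD=True)            # moment matrix surrogate
constraints = [cp.diag(X) == 1]              # relaxes x_i^2 = 1
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
cp.Problem(objective, constraints).solve()

print(objective.value)  # an upper bound on the Max-Cut value
```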

Optimization And Control

A Time-Inconsistent Dynkin Game: from Intra-personal to Inter-personal Equilibria

This paper studies a nonzero-sum Dynkin game in discrete time under non-exponential discounting. For both players, two levels of game-theoretic reasoning are intertwined. First, each player looks for an intra-personal equilibrium among her current and future selves, so as to resolve the time inconsistency triggered by non-exponential discounting. Next, given the other player's chosen stopping policy, each player selects a best response among her intra-personal equilibria. A resulting inter-personal equilibrium is then a Nash equilibrium between the two players, each of whom employs her best intra-personal equilibrium with respect to the other player's stopping policy. Under appropriate conditions, we show that an inter-personal equilibrium exists, based on concrete iterative procedures along with Zorn's lemma. To illustrate our theoretical results, we investigate a two-player real options valuation problem: two firms negotiate a deal of cooperation to jointly initiate a project. By deriving inter-personal equilibria explicitly, we find that coercive power in negotiation depends crucially on the impatience levels of the two firms.
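
The intra-personal layer can be made concrete with a toy one-player stopping problem: iterate the map that recomputes the stopping region against the behaviour of future selves until it stabilizes. The sketch below is a hypothetical single-agent stand-in (state space, payoff, and discount curve are all invented for illustration), not the paper's two-player construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stopping problem illustrating the intra-personal fixed-point
# iteration; the paper treats the two-player Dynkin game, of which
# this shows only the single-agent building block.
N = 10                               # states 0..N, reflecting walk
g = np.arange(N + 1, dtype=float)    # stop payoff g(s) = s
delta = lambda t: 1.0 / (1.0 + 0.5 * t)   # hyperbolic discounting

def continuation_value(s, region, sims=500, horizon=100):
    # Monte Carlo value of continuing one step from s, assuming all
    # future selves stop on `region` (a boolean array over states).
    total = 0.0
    for _ in range(sims):
        x, t = s, 0
        while t < horizon:
            x = min(max(x + rng.choice([-1, 1]), 0), N)
            t += 1
            if region[x]:
                break
        total += delta(t) * g[x]
    return total / sims

region = np.zeros(N + 1, dtype=bool)
region[N] = True                      # start from "stop only at the top"
for _ in range(10):                   # iterate toward a fixed point
    new = np.array([g[s] >= continuation_value(s, region)
                    for s in range(N + 1)])
    if np.array_equal(new, region):
        break
    region = new
print(np.where(region)[0])            # equilibrium stopping states
```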

Optimization And Control

A Variational Formulation of Accelerated Optimization on Riemannian Manifolds

It was shown recently by Su et al. (2016) that Nesterov's accelerated gradient method for minimizing a smooth convex function f can be thought of as the time discretization of a second-order ODE, and that f(x(t)) converges to its optimal value at a rate of O(1/t^2) along any trajectory x(t) of this ODE. A variational formulation was introduced in Wibisono et al. (2016) which allowed for accelerated convergence at a rate of O(1/t^p), for arbitrary p > 0, in normed vector spaces. This framework was exploited in Duruisseaux et al. (2020) to design efficient explicit algorithms for symplectic accelerated optimization. In Alimisis et al. (2020), a second-order ODE was proposed as the continuous-time limit of a Riemannian accelerated algorithm, and it was shown that the objective function f(x(t)) converges to its optimal value at a rate of O(1/t^2) along solutions of this ODE. In this paper, we show that on Riemannian manifolds, the convergence rate of f(x(t)) to its optimal value can also be accelerated to an arbitrary convergence rate O(1/t^p), by considering a family of time-dependent Bregman Lagrangian and Hamiltonian systems on Riemannian manifolds. This generalizes the results of Wibisono et al. (2016) to Riemannian manifolds and also provides a variational framework for accelerated optimization on Riemannian manifolds. An approach based on the time-invariance property of the family of Bregman Lagrangians and Hamiltonians was used to construct very efficient optimization algorithms in Duruisseaux et al. (2020), and we establish a similar time-invariance property in the Riemannian setting. One expects that a geometric numerical integrator that is time-adaptive, symplectic, and Riemannian manifold preserving will yield a class of promising optimization algorithms on manifolds.
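
For reference, in the normed-vector-space setting the time-dependent Bregman Lagrangian of Wibisono et al. (2016) takes the form below; the paper's contribution is a Riemannian analogue, with the Bregman divergence D_h replaced by a manifold counterpart:

```latex
% Time-dependent Bregman Lagrangian (Wibisono et al., 2016),
% normed-space version; h is a convex distance-generating function.
\mathcal{L}_{\alpha,\beta,\gamma}(x, v, t)
  = e^{\alpha_t + \gamma_t}
    \Bigl( D_h\bigl(x + e^{-\alpha_t} v,\, x\bigr) - e^{\beta_t} f(x) \Bigr),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
% subject to the ideal scaling conditions
\dot{\beta}_t \le e^{\alpha_t},
\qquad
\dot{\gamma}_t = e^{\alpha_t}.
```

The polynomial subfamily \alpha_t = \log p - \log t, \beta_t = p \log t + \log C, \gamma_t = p \log t then yields the O(1/t^p) rate in that setting.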

Optimization And Control

A Warped Resolvent Algorithm to Construct Nash Equilibria

We propose an asynchronous block-iterative decomposition algorithm to solve Nash equilibrium problems involving a mix of nonsmooth and smooth functions acting on linear mixtures of strategies. The methodology relies heavily on monotone operator theory and in particular on warped resolvents.
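
For readers unfamiliar with the term, a warped resolvent generalizes the classical resolvent by composing with a kernel. Schematically, for a set-valued monotone operator A and a suitable kernel K (the precise admissibility conditions are those of the paper):

```latex
J^{K}_{A} = (K + A)^{-1} \circ K,
\qquad
K = \mathrm{Id} \;\Rightarrow\;
J^{\mathrm{Id}}_{A} = (\mathrm{Id} + A)^{-1} = J_{A},
```

so the usual resolvent and proximal iterations are recovered as the special case K = Id.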

Optimization And Control

A Zero-Sum Deterministic Impulse Controls Game in Infinite Horizon with a New HJBI QVI

In the present paper, we study a two-player zero-sum deterministic differential game in which both players adopt impulse controls, over an infinite time horizon, under rather weak assumptions on the cost functions. We prove by means of the dynamic programming principle (DPP) that the lower and upper value functions are continuous and are viscosity solutions to the corresponding Hamilton-Jacobi-Bellman-Isaacs (HJBI) quasi-variational inequality (QVI). We define a new HJBI QVI for which, under a proportional-property assumption on the maximizer's cost, the value functions are the unique viscosity solution. We then prove that the lower and upper value functions coincide.

Optimization And Control

A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale Black-Box Optimization

We consider the zeroth-order optimization problem in the huge-scale setting, where the dimension of the problem is so large that performing even basic vector operations on the decision variables is infeasible. In this paper, we propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query complexity and a much smaller per-iteration computational complexity. In addition, we discuss how the memory footprint of ZO-BCD can be reduced even further by the clever use of circulant measurement matrices. As an application of our new method, we propose crafting adversarial attacks on neural-network-based classifiers in a wavelet domain, which can result in problem dimensions of over 1.7 million. In particular, we show that crafting adversarial examples against audio classifiers in a wavelet domain can achieve a state-of-the-art attack success rate of 97.9%.
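
Below is a minimal sketch of the zeroth-order block-coordinate idea, assuming a generic black-box objective: each iteration queries the function only along random directions inside one randomly chosen coordinate block, so the vector work scales with the block size. The sparse sampling and circulant measurement matrices of ZO-BCD proper are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def zo_bcd(f, x, n_blocks=20, n_dirs=5, mu=1e-4, lr=0.1, iters=500):
    # Each iteration touches one coordinate block only, so the
    # per-iteration vector work scales with the block size rather
    # than the full dimension.
    blocks = np.array_split(np.arange(x.size), n_blocks)
    fx = f(x)
    for _ in range(iters):
        idx = blocks[rng.integers(n_blocks)]       # random block
        grad = np.zeros(idx.size)
        for _ in range(n_dirs):
            u = rng.standard_normal(idx.size)
            u /= np.linalg.norm(u)
            xp = x.copy()
            xp[idx] += mu * u
            grad += (f(xp) - fx) / mu * u          # finite differences
        x[idx] -= lr * grad / n_dirs               # block update
        fx = f(x)
    return x

f = lambda z: float(np.sum(z ** 2))                # toy black-box objective
x = zo_bcd(f, rng.standard_normal(1000))
print(f(x))
```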

Optimization And Control

A dependence of the cost of fast controls for the heat equation on the support of initial datum

It is well known that, as the control time T goes to 0, the controllability cost for the heat equation is of order e^{C/T} for some positive constant C depending on the controlled domain, uniformly with respect to the initial datum. In this paper, we prove that the constant C can be chosen arbitrarily small if the support of the initial datum is sufficiently close to the controlled domain, though not necessarily inside it. The proof is in the spirit of Lebeau and Robbiano's approach, in which a new spectral inequality is established. The main ingredient of the proof of the new spectral inequality is a set of three-sphere inequalities with partial data.
