Featured Researches

Optimization And Control

Improved Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandit

We consider the β-smooth stochastic convex optimization problem (satisfying the generalized Hölder condition with parameter β > 2) with a zero-order one-point oracle. The best known result, from arXiv:2006.07862, was E[f(x̄_N) − f(x*)] = Õ( n^2 / (γ N^{(β−1)/β}) ) in the γ-strongly convex case, where n is the dimension. In this paper we improve this bound to E[f(x̄_N) − f(x*)] = Õ( n^{2−1/β} / (γ N^{(β−1)/β}) ).
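
As a rough illustration of the zero-order one-point setting only (not the kernel-smoothed estimator or step-size schedule analyzed in the paper), the sketch below runs a generic projected one-point method on a toy strongly convex quadratic; the ball radius, noise level, smoothing parameter, and iteration count are arbitrary assumptions.

```python
import numpy as np

def one_point_grad(f, x, tau, rng):
    """Generic one-point zero-order gradient estimate
    g = (n / tau) * f(x + tau * u) * u, with u uniform on the unit sphere.
    (The paper's estimator also uses a smoothing kernel to exploit
    beta-smoothness; that refinement is omitted in this sketch.)"""
    n = x.size
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    return (n / tau) * f(x + tau * u) * u

def project_ball(x, radius=2.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 3.0])                    # gamma = 1 (smallest eigenvalue)
f = lambda x: 0.5 * x @ A @ x                   # true objective, minimizer at 0
f_noisy = lambda x: f(x) + 0.01 * rng.standard_normal()

x, x_bar = np.ones(3), np.zeros(3)
N = 20000
for k in range(1, N + 1):
    g = one_point_grad(f_noisy, x, tau=0.05, rng=rng)
    x = project_ball(x - g / k)                 # step ~ 1/(gamma * k)
    x_bar += (x - x_bar) / k                    # running average iterate
print("f(x_bar) =", f(x_bar))                   # should be small after N queries
```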

Read more
Optimization And Control

Inadequacy of Linear Methods for Minimal Sensor Placement and Feature Selection in Nonlinear Systems; a New Approach Using Secants

Sensor placement and feature selection are critical steps in engineering, modeling, and data science that share a common mathematical theme: the selected measurements should enable solution of an inverse problem. Most real-world systems of interest are nonlinear, yet the majority of available techniques for feature selection and sensor placement rely on assumptions of linearity or simple statistical models. We show that when these assumptions are violated, standard techniques can lead to costly over-sensing without guaranteeing that the desired information can be recovered from the measurements. In order to remedy these problems, we introduce a novel data-driven approach for sensor placement and feature selection for a general type of nonlinear inverse problem based on the information contained in secant vectors between data points. Using the secant-based approach, we develop three efficient greedy algorithms that each provide different types of robust, near-minimal reconstruction guarantees. We demonstrate them on two problems where linear techniques consistently fail: sensor placement to reconstruct a fluid flow formed by a complicated shock-mixing layer interaction and selecting fundamental manifold learning coordinates on a torus.
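
To make the secant idea concrete, here is a toy greedy coordinate-selection sketch driven by normalized secants between data points; it is only schematic and does not reproduce the paper's three algorithms or their reconstruction guarantees, and the epsilon threshold and synthetic data are arbitrary choices.

```python
import numpy as np

def greedy_secant_sensors(Y, epsilon):
    """Toy secant-based sensor selection: greedily add measurement coordinates
    until every (normalized) secant between data points keeps at least a
    fraction `epsilon` of its length when restricted to the chosen coordinates.
    A schematic version of the secant idea only, not the paper's algorithms."""
    m, n = Y.shape
    i, j = np.triu_indices(m, k=1)
    secants = Y[i] - Y[j]
    secants /= np.linalg.norm(secants, axis=1, keepdims=True)

    chosen = []
    while not chosen or np.linalg.norm(secants[:, chosen], axis=1).min() < epsilon:
        best_c, best_val = None, -1.0
        for c in range(n):
            if c in chosen:
                continue
            worst = np.linalg.norm(secants[:, chosen + [c]], axis=1).min()
            if worst > best_val:
                best_c, best_val = c, worst
        chosen.append(best_c)
    return chosen

# Example: a circle embedded in coordinates 1 and 3 of a 5-D space.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, size=20)
Y = np.zeros((20, 5))
Y[:, 1], Y[:, 3] = np.cos(t), np.sin(t)
print("selected sensors:", greedy_secant_sensors(Y, epsilon=0.5))  # expect {1, 3}
```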

Read more
Optimization And Control

Indices of equilibrium points of linear control systems with saturated state feedback

In this paper we investigate some properties of equilibrium points of n-dimensional linear control systems with saturated state feedback. We provide an index formula for equilibrium points and discuss its relation to the boundaries of attraction basins in feedback systems with a single input. In addition, we touch upon the convexity of attraction basins.
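
As a small illustration of the objects involved (not the paper's index formula), the sketch below takes an arbitrary planar single-input system with saturated state feedback, locates its three equilibria, and computes the Poincaré-Hopf index of each nondegenerate equilibrium as the sign of the Jacobian determinant of the vector field.

```python
import numpy as np

# Planar single-input linear system with saturated state feedback:
#     xdot = A x + b * sat(k @ x),   sat(s) = clip(s, -1, 1).
# All numbers below are arbitrary illustrative choices.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
b = np.array([0.0, 1.0])
k = np.array([-4.0, -1.0])

def field(x):
    return A @ x + b * np.clip(k @ x, -1.0, 1.0)

def jacobian(x):
    # Piecewise-linear vector field: the Jacobian is A + b k^T in the linear
    # region |k @ x| < 1 and A in either saturated region.
    return A + np.outer(b, k) if abs(k @ x) < 1.0 else A

def index(x):
    # Poincare-Hopf index of a nondegenerate equilibrium: sign(det DF(x)).
    return int(np.sign(np.linalg.det(jacobian(x))))

# Candidate equilibria: the origin (linear region) and A x = -/+ b (saturated regions).
equilibria = [np.zeros(2), -np.linalg.solve(A, b), np.linalg.solve(A, b)]
for x in equilibria:
    assert np.allclose(field(x), 0.0), "not an equilibrium"
    print(f"equilibrium {x}, index {index(x)}")
```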

Read more
Optimization And Control

Inexact gradient projection method with relative error tolerance

A gradient projection method with feasible inexact projections is proposed in the present paper. The inexact projection is performed using a relative error tolerance. Asymptotic convergence analysis and iteration-complexity bounds of the method employing constant and Armijo step sizes are presented. Numerical results are reported illustrating the potential advantages of considering inexact projections instead of exact ones in some medium-scale instances of a least squares problem over the spectrahedron.
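
A minimal sketch in the spirit of the method, assuming a synthetic least-squares objective over the spectrahedron: the projection subproblem is solved only approximately by a Frank-Wolfe inner loop, stopped by a relative-error rule tied to the step length. The stopping rule, step sizes, and data are assumptions for illustration, not the paper's exact criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 30

# Least-squares objective over the spectrahedron {X PSD, trace X = 1}:
#     f(X) = 0.5 * sum_i (<C_i, X> - y_i)^2     (random synthetic data).
C = np.array([(B + B.T) / 2 for B in rng.standard_normal((m, n, n))])
X_true = np.diag(rng.dirichlet(np.ones(n)))          # a feasible "ground truth"
y = np.einsum("ijk,jk->i", C, X_true)

def grad_f(X):
    r = np.einsum("ijk,jk->i", C, X) - y
    return np.einsum("i,ijk->jk", r, C)

def inexact_projection(Z, ref_dist, rel_tol=0.1, max_inner=200):
    """Feasible, inexact projection of the symmetric matrix Z onto the
    spectrahedron via Frank-Wolfe, stopped once the FW gap (an upper bound
    on the projection suboptimality) drops below rel_tol * ref_dist**2.
    The relative stopping rule is a stand-in for the paper's criterion."""
    W = np.eye(len(Z)) / len(Z)                      # feasible starting point
    for _ in range(max_inner):
        G = W - Z                                    # gradient of 0.5 * ||W - Z||^2
        _, V = np.linalg.eigh(G)
        vertex = np.outer(V[:, 0], V[:, 0])          # minimizes <G, V> over the spectrahedron
        D = W - vertex
        gap = np.sum(G * D)
        if gap <= rel_tol * ref_dist ** 2:
            break
        step = np.clip(np.sum(D * G) / np.sum(D * D), 0.0, 1.0)  # exact line search
        W = W - step * D
    return W

# Gradient projection with constant step 1/L and feasible inexact projections.
L = sum(np.sum(Ci * Ci) for Ci in C)                 # Lipschitz constant of grad_f
X = np.eye(n) / n
for k in range(200):
    g = grad_f(X)
    X = inexact_projection(X - g / L, ref_dist=np.linalg.norm(g) / L)
print("final objective:", 0.5 * np.sum((np.einsum("ijk,jk->i", C, X) - y) ** 2))
```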

Read more
Optimization And Control

Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming

We study the problem of detecting infeasibility of large-scale linear programming problems using the primal-dual hybrid gradient method (PDHG) of Chambolle and Pock (2011). The literature on PDHG has mostly focused on settings where the problem at hand is assumed to be feasible. When the problem is not feasible, the iterates of the algorithm do not converge. In this scenario, we show that the iterates diverge at a controlled rate towards a well-defined ray. The direction of this ray is known as the infimal displacement vector v. The first contribution of our work is to prove that this vector recovers certificates of primal and dual infeasibility whenever they exist. Based on this fact, we propose a simple way to extract approximate infeasibility certificates from the iterates of PDHG. We study three different sequences that converge to the infimal displacement vector: the difference of iterates, the normalized iterates, and the normalized average. All of them are easy to compute, and thus the approach is suitable for large-scale problems. Our second contribution is to establish tight convergence rates for these sequences. We demonstrate that the normalized iterates and the normalized average achieve a convergence rate of O(1/k), improving over the known rate of O(1/√k). This rate is general and applies to any fixed-point iteration of a nonexpansive operator. Thus, it is a result of independent interest, since it covers a broad family of algorithms, including, for example, ADMM, and can be applied to settings beyond linear programming, such as quadratic and semidefinite programming. Further, in the case of linear programming we show that, under nondegeneracy assumptions, the iterates of PDHG identify the active set of an auxiliary feasible problem in finite time, which ensures that the difference of iterates exhibits eventual linear convergence to the infimal displacement vector.
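
A toy illustration of the idea, not a production implementation: PDHG applied to a small, deliberately infeasible equality-form LP, printing the three sequences mentioned above (difference of iterates, normalized iterate, normalized average), which all approach the infimal displacement vector. The data, step sizes, and iteration count are arbitrary assumptions.

```python
import numpy as np

# Toy infeasible LP in equality form:  min c'x  s.t.  A x = b,  x >= 0.
# The two constraints force x1 = 0 and x1 = 1 simultaneously, so the problem
# is primal infeasible (the dual is feasible but unbounded).  Data, step
# sizes, and the iteration count are illustrative choices only.
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 1.0])

tau = sigma = 0.9 / np.linalg.norm(A, 2)        # tau * sigma * ||A||^2 < 1

x, y = np.zeros(2), np.zeros(2)
y_sum = np.zeros(2)
K = 5000
for k in range(1, K + 1):
    x_prev, y_prev = x, y
    x = np.maximum(x_prev - tau * (c + A.T @ y_prev), 0.0)   # primal step
    y = y_prev + sigma * (A @ (2 * x - x_prev) - b)          # dual step (extrapolated primal)
    y_sum += y

# Three sequences approaching the infimal displacement vector (dual part shown;
# the primal part is zero here since only primal infeasibility occurs).  Up to
# sign and scaling, the dual component acts as a Farkas-type certificate: for
# this instance A' v_y = 0 while b' v_y != 0.
print("primal difference (~0) :", x - x_prev)
print("difference of iterates :", y - y_prev)
print("normalized iterate     :", y / K)
print("normalized average     :", 2 * y_sum / (K * (K + 1)))
```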

Read more
Optimization And Control

Instability of Martingale optimal transport in dimension d ≥ 2

Stability of the value function and the set of minimizers w.r.t. the given data is a desirable feature of optimal transport problems. For the classical Kantorovich transport problem, stability is satisfied under mild assumptions and in general frameworks such as the one of Polish spaces. However, for the martingale transport problem several works based on different strategies established stability results for ℝ only. We show that the restriction to dimension d = 1 is not accidental by presenting a sequence of marginal distributions on ℝ² for which the martingale optimal transport problem is neither stable w.r.t. the value nor the set of minimizers. Our construction adapts to any dimension d ≥ 2. For d ≥ 2 it also provides a contradiction to the martingale Wasserstein inequality established by Jourdain and Margheriti in d = 1.

Read more
Optimization And Control

Invariants of linear control systems with analytic matrices and the linearizability problem

The paper continues the authors' study of the linearizability problem for nonlinear control systems. In the recent work [K. Sklyar, Systems Control Lett. 134 (2019), 104572], conditions on mappability of a nonlinear control system to a preassigned linear system with analytic matrices were obtained. In the present paper we solve the more general problem of finding linearizability conditions without indicating a target linear system. To this end, we give a description of invariants for linear non-autonomous single-input controllable systems with analytic matrices, which allow classifying such systems up to transformations of coordinates. This study leads to a problem from the theory of linear ordinary differential equations with meromorphic coefficients. As a result, we obtain a criterion for mappability of nonlinear control systems to linear control systems with analytic matrices.

Read more
Optimization And Control

Iterated Greedy Algorithms for a Complex Parallel Machine Scheduling Problem

This paper addresses a complex parallel machine scheduling problem with jobs divided into operations and operations grouped into families. Non-anticipatory family setup times are incurred at the beginning of each batch, where a batch is defined by the combination of one setup time and a sequence of operations from a single family. Other aspects are also considered in the problem, such as release dates for operations and machines, operation sizes, and machine eligibility and capacity. We consider item availability to define the completion times of the operations within the batches, and the objective is to minimize the total weighted completion time of jobs. We develop Iterated Greedy (IG) algorithms combining destroy and repair operators with a Random Variable Neighborhood Descent (RVND) local search procedure that uses four neighborhood structures. The best algorithm variant outperforms the current literature methods for the problem, in terms of average deviation from the best solutions and computational times, on a known benchmark set of 72 instances. New upper bounds are also provided for some instances within this set. In addition, computational experiments are conducted to evaluate the proposed methods' performance on a more challenging set of instances introduced in this work. Two IG variants using a greedy repair operator showed superior performance, with more than 70% of the best solutions found uniquely by these variants. Despite its simplicity, the method using the most common destruction and repair operators presented the best results across the evaluated criteria, highlighting its potential and applicability in solving a complex machine scheduling problem.
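
For readers unfamiliar with the metaheuristic, here is a generic Iterated Greedy skeleton (destroy, greedy repair, local search, acceptance) on a toy single-machine weighted-completion-time problem; the operators, the swap-based local search standing in for RVND, and the improvement-only acceptance rule are textbook simplifications, not the paper's components or its batching model.

```python
import random

# Generic Iterated Greedy skeleton on a toy single-machine problem:
# minimize total weighted completion time of jobs (p = processing time, w = weight).
random.seed(0)
jobs = [(random.randint(1, 20), random.randint(1, 10)) for _ in range(20)]  # (p, w)

def cost(seq):
    t, total = 0, 0
    for j in seq:
        t += jobs[j][0]
        total += jobs[j][1] * t
    return total

def repair(partial, removed):
    # Greedy reinsertion: place each removed job at its best position.
    seq = list(partial)
    for j in removed:
        best = min(range(len(seq) + 1), key=lambda i: cost(seq[:i] + [j] + seq[i:]))
        seq.insert(best, j)
    return seq

def local_search(seq):
    # Simple first-improvement swap neighborhood (stand-in for RVND).
    improved = True
    while improved:
        improved = False
        for a in range(len(seq) - 1):
            for b in range(a + 1, len(seq)):
                cand = seq[:]
                cand[a], cand[b] = cand[b], cand[a]
                if cost(cand) < cost(seq):
                    seq, improved = cand, True
    return seq

best = local_search(repair([], list(range(len(jobs)))))
for _ in range(30):                       # iterated greedy main loop
    removed = random.sample(best, k=4)    # destroy
    partial = [j for j in best if j not in removed]
    cand = local_search(repair(partial, removed))
    if cost(cand) < cost(best):           # accept only improvements
        best = cand
print("best total weighted completion time:", cost(best))
```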

Read more
Optimization And Control

Iterative Rational Krylov Algorithms for model reduction of a class of constrained structural dynamic systems with engineering applications

This paper discusses model order reduction of large sparse second-order index-3 differential-algebraic equations (DAEs) by applying the Iterative Rational Krylov Algorithm (IRKA). In general, such DAEs arise in constrained mechanics, multibody dynamics, mechatronics, and many other branches of science and technology. By deflecting the algebraic equations, the second-order index-3 system can be converted into an equivalent standard second-order system. This can be done by projecting the system onto the null space of the constraint matrix. However, forming the projector is computationally expensive and creates a major bottleneck in the implementation. This paper shows how to find a reduced-order model without projecting the system onto the null space of the constraint matrix explicitly. To demonstrate the efficiency of the theoretical results, we apply them to several second-order index-3 models, and the experimental results are discussed in the paper.
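
For orientation, the sketch below is a plain first-order SISO IRKA loop on a synthetic symmetric system (chosen so the shifts stay real); it does not implement the paper's treatment of second-order index-3 DAEs or its projector-free handling of the constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 8

# Symmetric, stable first-order SISO test system  E x' = A x + b u,  y = b' x.
# This is only a basic IRKA sketch with synthetic data.
R = rng.standard_normal((n, n))
A = -(R @ R.T + n * np.eye(n))        # symmetric negative definite => real, stable poles
E = np.eye(n)
b = rng.standard_normal(n)

shifts = np.logspace(0, 2, r)         # initial interpolation points
for it in range(100):
    # Rational Krylov basis; W = V because the system is symmetric (c = b').
    V = np.column_stack([np.linalg.solve(s * E - A, b) for s in shifts])
    V, _ = np.linalg.qr(V)
    Er, Ar = V.T @ E @ V, V.T @ A @ V
    # IRKA update: reflect the reduced poles across the imaginary axis.
    new_shifts = np.sort(-np.linalg.eigvals(np.linalg.solve(Er, Ar)).real)
    change = np.max(np.abs(new_shifts - shifts) / shifts)
    shifts = new_shifts
    if change < 1e-8:
        break

# Compare the full and reduced transfer functions at a sample frequency.
br = V.T @ b
s0 = 2.0j
H_full = b @ np.linalg.solve(s0 * E - A, b)
H_red = br @ np.linalg.solve(s0 * Er - Ar, br)
print(f"IRKA stopped after {it + 1} iterations")
print("H(2i) full:", H_full, " reduced:", H_red)
```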

Read more
Optimization And Control

Joint Continuous and Discrete Model Selection via Submodularity

In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we relate the model selection problem with structure-promoting regularizers to submodular function minimization defined with continuous and discrete arguments. In particular, we leverage submodularity theory to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization. Finally, we numerically validate our theoretical results with several proof-of-concept examples, comparing against state-of-the-art algorithms.
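
To make the problem class concrete (and only that), here is a brute-force toy: a least-squares model selection problem with a cardinality regularizer, solved by enumerating the discrete structure and fitting continuously on each candidate. The paper's submodularity-based approach is aimed at exactly this kind of coupled discrete/continuous problem without exhaustive enumeration; nothing below reproduces its algorithms, and the data and penalty are arbitrary.

```python
import numpy as np
from itertools import combinations

# Toy joint discrete/continuous model selection:
#     minimize_w  0.5 * ||X w - y||^2 + lam * |support(w)|
# solved by brute force: enumerate supports (discrete routine) and solve a
# least-squares problem on each support (continuous routine).
rng = np.random.default_rng(0)
n, d, lam = 50, 8, 2.0
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[1, 4]] = [3.0, -2.0]                     # ground truth uses 2 of 8 features
y = X @ w_true + 0.1 * rng.standard_normal(n)

best_val, best_support, best_w = np.inf, (), np.zeros(d)
for k in range(d + 1):
    for support in combinations(range(d), k):    # discrete choice of structure
        w = np.zeros(d)
        if support:
            cols = list(support)
            sol, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            w[cols] = sol                        # continuous fit on that structure
        val = 0.5 * np.sum((X @ w - y) ** 2) + lam * len(support)
        if val < best_val:
            best_val, best_support, best_w = val, support, w
print("selected features:", best_support)        # expect (1, 4)
print("coefficients:", np.round(best_w, 2))
```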

Read more
