Featured Research

Optimization And Control

Generalized Damped Newton Algorithms in Nonsmooth Optimization with Applications to Lasso Problems

The paper proposes and develops new globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems. These algorithms are based on the theory and calculation of second-order subdifferentials of nonsmooth functions, employing the machinery of second-order variational analysis and generalized differentiation. First we develop a globally superlinearly convergent damped Newton-type algorithm for the class of continuously differentiable functions with Lipschitzian gradients, which are nonsmooth of second order. Then we design such a globally convergent algorithm for a class of nonsmooth convex composite problems with extended-real-valued cost functions, which typically arise in machine learning and statistics. Finally, the obtained algorithmic developments and justifications are applied to solving a major class of Lasso problems with detailed numerical implementations. We present the results of numerical experiments and compare the performance of our main algorithm on Lasso problems with that achieved by other first-order and second-order methods.
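The Lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁ mentioned here can be sketched with the classical first-order ISTA (proximal gradient) baseline, one of the methods such second-order algorithms are typically compared against. This is only an illustrative sketch; the step size and iteration count are arbitrary choices, not the paper's algorithm:

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal-gradient (ISTA) baseline for the Lasso problem
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)            # gradient of the smooth part
        z = x - g / L                    # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x
```

Second-order schemes such as the damped Newton methods above aim to cut the iteration count of loops like this, at the price of computing (generalized) second-order information.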

Generalized Leibniz rules and Lipschitzian stability for expected-integral mappings

This paper is devoted to the study of expected-integral multifunctions of the form EΦ(x) := ∫_T Φ_t(x) dμ, where Φ : T × R^n ⇉ R^m is a set-valued mapping on a measure space (T, A, μ). Such multifunctions appear in applications to stochastic programming, which require developing efficient calculus rules of generalized differentiation. Major calculus rules are developed in this paper for coderivatives of the multifunctions EΦ and for second-order subdifferentials of the corresponding expected-integral functionals, with applications to constraint systems arising in stochastic programming. The paper is self-contained: the preliminaries present the needed results on sequential first-order subdifferential calculus of expected-integral functionals, taken from the first paper of this series.

Generalized Necessary and Sufficient Robust Boundedness Results for Feedback Systems

Classical conditions for ensuring the robust stability of a system in feedback with a nonlinearity include passivity, small gain, circle, and conicity theorems. We present a generalized and unified version of these results in an arbitrary semi-inner product space, which avoids many of the technicalities that arise when working in traditional extended spaces. Our general formulation clarifies when the sufficient conditions for robust stability are also necessary, and we show how to construct worst-case scenarios when the sufficient conditions fail to hold. Finally, we show how our general result can be specialized to recover a wide variety of existing results, and explain how properties such as boundedness, causality, linearity, and time-invariance emerge as a natural consequence.
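As a toy illustration of the small-gain member of this family, the static, finite-dimensional special case is easy to check numerically. The sketch below is only a cartoon of the idea, not the paper's general semi-inner-product formulation:

```python
import numpy as np

def small_gain_bound(G, Delta):
    """Static small-gain toy: for the loop u = G(r + Delta u) with linear
    maps G and Delta, boundedness holds when ||G|| * ||Delta|| < 1, and then
    ||u|| <= ||G|| / (1 - ||G||*||Delta||) * ||r||."""
    g = np.linalg.norm(G, 2)       # spectral norm = worst-case gain of G
    d = np.linalg.norm(Delta, 2)   # worst-case gain of the uncertainty
    if g * d >= 1.0:
        return None                # small-gain condition fails; no bound from this test
    return g / (1.0 - g * d)
```

When the condition fails, the paper shows how to construct worst-case scenarios; in this static toy that corresponds to aligning the input with the singular directions that achieve the norms.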

Generative deep learning for decision making in gas networks

A decision support system relies on frequently re-solving similar problem instances: while the general structure of the problem remains the same in a given application, the input parameters are updated on a regular basis. We propose a generative neural network design for learning the integer decision variables of mixed-integer linear programming (MILP) formulations of these problems. We utilise a deep neural network discriminator and a MILP solver as our oracle to train our generative neural network. In this article, we present the results of our design applied to the transient gas optimisation problem. With the trained network we produce a feasible solution in 2.5s, use it as a warm-start solution, and thereby decrease the time to global optimality by 60.5%.
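The warm-start idea can be illustrated at toy scale. The sketch below is a hypothetical stand-alone branch-and-bound for a 0/1 knapsack, not the gas-network MILP or the solver used in the article; a feasible incumbent supplied up front, as a learned generator would supply it, lets the search prune earlier:

```python
def branch_and_bound(values, weights, cap, incumbent=0.0):
    """Tiny 0/1-knapsack branch-and-bound (a stand-in for a MILP solver).
    `incumbent` is the objective of any known feasible solution -- e.g. one
    predicted by a learned model -- used to prune the search from the start."""
    n = len(values)
    # sort items by value density so the LP-relaxation bound is greedy
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best, nodes = incumbent, 0

    def bound(i, val, room):
        # upper bound: fill the remaining capacity fractionally
        for j in range(i, n):
            if w[j] <= room:
                room -= w[j]
                val += v[j]
            else:
                return val + v[j] * room / w[j]
        return val

    def dfs(i, val, room):
        nonlocal best, nodes
        nodes += 1
        if i == n:
            best = max(best, val)
            return
        if bound(i, val, room) <= best:
            return  # cannot beat the incumbent: prune
        if w[i] <= room:
            dfs(i + 1, val + v[i], room - w[i])  # take item i
        dfs(i + 1, val, room)                    # skip item i

    dfs(0, 0.0, cap)
    return best, nodes
```

Starting from a feasible incumbent never changes the optimum found, but it can only reduce the number of explored nodes, which is the mechanism behind the reported speed-up.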

Geometric Heat Flow Method for Legged Locomotion Planning

We propose in this paper a motion planning method for legged robot locomotion based on the geometric heat flow framework. The motion planning task is challenging due to the hybrid nature of the dynamics and the contact constraints. We encode the hybrid dynamics and constraints into a Riemannian inner product, defined so that short curves correspond to admissible motions for the system. We then rely on the affine geometric heat flow to deform an arbitrary path connecting the desired initial and final states into such an admissible motion. The method automatically finds the trajectory of the robot's center of mass, the feet contact positions, and the contact forces on uneven terrain.

Geometric control of algebraic systems

In this paper, we present a geometric approach for computing controlled invariant sets of a continuous-time control system. While the problem is well studied in the ellipsoidal case, this family of sets is quite conservative for constrained or switched linear systems. We reformulate the invariance of a set as an inequality for its support function that is valid for any convex set. This produces novel algebraic conditions for the invariance of sets with polynomial or piecewise quadratic support functions. We compare our method with the common algebraic approach for polynomial sublevel sets and show that the latter is significantly more conservative.

Geometry of Cascade Feedback Linearizable Control Systems

In this thesis, we provide new insights into the theory of cascade feedback linearization of control systems. In particular, we present a new explicit class of cascade feedback linearizable control systems, as well as a new obstruction to the existence of a cascade feedback linearization for a given invariant control system. These theorems are presented in Chapter 4, where truncated versions of operators from the calculus of variations are introduced and explored to prove these new results. This connection reveals new geometry behind cascade feedback linearization and establishes a foundation for exciting future work on the subject, with important consequences for dynamic feedback linearization.

Global Optimisation in Hilbert Spaces using the Survival of the Fittest Algorithm

Global optimisation problems in high-dimensional and infinite-dimensional spaces arise in various real-world applications such as engineering, economics, geophysics, biology, machine learning, and optimal control. Among stochastic approaches to global optimisation, biology-inspired methods are currently popular in the literature; they imitate natural ecological and evolutionary processes and are reported to be efficient in many practical case studies. However, many bio-inspired methods have vital drawbacks: due to their semi-empirical nature, convergence to the globally optimal solution cannot always be guaranteed, and they often struggle with high-dimensional spaces, showing slow convergence. Here, we present a bio-inspired global stochastic optimisation method applicable in Hilbert function spaces, inspired by Darwin's famous idea of the survival of the fittest and therefore referred to as the `Survival of the Fittest Algorithm' (SoFA). Mathematically, the convergence of SoFA is a consequence of a fundamental localisation property of probability measures in a Hilbert space; we rigorously prove the convergence of the introduced algorithm for a generic class of functionals. As an insightful real-world problem, we apply SoFA to find globally optimal trajectories for the daily vertical migrations of zooplankton in oceans and lakes, considered to be the largest synchronised movement of biomass on Earth. We maximise fitness in a function space derived from a von Foerster stage-structured population model with biologically realistic parameters. We show that for problems of fitness maximisation in high-dimensional spaces, SoFA performs better than some other stochastic global optimisation algorithms. We highlight the links between the new optimisation algorithm and the natural selection process occurring in ecosystems within a population via gradual exclusion of competitive conspecific strains.
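The survival-of-the-fittest idea can be sketched in finite dimensions: sample offspring around fitter individuals with a mutation scale that shrinks over time, so the sampling measure localises around the optimum. This is only an illustrative cartoon with arbitrary weighting and scheduling choices, not the authors' SoFA, which operates in Hilbert function spaces and comes with a rigorous localisation-based convergence proof:

```python
import math
import random

def sofa_sketch(fitness, dim, iters=300, pop=30, seed=0):
    """Illustrative 'survival of the fittest' sampler in R^dim
    (finite-dimensional cartoon of the idea only)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop)]
    champion, champ_f = None, -math.inf
    for t in range(1, iters + 1):
        f = [fitness(p) for p in pts]
        i_best = max(range(pop), key=lambda i: f[i])
        if f[i_best] > champ_f:                      # remember the fittest ever seen
            champion, champ_f = pts[i_best][:], f[i_best]
        # fitness-proportional ("survival") selection weights
        w = [math.exp(5.0 * (fi - f[i_best])) for fi in f]
        sigma = 1.0 / t                              # shrinking mutation scale localises the search
        pts = [[x + rng.gauss(0.0, sigma) for x in rng.choices(pts, weights=w)[0]]
               for _ in range(pop)]
    return champion
```

The shrinking mutation scale is the finite-dimensional analogue of the localisation of probability measure that drives the convergence proof.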

Gossip over Holonomic Graphs

A gossip process is an iterative process in a multi-agent system where only two neighboring agents communicate at each iteration and update their states. The neighboring condition is by convention described by an undirected graph. In this paper, we consider a general update rule whereby each agent takes an arbitrary weighted average of its own and its neighbor's current states. In general, the limit of the gossip process (if it converges) depends on the order of iterations of the gossiping pairs. The main contribution of the paper is to provide a necessary and sufficient condition for convergence of the gossip process that is independent of the order of iterations. This result relies on the introduction of the novel notion of holonomy of local stochastic matrices for the communication graph. We also provide complete characterizations of the limit and of the space of holonomic stochastic matrices over the graph.
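For intuition, the update rule can be sketched for scalar states. This is a minimal toy with one fixed 2x2 local matrix w shared by all pairs, whereas the paper allows arbitrary pairwise weights:

```python
import random

def gossip_step(x, i, j, w):
    """One gossip update: agents i and j replace their states with weighted
    averages of the pair's current states; w is the 2x2 row-stochastic
    'local' matrix of the update rule."""
    xi, xj = x[i], x[j]
    x[i] = w[0][0] * xi + w[0][1] * xj
    x[j] = w[1][0] * xi + w[1][1] * xj

def run_gossip(x, edges, w, steps=600, seed=0):
    """Apply gossip_step along randomly chosen edges of the graph."""
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        gossip_step(x, i, j, w)
    return x
```

With the symmetric choice w = [[0.5, 0.5], [0.5, 0.5]] every local matrix is doubly stochastic, the sum of the states is conserved, and the limit is the average regardless of the gossiping order; this is one easy instance of order-independence, and the paper's holonomy condition characterises exactly when it holds for general weights.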

Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal dual sliding (PDS) algorithm that computes an ϵ-solution with O((L̃/ϵ)^{1/2}) gradient and O((L̃/ϵ)^{1/2} + ‖A‖/ϵ) communication complexities, where L̃ is the smoothness parameter of the objective and A is related to either the graph Laplacian or the transpose of the oriented incidence matrix of the communication network. Under μ-strong convexity these results improve to O((L̃/μ)^{1/2} log(1/ϵ)) and O((L̃/μ)^{1/2} log(1/ϵ) + ‖A‖/ϵ^{1/2}), respectively. We also propose a stochastic variant, the stochastic primal dual sliding (SPDS) algorithm, for problems with stochastic gradients. The SPDS algorithm utilizes the mini-batch technique and enables the agents to perform sampling and communication simultaneously. It computes a stochastic ϵ-solution with O((L̃/ϵ)^{1/2} + (σ/ϵ)^2) sampling complexity, which improves to O((L̃/μ)^{1/2} log(1/ϵ) + σ^2/ϵ) under strong convexity. Here σ^2 is the variance of the stochastic gradients. The communication complexities of SPDS remain the same as in the deterministic case. All the aforementioned gradient and sampling complexities match the lower complexity bounds for centralized convex smooth optimization and are independent of the network structure. To the best of our knowledge, these gradient and sampling complexities had not been obtained before for decentralized optimization over a constrained feasible set.
