
Publication


Featured research published by Van Sy Mai.


Advances in Computing and Communications | 2016

Distributed optimization over weighted directed graphs using row stochastic matrix

Van Sy Mai; Eyad H. Abed

This paper deals with an optimization problem over a network of agents, in which the cost function is the sum of the individual objectives of the agents. The underlying communication graph is assumed to be directed and the weight matrix to be only row stochastic. A distributed projected subgradient algorithm is presented that allows the agents to solve the problem under the conditions that the network is fixed and the cost functions are convex and Lipschitz continuous.
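As a rough illustration of the setting, here is a toy simulation of a mix-then-step subgradient update over a directed graph whose weight matrix is row stochastic but not column stochastic (a sketch of the problem setup only, not the paper's algorithm; the weight matrix and cost functions are made up):

```python
import numpy as np

# Agent i holds estimate x_i and a private convex cost f_i(x) = (x - c_i)^2;
# the network objective is the sum of the f_i.
c = np.array([1.0, 2.0, 3.0, 4.0])            # private minimizers

# Row-stochastic weights on a strongly connected directed cycle with
# self-loops; note the columns do NOT sum to one.
A = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.0, 0.3, 0.7, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.2, 0.0, 0.0, 0.8]])
assert np.allclose(A.sum(axis=1), 1.0)        # rows sum to 1

x = np.zeros(4)
for k in range(1, 5001):
    grad = 2.0 * (x - c)                      # gradient of each f_i
    x = A @ x - (1.0 / k) * grad              # mix with neighbors, then step

# The agents agree asymptotically; with only row-stochastic weights the
# limit is in general a Perron-weighted optimum rather than the plain
# average -- the imbalance a row-stochastic-only algorithm must handle.
print(x)
```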


Conference on Decision and Control | 2014

Opinion Dynamics with Persistent Leaders

Van Sy Mai; Eyad H. Abed

This paper revisits the problem of agreement seeking in a social network under the influence of leaders. The persistence of a leader's effect on the opinions of other agents is characterized by the total weight that they place on the leader's information over time. If this weight is infinite, then the leader is called persistent. Our results describe the asymptotic behavior of network opinions towards the state of a persistent leader in both cases of networks with fixed and switching topologies. We also show that only persistent leaders are able to drive the network to the leader's constant state.
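The persistent-leader effect can be seen in a minimal DeGroot-style simulation (an illustrative setup, not the paper's model): one stubborn leader that never updates, with each follower placing a constant, and hence persistent, weight on it:

```python
import numpy as np

# 4 followers plus 1 leader (index 4). Each follower places a constant
# weight on the leader, so the cumulative weight on the leader over time
# is infinite -- the "persistent" case.
n = 5
leader_state = 0.7
W = np.zeros((n, n))
for i in range(4):
    W[i, :4] = 0.2          # weight on each follower (incl. self)
    W[i, 4] = 0.2           # constant weight on the leader
W[4, 4] = 1.0               # the leader keeps its own opinion

x = np.array([0.0, 1.0, 0.2, 0.9, leader_state])
for _ in range(200):
    x = W @ x               # DeGroot update

print(x)                    # all opinions approach the leader's state 0.7
```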


Advances in Computing and Communications | 2017

Consensus prediction in minimum time

Van Sy Mai; Eyad H. Abed

This paper studies an observer that seeks to predict in minimal time the asymptotic agreement value of the agents in a network. The network is governed by the DeGroot opinion dynamics model. The observer can monitor the opinions of a group of agents, but might not have accurate knowledge of the underlying communication graph and the associated weight matrix. The work makes use of and builds on previous work on finite time consensus to address this prediction problem. In particular, for the case of a single observed agent, a tight lower bound on the monitoring time is determined below which the observer with limited knowledge about the network is not able to determine the consensus value regardless of the method used. This minimal prediction time can be achieved by employing the minimal polynomial associated with this observed agent. Next, for the general case of an observer with access to multiple agents, a similar bound is conjectured, and we develop algorithms toward achieving this bound through local observations and computations.
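The single-observed-agent idea — recovering the consensus value from a short observed trajectory via the minimal polynomial of its difference sequence — can be sketched as follows (a toy instance with a made-up weight matrix, not the paper's algorithm):

```python
import numpy as np

A = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])      # row-stochastic, primitive
x0 = np.array([1.0, 5.0, -2.0])

# Observe only the first agent's opinion for a few steps.
T = 8
traj = [x0]
for _ in range(T):
    traj.append(A @ traj[-1])
y = np.array([v[0] for v in traj])   # observed sequence y_0 .. y_T
d = np.diff(y)                       # differencing removes eigenvalue 1

# Find the shortest linear recurrence satisfied by d via Hankel matrices.
for M in range(1, T // 2 + 1):
    H = np.array([[d[i + j] for j in range(M + 1)]
                  for i in range(len(d) - M)])
    # Solve H @ [a_0, ..., a_{M-1}, 1]^T = 0 in the least-squares sense.
    a, *_ = np.linalg.lstsq(H[:, :M], -H[:, M], rcond=None)
    if np.allclose(H[:, :M] @ a, -H[:, M], atol=1e-9):
        coeffs = np.append(a, 1.0)   # monic recurrence coefficients
        break

# Consensus value as a weighted combination of the first M+1 observations:
# summing the recurrence against y isolates the eigenvalue-1 component.
phi = (coeffs @ y[: M + 1]) / coeffs.sum()
limit = np.linalg.matrix_power(A, 2000) @ x0   # ground-truth consensus
print(phi, limit[0])
```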


IEEE Transactions on Control of Network Systems | 2017

Local Prediction for Enhanced Convergence of Distributed Optimization Algorithms

Eyad H. Abed; Van Sy Mai

This paper studies distributed optimization problems where a network of agents seeks to minimize the sum of their private cost functions. We propose new algorithms based on the distributed subgradient method and the finite-time consensus protocol introduced by Sundaram and Hadjicostis (2007). In our first algorithm, the local optimization variables are updated cyclically through a subgradient step while the consensus variables follow a usual consensus protocol periodically interrupted by a predictive consensus estimate reset operation. For convex cost functions with bounded subgradients, this algorithm is guaranteed to converge to a neighborhood of the optimal value when using a constant step size, or to the optimal value when using a diminishing step size. For differentiable cost functions whose sum is convex and has a Lipschitz continuous gradient, convergence to the optimal value can be ensured when using a constant step size, even if some of the individual cost functions are nonconvex. In addition, exponential convergence to the optimal solution is achieved when the global cost function is further assumed to be strongly convex. In these cases, the local optimization variables reach consensus in finite time, and then behave as they would under the centralized subgradient method applied to the global problem, except on a slower time scale. The second algorithm is specialized for the case of quadratic cost functions and converges in finite time to the optimal solution. Simulation examples are given to illustrate the algorithms.
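The interplay of subgradient steps with periodic finite-time consensus can be mimicked in a toy simulation (illustrative only: the finite-time protocol is replaced here by an exact averaging reset, and all costs and weights are made up):

```python
import numpy as np

c = np.array([0.0, 1.0, 2.0, 7.0])            # f_i(x) = (x - c_i)^2
W = np.array([[0.50, 0.25, 0.00, 0.25],       # doubly stochastic mixing
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
alpha, period = 0.1, 5
x = np.zeros(4)

for k in range(200):
    x = W @ x                                 # usual consensus iteration
    if (k + 1) % period == 0:
        x[:] = x.mean()                       # stand-in for finite-time consensus
        x = x - alpha * 2.0 * (x - c)         # one subgradient step per cycle

# After each reset the agents share a single state, so the mean evolves
# like a centralized gradient step toward mean(c) = 2.5.
print(x.mean())
```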


IEEE Transactions on Automatic Control | 2018

Linear Convergence in Optimization Over Directed Graphs With Row-Stochastic Matrices

Chenguang Xi; Van Sy Mai; Ran Xin; Eyad H. Abed; Usman A. Khan


SIAM Conference on Control and Its Applications | 2015

Convex Methods for Rank-Constrained Optimization Problems

Van Sy Mai; Dipankar Maity; Bhaskar Ramasubramanian; Michael Rotkowitz


arXiv: Optimization and Control | 2018

Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity

Van Sy Mai; Eyad H. Abed


arXiv: Optimization and Control | 2018

Optimal Cache Allocation for Named Data Caching under Network-Wide Capacity Constraint

Van Sy Mai; Stratis Ioannidis; Davide Pesavento; Lotfi Benmohamed


IEEE Transactions on Automatic Control | 2018

Optimizing Leader Influence in Networks through Selection of Direct Followers

Van Sy Mai; Eyad H. Abed


arXiv: Optimization and Control | 2016

Linear convergence in directed optimization with row-stochastic matrices

Chenguang Xi; Van Sy Mai; Eyad H. Abed; Usman A. Khan

Collaboration


Dive into Van Sy Mai's collaborations.

Top Co-Authors

Lotfi Benmohamed

National Institute of Standards and Technology