Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Rasul Tutunov is active.

Publication


Featured research published by Rasul Tutunov.


Conference on Decision and Control | 2014

On convergence rate of Accelerated Dual Descent Algorithm

Rasul Tutunov; Michael Zargham; Ali Jadbabaie

We expand and refine the convergence rate analysis of the Accelerated Dual Descent (ADD) algorithm, a fast distributed solution to the convex network flow optimization problem. ADD uses local information to compute an approximate Newton direction for the dual problem. It has been previously shown that the quality of the approximation depends on a network invariant related to the expansion properties of the underlying graph. The main result of this work is to characterize this information spreading parameter in terms of the network structure and properties of the primal objective. The three-phase convergence theorem for ADD is revisited, and bounds that explicitly depend on the network structure are presented. Additionally, an upper bound on the number of iterations required to reach the terminal phase is proven. We explore which types of graphs work best with ADD by examining our characterization of the information spreading coefficient in the context of commonly studied network structures.
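The problem class ADD accelerates can be illustrated with its plain first-order baseline. A minimal sketch (not ADD itself, with made-up graph data): dual gradient ascent for a quadratic-cost network flow problem on a 3-node path graph, where each dual update only needs the local constraint residual.

```python
# Dual gradient ascent for a tiny convex network flow problem:
#   minimize sum_e x_e^2 / 2   subject to  A x = b,
# where A is the node-edge incidence matrix of a 3-node path graph.
# ADD replaces this first-order dual update with an approximate
# Newton step; this sketch only illustrates the problem setup.

# incidence matrix: edges (0 -> 1), (1 -> 2)
A = [[1, 0],
     [-1, 1],
     [0, -1]]
b = [1, 0, -1]          # inject one unit at node 0, extract at node 2

def matvec(M, v):
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

lam = [0.0, 0.0, 0.0]   # dual variables, one per node
for _ in range(200):
    # primal minimizer of the Lagrangian: x = -A^T lam
    x = [-sum(A[i][e] * lam[i] for i in range(3)) for e in range(2)]
    # dual gradient is the constraint residual A x - b
    residual = [r - t for r, t in zip(matvec(A, x), b)]
    lam = [l + 0.5 * r for l, r in zip(lam, residual)]

print(x)  # approaches the unique feasible (hence optimal) flow [1.0, 1.0]
```

Note that the dual Hessian here is the graph Laplacian A·Aᵀ, which is why the spectral and expansion properties of the graph govern how well a locally computed Newton direction can do.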


Conference on Decision and Control | 2016

An exact distributed Newton method for reinforcement learning

Rasul Tutunov; Haitham Bou Ammar; Ali Jadbabaie

In this paper, we propose a distributed second-order method for reinforcement learning. Our approach is the fastest in the literature so far, as it outperforms state-of-the-art methods, including ADMM, by significant margins. We achieve this by exploiting the sparsity pattern of the dual Hessian and transforming the problem of computing the Newton direction into one of solving a sequence of symmetric diagonally dominant systems of equations. We validate the above claim both theoretically and empirically. On the theoretical side, we prove that, like exact Newton, our algorithm exhibits super-linear convergence within a neighborhood of the optimal solution. Empirically, we demonstrate the superiority of this new method on a set of benchmark reinforcement learning tasks.
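The super-linear convergence claimed here is the hallmark of exact Newton steps near an optimum. A minimal 1-D illustration (a textbook property, not the paper's distributed algorithm): minimizing f(x) = x - ln(x), whose minimizer is x* = 1, the Newton iteration squares the error at every step.

```python
# Exact Newton exhibits quadratic (hence super-linear) convergence
# near the optimum. Illustration on f(x) = x - ln(x) with x* = 1.

def newton_step(x):
    grad = 1 - 1 / x          # f'(x)
    hess = 1 / x ** 2         # f''(x)
    return x - grad / hess

x = 0.5
errors = []
for _ in range(5):
    x = newton_step(x)
    errors.append(abs(1 - x))

print(errors)  # each error is the square of the previous one:
               # 0.25, 0.0625, ~3.9e-3, ~1.5e-5, ~2.3e-10
```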


American Control Conference | 2013

Identifiability of links and nodes in multi-agent systems under the agreement protocol

Mohammad Amin Rahimian; Amir Ajorlou; Rasul Tutunov; Amir G. Aghdam

In this paper, the question of identifying various links and nodes in a network based on the observed agent dynamics is addressed. The focus is on a multi-agent network that evolves under the linear agreement protocol. The results help determine whether various components of the network are distinguishable from each other, based on the choice of initial conditions and the observed output responses. Identifiability of links and nodes is studied separately, and in each case the role of symmetries in the network information flow graph is analyzed. Examples are provided to elucidate the results.
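The linear agreement protocol the paper builds on drives each agent's state toward the network average via x(k+1) = x(k) - ε·L·x(k), where L is the graph Laplacian. A minimal sketch on a made-up 3-node path graph (it simulates the protocol itself, not the identifiability analysis, which asks what such observed trajectories reveal about the links and nodes):

```python
# Discrete-time linear agreement (consensus) protocol on a path graph.
L = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]            # Laplacian of the path 0 - 1 - 2
eps = 0.3                   # step size below 2 / lambda_max(L) = 2/3
x = [0.0, 3.0, 6.0]         # initial states; their average is 3

for _ in range(100):
    Lx = [sum(L[i][j] * x[j] for j in range(3)) for i in range(3)]
    x = [xi - eps * li for xi, li in zip(x, Lx)]

print(x)  # every state approaches the initial average, 3.0
```

Symmetries matter for identifiability because, for instance, swapping the two end nodes of this path leaves the dynamics unchanged, so suitably symmetric initial conditions cannot distinguish them.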


Conference on Decision and Control | 2015

Fast, accurate second order methods for network optimization

Rasul Tutunov; Haitham Bou Ammar; Ali Jadbabaie

Dual descent methods are commonly used to solve network flow optimization problems, since their implementation can be distributed over the network. These algorithms, however, often exhibit slow convergence rates. Approximate Newton methods, which compute descent directions locally, have been proposed as alternatives to accelerate the convergence of conventional dual descent. The effectiveness of these methods is limited by the accuracy of such approximations. In this paper, we propose an efficient and accurate distributed second-order method for network flow problems. Our approach utilizes the sparsity pattern of the dual Hessian to approximate the Newton direction using a novel distributed solver for symmetric diagonally dominant linear equations. We analyze the properties of the proposed algorithm, show superlinear convergence within a neighborhood of the optimal value, and finally demonstrate the effectiveness of the approach in a set of experiments.
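The core computational step here is solving a symmetric diagonally dominant (SDD) system M·d = g for the Newton direction. The paper's solver is distributed; as a centralized stand-in, plain conjugate gradient applies, since a symmetric diagonally dominant matrix with positive diagonal is positive semidefinite. The matrix and right-hand side below are made-up illustrative data.

```python
# Conjugate gradient on a small SDD system M x = g (centralized
# stand-in for the distributed SDD solver described in the paper).
M = [[4.0, -1.0, -1.0],
     [-1.0, 3.0, -1.0],
     [-1.0, -1.0, 5.0]]      # symmetric, strictly diagonally dominant
g = [1.0, 2.0, 3.0]

def matvec(v):
    return [sum(m * vj for m, vj in zip(row, v)) for row in M]

x = [0.0] * 3
r, p = g[:], g[:]            # residual and search direction (x = 0)
rs = sum(v * v for v in r)
for _ in range(10):          # exact in at most 3 steps for a 3x3 SPD M
    Ap = matvec(p)
    alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
    rs_new = sum(v * v for v in r)
    if rs_new < 1e-20:
        break
    p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
    rs = rs_new

print(x)  # solution of M x = g; the residual M x - g is ~ 0
```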


International Conference on Machine Learning | 2015

Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret

Haitham Bou Ammar; Rasul Tutunov; Eric Eaton


arXiv: Distributed, Parallel, and Cluster Computing | 2015

A Fast Distributed Solver for Symmetric Diagonally Dominant Linear Equations

Rasul Tutunov; Haitham Bou-Ammar; Ali Jadbabaie


arXiv: Artificial Intelligence | 2017

Regularised Deep Reinforcement Learning with Guaranteed Convergence

Felix Leibfried; Rasul Tutunov; Jordi Grau-Moya; Haitham Bou-Ammar


arXiv: Optimization and Control | 2015

Distributed SDDM Solvers: Theory & Applications

Rasul Tutunov; Haitham Bou-Ammar; Ali Jadbabaie


Conference on Decision and Control | 2017

Distributed lifelong reinforcement learning with sub-linear regret

Rasul Tutunov; Julia El-Zini; Haitham Bou-Ammar; Ali Jadbabaie


arXiv: Distributed, Parallel, and Cluster Computing | 2016

A Distributed Newton Method for Large Scale Consensus Optimization

Rasul Tutunov; Haitham Bou-Ammar; Ali Jadbabaie

Collaboration


Dive into Rasul Tutunov's collaborations.

Top Co-Authors

Ali Jadbabaie

Massachusetts Institute of Technology

Haitham Bou Ammar

University of Pennsylvania

Eric Eaton

University of Pennsylvania

Michael Zargham

University of Pennsylvania

Julia El-Zini

American University of Beirut