
Publication


Featured research published by Angelia Nedic.


IEEE Transactions on Automatic Control | 2009

Distributed Subgradient Methods for Multi-Agent Optimization

Angelia Nedic; Asuman E. Ozdaglar

We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
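As a concrete illustration of the update the abstract describes, here is a minimal sketch in which each agent mixes its neighbors' iterates through a doubly stochastic weight matrix and then takes a step along its local (sub)gradient. The quadratic objectives f_i(x) = (x - a_i)^2, the fixed 4-node ring, and the stepsize rule are illustrative assumptions; the paper handles nonsmooth objectives and time-varying topologies.

```python
import numpy as np

def distributed_subgradient(a, W, steps=2000):
    """Each agent i mixes neighbors' iterates with weights W, then takes a
    (sub)gradient step on its private objective f_i(x) = (x - a_i)**2.
    The minimizer of sum_i f_i is the mean of a."""
    x = np.zeros(len(a))
    for k in range(1, steps + 1):
        alpha = 1.0 / k                     # diminishing stepsize
        x = W @ x - alpha * 2.0 * (x - a)   # consensus mixing + local gradient step
    return x

# Hypothetical example: 4 agents on a ring, doubly stochastic weights.
a = np.array([1.0, 2.0, 3.0, 6.0])          # mean (global optimum) is 3.0
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = distributed_subgradient(a, W)
```

All agents end up near the mean 3.0, in consensus, despite each knowing only its own a_i.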


SIAM Journal on Optimization | 2001

Incremental Subgradient Methods for Nondifferentiable Optimization

Angelia Nedic; Dimitri P. Bertsekas

We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.
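A minimal sketch of the cyclic incremental step, assuming nondifferentiable components f_i(x) = |x - a_i| (whose sum is minimized at the median of the a_i); the component values and the per-cycle diminishing stepsize are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def cyclic_incremental_subgradient(a, cycles=5000):
    """Minimize sum_i |x - a_i| by stepping along one component subgradient
    at a time, adjusting x after each component; the minimizer is the
    median of a."""
    x = 0.0
    for k in range(1, cycles + 1):
        alpha = 1.0 / k                     # diminishing stepsize, fixed within a cycle
        for ai in a:                        # sequential pass over component functions
            x -= alpha * np.sign(x - ai)    # subgradient of |x - a_i|
    return x

a = [1.0, 2.0, 3.0, 7.0, 10.0]              # median (and minimizer) is 3.0
x = cyclic_incremental_subgradient(a)
```

Shuffling the order of the components at the start of each cycle gives the randomized variant whose improved convergence rate the abstract highlights.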


Conference on Decision and Control | 2008

On distributed averaging algorithms and quantization effects

Angelia Nedic; Alex Olshevsky; Asuman E. Ozdaglar; John N. Tsitsiklis

We consider distributed iterative algorithms for the averaging problem over time-varying topologies. Our focus is on the convergence time of such algorithms when complete (unquantized) information is available, and on the degradation of performance when only quantized information is available. We study a large and natural class of averaging algorithms, which includes the vast majority of algorithms proposed to date, and provide tight polynomial bounds on their convergence time. We also describe an algorithm within this class whose convergence time is the best among currently available averaging algorithms for time-varying topologies. We then propose and analyze distributed averaging algorithms under the additional constraint that agents can only store and communicate quantized information, so that they can only converge to the average of the initial values of the agents within some error. We establish bounds on the error and tight bounds on the convergence time, as a function of the number of quantization levels.
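The contrast between unquantized and quantized averaging can be sketched as follows; the 4-node ring weight matrix and the integer quantization levels are hypothetical choices. Rounding keeps every iterate inside the convex hull of the (integer) initial values, so the quantized iteration can only reach the average up to a quantization-level error, as the abstract notes.

```python
import numpy as np

def average(x0, W, steps):
    """Unquantized linear averaging: x <- W x drives all entries to the mean."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x

def quantized_average(x0, W, steps):
    """Same iteration, but agents can only store quantized values: each
    update is rounded to the nearest level (here, the integers)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.round(W @ x)
    return x

W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x0 = [0.0, 4.0, 8.0, 12.0]                  # true average is 6.0
exact = average(x0, W, 200)
coarse = quantized_average(x0, W, 200)
```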


SIAM Journal on Optimization | 2008

Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods

Angelia Nedic; Asuman E. Ozdaglar

In this paper, we study methods for generating approximate primal solutions as a byproduct of subgradient methods applied to the Lagrangian dual of a primal convex (possibly nondifferentiable) constrained optimization problem. Our work is motivated by constrained primal problems with a favorable dual problem structure that leads to efficient implementation of dual subgradient methods, such as the recent resource allocation problems in large-scale networks. For such problems, we propose and analyze dual subgradient methods that use averaging schemes to generate approximate primal optimal solutions. These algorithms use a constant stepsize in view of its simplicity and practical significance. We provide estimates on the primal infeasibility and primal suboptimality of the generated approximate primal solutions. These estimates are given per iteration, thus providing a basis for analyzing the trade-offs between the desired level of error and the selection of the stepsize value. Our analysis relies on the Slater condition and the inherited boundedness properties of the dual problem under this condition. It also relies on the boundedness of subgradients, which is ensured by assuming the compactness of the constraint set.
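A worked toy instance of the scheme: a dual subgradient iteration with a constant stepsize, with the primal inner solutions averaged across iterations. The specific problem (minimize x^2 over the compact set [-2, 2] subject to 1 - x <= 0, with optimum x* = 1 and multiplier mu* = 2) is a hypothetical example, not one from the paper.

```python
import numpy as np

def dual_subgradient_with_averaging(alpha=0.01, iters=20000):
    """Dual subgradient method with constant stepsize alpha; the running
    average of the inner primal solutions approximates the primal optimum."""
    mu = 0.0
    primal_sum = 0.0
    for _ in range(iters):
        # Inner minimization of the Lagrangian x**2 + mu*(1 - x) over [-2, 2]:
        x = np.clip(mu / 2.0, -2.0, 2.0)
        # g(x) = 1 - x is a subgradient of the dual function at mu;
        # project the multiplier back onto mu >= 0.
        mu = max(0.0, mu + alpha * (1.0 - x))
        primal_sum += x
    return primal_sum / iters, mu           # averaged primal iterate, final multiplier

x_bar, mu = dual_subgradient_with_averaging()
```

The averaged iterate x_bar is slightly infeasible (just below 1), and the gap shrinks as 1/iters for a fixed alpha, mirroring the per-iteration infeasibility estimates the abstract describes.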


Conference on Decision and Control | 2013

Distributed optimization over time-varying directed graphs

Angelia Nedic; Alex Olshevsky

We consider distributed optimization by a collection of nodes, each having access to its own convex function, whose collective goal is to minimize the sum of the functions. The communications between nodes are described by a time-varying sequence of directed graphs, which is uniformly strongly connected. For such communications, assuming that every node knows its out-degree, we develop a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The subgradient-push requires no knowledge of either the number of agents or the graph sequence to implement. Our analysis shows that the subgradient-push algorithm converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
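The consensus core of the subgradient-push is the push-sum protocol, which needs only column-stochastic (not doubly stochastic) mixing and local knowledge of out-degrees. A sketch of that core alone, on a hypothetical 3-node directed ring, omitting the subgradient steps:

```python
import numpy as np

def push_sum(x0, steps=100):
    """Push-sum consensus on the directed ring 0 -> 1 -> 2 -> 0.  Each node
    keeps half of its (x, y) mass and pushes half to its single out-neighbor
    (so it needs to know only its own out-degree); the ratios x / y converge
    to the average of x0 even though mixing is only column-stochastic."""
    x = np.array(x0, dtype=float)
    y = np.ones_like(x)                     # normalization weights, all start at 1
    for _ in range(steps):
        x = 0.5 * x + np.roll(0.5 * x, 1)   # node i receives from node i-1
        y = 0.5 * y + np.roll(0.5 * y, 1)
    return x / y

z = push_sum([2.0, 5.0, 11.0])              # average is 6.0
```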


IEEE Journal of Selected Topics in Signal Processing | 2011

Distributed Asynchronous Constrained Stochastic Optimization

Kunal Srivastava; Angelia Nedic

In this paper, we study two problems that often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization problem, where the objective function is the aggregate sum of local convex objective functions. We incorporate the presence of a random communication graph between the agents in our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows for the objective functions to be nondifferentiable and accommodates the presence of noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and convergence to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.


SIAM Journal on Optimization | 2009

Incremental Stochastic Subgradient Algorithms for Convex Optimization

S. Sundhar Ram; Angelia Nedic; Venugopal V. Veeravalli

This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this scheme, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
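A minimal sketch of the Markov randomized variant, with the iterate passed along a ring by a simple (time-homogeneous, for brevity) random walk and Gaussian noise injected into each gradient evaluation; the quadratic components and noise level are illustrative assumptions.

```python
import numpy as np

def markov_incremental_subgradient(a, steps=50000, sigma=0.1, seed=0):
    """Minimize sum_i (x - a_i)**2 / 2 where agent i holds a_i.  The iterate
    is handed between agents by a random walk on a ring (a Markov chain over
    computing agents), and each subgradient evaluation is corrupted by
    zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    n = len(a)
    x, agent = 0.0, 0
    for k in range(1, steps + 1):
        noisy_grad = (x - a[agent]) + sigma * rng.standard_normal()
        x -= (1.0 / k) * noisy_grad                 # diminishing step-size
        agent = (agent + rng.choice([-1, 1])) % n   # pass iterate to a ring neighbor
    return x

a = np.array([1.0, 2.0, 4.0, 5.0])                  # optimum of the sum is the mean, 3.0
x = markov_incremental_subgradient(a)
```

The uniform stationary distribution of the random walk is what makes the iterate settle at the minimizer of the unweighted sum, despite the noise.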


Archive | 2001

Convergence Rate of Incremental Subgradient Algorithms

Angelia Nedic; Dimitri P. Bertsekas

We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method.


Discrete Event Dynamic Systems | 2003

Least Squares Policy Evaluation Algorithms with Linear Function Approximation

Angelia Nedic; Dimitri P. Bertsekas

We consider policy evaluation algorithms within the context of infinite-horizon dynamic programming problems with discounted cost. We focus on discrete-time dynamic systems with a large number of states, and we discuss two methods, which use simulation, temporal differences, and linear cost function approximation. The first method is a new gradient-like algorithm involving least-squares subproblems and a diminishing stepsize, which is based on the λ-policy iteration method of Bertsekas and Ioffe. The second method is the LSTD(λ) algorithm recently proposed by Boyan, which for λ=0 coincides with the linear least-squares temporal-difference algorithm of Bradtke and Barto. At present, there is only a convergence result by Bradtke and Barto for the LSTD(0) algorithm. Here, we strengthen this result by showing the convergence of LSTD(λ), with probability 1, for every λ ∈ [0, 1].
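A sketch of the λ = 0 case (the Bradtke–Barto linear least-squares temporal-difference algorithm the abstract mentions) on a hypothetical two-state chain with indicator features, where the TD fixed point coincides with the true value function:

```python
import numpy as np

# Hypothetical 2-state chain: P = [[0.5, 0.5], [0.5, 0.5]], r(s0) = 1,
# r(s1) = 0, gamma = 0.9.  Solving V = r + gamma * P V gives V = (5.5, 4.5).

def lstd0(n_samples=200000, gamma=0.9, seed=0):
    """LSTD(0): accumulate A = sum phi(s)(phi(s) - gamma*phi(s'))^T and
    b = sum phi(s) r along a simulated trajectory, then solve A w = b."""
    rng = np.random.default_rng(seed)
    phi = np.eye(2)                        # one indicator feature per state
    A = np.zeros((2, 2))
    b = np.zeros(2)
    s = 0
    for _ in range(n_samples):
        s_next = rng.integers(2)           # both transitions have probability 1/2
        r = 1.0 if s == 0 else 0.0
        A += np.outer(phi[s], phi[s] - gamma * phi[s_next])
        b += phi[s] * r
        s = s_next
    return np.linalg.solve(A, b)           # weights = estimated value function

V = lstd0()
```

With indicator (tabular) features the linear architecture is exact, so the LSTD weights converge to the true values (5.5, 4.5) as the trajectory grows.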


IEEE Transactions on Automatic Control | 2011

Asynchronous Broadcast-Based Convex Optimization Over a Network

Angelia Nedic

We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.
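A rough sketch of the broadcast pattern the abstract describes: a uniformly random agent broadcasts its iterate, its neighbors mix it in and take a local gradient step, and each agent maintains its own stepsize from its own update counter. The ring topology, quadratic objectives, and noiseless gradients are simplifying assumptions.

```python
import numpy as np

def broadcast_optimization(a, neighbors, steps=60000, seed=0):
    """At each tick a uniformly random agent broadcasts its iterate; each
    neighbor that hears it averages with the broadcast value and then takes
    a gradient step on its private objective f_j(x) = (x - a_j)**2 / 2,
    using a diminishing stepsize driven by its own update counter."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(a))
    counts = np.ones(len(a))                        # per-agent update counters
    for _ in range(steps):
        i = rng.integers(len(a))                    # broadcasting agent
        for j in neighbors[i]:
            x[j] = 0.5 * (x[j] + x[i])              # mix with broadcast value
            x[j] -= (1.0 / counts[j]) * (x[j] - a[j])   # local gradient step
            counts[j] += 1
    return x

a = np.array([1.0, 2.0, 4.0, 5.0])                  # optimum of the sum is the mean, 3.0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
x = broadcast_optimization(a, ring)
```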

Collaboration


Dive into Angelia Nedic's collaborations.

Top Co-Authors

Uday V. Shanbhag | Pennsylvania State University
Asuman E. Ozdaglar | Massachusetts Institute of Technology
Behrouz Touri | University of Colorado Boulder
Ji Liu | Stony Brook University
Dimitri P. Bertsekas | Massachusetts Institute of Technology
Anna Scaglione | Arizona State University
Alexander Gasnikov | Moscow Institute of Physics and Technology
Kobi Cohen | Ben-Gurion University of the Negev
Hoi-To Wai | Arizona State University