Publication


Featured research published by Alec Koppel.


IEEE Transactions on Signal Processing | 2015

A Saddle Point Algorithm for Networked Online Convex Optimization

Alec Koppel; Felicia Y. Jakubiec; Alejandro Ribeiro

An algorithm to learn optimal actions in convex distributed online problems is developed. Learning is online because cost functions are revealed sequentially and distributed because they are revealed to agents of a network that can exchange information with neighboring nodes only. Learning is measured in terms of the global network regret, which is defined here as the accumulated loss of causal prediction with respect to a centralized clairvoyant agent to which the information of all times and agents is revealed at the initial time. A variant of the Arrow–Hurwicz saddle point algorithm is proposed to control the growth of global network regret. This algorithm uses Lagrange multipliers to penalize the discrepancies between agents and leads to an implementation that relies on local operations and exchange of variables between neighbors. We show that decisions made with this saddle point algorithm lead to regret whose order is not larger than O(√T), where T is the total operating time. Numerical behavior is illustrated for the particular case of distributed recursive least squares. An application to computer network security in which service providers cooperate to detect the signature of malicious users is developed to illustrate the practical value of the proposed algorithm.
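
The primal-dual update described here can be sketched compactly. Below is a minimal illustration (not the authors' code) applied to the distributed recursive least squares example the abstract mentions; the ring network, data stream, step size, and horizon are assumptions made only for the sketch.

```python
# Sketch of a networked online saddle point step: each agent descends on its
# instantaneous cost plus Lagrangian terms that penalize disagreement with its
# neighbors, and each edge multiplier ascends on the disagreement itself.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T, eps = 4, 3, 2000, 0.05
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # assumed ring network
x_true = rng.normal(size=dim)                      # common signal to estimate

x = np.zeros((n_agents, dim))                      # primal variable per agent
lam = {e: np.zeros(dim) for e in edges}            # multiplier per edge

for t in range(T):
    # Each agent sees one noisy linear measurement of the common signal.
    A = rng.normal(size=(n_agents, dim))
    b = A @ x_true + 0.1 * rng.normal(size=n_agents)

    resid = np.einsum('id,id->i', A, x) - b        # a_i . x_i - b_i per agent
    grad = A * resid[:, None]                      # grad of 0.5*(a.x - b)^2
    for (i, j), l in lam.items():                  # Lagrangian disagreement terms
        grad[i] += l
        grad[j] -= l

    x_new = x - eps * grad                         # primal (descent) step
    for (i, j) in edges:                           # dual (ascent) step on x_i - x_j
        lam[(i, j)] += eps * (x[i] - x[j])
    x = x_new

print("disagreement between agents:", np.max(np.abs(x - x.mean(axis=0))))
print("estimation error:", np.linalg.norm(x.mean(axis=0) - x_true))
```

Each agent only touches its own data and the multipliers on its incident edges, which is the local-operations property the abstract highlights.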


IEEE Transactions on Signal Processing | 2016

A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

Andrea Simonetto; Aryan Mokhtari; Alec Koppel; Geert Leus; Alejandro Ribeiro

This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of 1/h, where h is the sampling period. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as O(h²), and in some cases as O(h⁴), which outperforms the state-of-the-art error bound of O(h) for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
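
A minimal sketch of the prediction-correction loop under simplifying assumptions: a two-dimensional quadratic tracking cost with identity Hessian, a single gradient correction per sample, and the time derivative of the gradient replaced by a backward finite difference, in the spirit of the approximate variants mentioned above.

```python
# Sketch: track the minimizer of f(x; t) = 0.5*||x - r(t)||^2 as r(t) drifts.
# Prediction shifts the iterate along the estimated drift of the gradient;
# correction takes one gradient step once the new cost is sampled.
import numpy as np

h = 0.1                                  # sampling period (assumed)
steps = int(50 / h)
r = lambda t: np.array([np.sin(t), np.cos(0.5 * t)])   # moving target trajectory
grad = lambda x, t: x - r(t)             # gradient of f(x; t); Hessian is I here

x_pc = r(0.0).copy()                     # prediction-correction iterate
x_co = r(0.0).copy()                     # correction-only baseline
err_pc, err_co = [], []
for k in range(1, steps):
    t_prev, t_now, t_next = (k - 1) * h, k * h, (k + 1) * h
    # Prediction (before the cost at t_next is revealed): backward finite
    # difference of the gradient approximates its time derivative.
    x_pc = x_pc - (grad(x_pc, t_now) - grad(x_pc, t_prev))
    # Correction (after sampling at t_next): one gradient step on the new cost.
    x_pc = x_pc - 0.8 * grad(x_pc, t_next)
    x_co = x_co - 0.8 * grad(x_co, t_next)
    err_pc.append(np.linalg.norm(x_pc - r(t_next)))
    err_co.append(np.linalg.norm(x_co - r(t_next)))

print(f"mean tracking error, prediction-correction: {np.mean(err_pc):.4f}")
print(f"mean tracking error, correction-only:       {np.mean(err_co):.4f}")
```

Running it shows the prediction step shrinking the tracking error relative to the correction-only baseline, which is the qualitative gap between the O(h²) and O(h) bounds quoted in the abstract.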


International Conference on Acoustics, Speech, and Signal Processing | 2014

A saddle point algorithm for networked online convex optimization

Alec Koppel; Felicia Y. Jakubiec; Alejandro Ribeiro



Intelligent Robots and Systems | 2015

D4L: Decentralized dynamic discriminative dictionary learning

Alec Koppel; Garrett Warnell; Ethan Stump; Alejandro Ribeiro

We consider discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn, from sequential observations, statistical model parameters jointly with data-driven signal representations. We formulate this problem as a distributed stochastic program with a nonconvex objective that quantifies the merit of the choice of model parameters and dictionary. We consider the use of a block variant of the Arrow–Hurwicz saddle point algorithm to solve this problem, which exploits factorization properties of the Lagrangian to yield a protocol that only requires exchange of model information among neighboring nodes. We show that decisions made with this saddle point algorithm asymptotically achieve a first-order stationarity condition on average. The learning rate depends on the signal source, network structure, and discriminative task. We illustrate the algorithm's performance on a large-scale image classification task over a network of interconnected servers and observe that practical performance is comparable to a centralized approach. We further apply this method to the problem of a robotic team seeking to autonomously navigate in an unknown environment by predicting unexpected maneuvers, demonstrating the proposed algorithm's utility in a field setting.
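
A heavily simplified, centralized sketch of the underlying idea of learning a dictionary jointly with a task model from streaming data. Ridge-regularized codes stand in for sparse codes, and plain alternating stochastic gradient steps stand in for the decentralized block saddle point protocol, so this is illustrative only; all dimensions, step sizes, and the synthetic stream are assumptions.

```python
# Sketch: for each streaming sample, encode it against the current dictionary,
# then take one stochastic gradient step on the dictionary (reconstruction)
# and one on the classifier (logistic loss on the code).
import numpy as np

rng = np.random.default_rng(1)
m, k, T = 20, 10, 3000          # signal dim, number of atoms, stream length
lam, eta = 0.1, 0.02            # code regularizer, step size

D = rng.normal(size=(m, k)) / np.sqrt(m)   # dictionary
w = np.zeros(k)                            # linear classifier on the codes
w_true = rng.normal(size=m)                # hidden synthetic labeling rule

for t in range(T):
    y = rng.normal(size=m)                               # streaming observation
    label = 1.0 if w_true @ y > 0 else -1.0              # its binary label
    # Encode: alpha = argmin ||y - D a||^2 + lam * ||a||^2 (closed form).
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)
    # Dictionary step: gradient of 0.5*||y - D alpha||^2 with alpha held fixed.
    resid = D @ alpha - y
    D -= eta * np.outer(resid, alpha)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)  # bound atoms
    # Discriminative step: gradient of the logistic loss on the code.
    margin = label * (w @ alpha)
    w -= eta * (-label * alpha) / (1.0 + np.exp(margin))

print("reconstruction error on last sample:", np.linalg.norm(resid))
```

The reconstruction step keeps the codes faithful to the data while the discriminative step nudges them toward being predictive of the label, which is the joint objective the abstract describes.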


International Conference on Acoustics, Speech, and Signal Processing | 2016

Proximity without consensus in online multi-agent optimization

Alec Koppel; Brian M. Sadler; Alejandro Ribeiro

We consider stochastic optimization problems in multi-agent settings, where a network of agents aims to learn decision variables which are optimal in terms of a global objective, while giving preference to locally and sequentially observed information. To do so, we formulate a problem where each agent minimizes a global objective while enforcing network proximity constraints, which includes consensus optimization as a special case. We propose a stochastic variant of the saddle point algorithm proposed by Arrow and Hurwicz to solve it, which yields a decentralized algorithm that is shown to asymptotically converge to a primal-dual optimal pair of the problem in expectation when a diminishing algorithm step-size is chosen. Moreover, the algorithm converges linearly to a neighborhood when a constant step-size is chosen. We apply this method to the problem of sequentially estimating a correlated random field in a sensor network, which corroborates these performance guarantees.
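
A minimal sketch (not the paper's code) of a stochastic saddle point iteration in which proximity constraints of the form 0.5*||x_i - x_j||^2 <= gamma replace exact consensus, in the spirit of the correlated random field example mentioned above; the path network, field model, noise level, gamma, and step size are illustrative assumptions.

```python
# Sketch: agents estimate heterogeneous but correlated field values; edge
# multipliers activate only when neighbors drift farther apart than gamma allows.
import numpy as np

rng = np.random.default_rng(2)
n, dim, T, eps, gamma = 5, 2, 5000, 0.02, 0.05
edges = [(i, i + 1) for i in range(n - 1)]                       # assumed path network
theta = np.cumsum(rng.normal(scale=0.2, size=(n, dim)), axis=0)  # smooth "field"

x = np.zeros((n, dim))
mu = np.zeros(len(edges))                          # nonnegative edge multipliers

for t in range(T):
    y = theta + 0.5 * rng.normal(size=(n, dim))    # noisy local observations
    grad = x - y                                   # gradient of 0.5*||x_i - y_i||^2
    for e, (i, j) in enumerate(edges):             # proximity (inequality) terms
        grad[i] += mu[e] * (x[i] - x[j])
        grad[j] += mu[e] * (x[j] - x[i])
    x_new = x - eps * grad                         # primal descent
    for e, (i, j) in enumerate(edges):             # projected dual ascent
        slack = 0.5 * np.sum((x[i] - x[j]) ** 2) - gamma
        mu[e] = max(0.0, mu[e] + eps * slack)
    x = x_new

print("mean estimation error:", np.linalg.norm(x - theta, axis=1).mean())
print("max neighbor distance:", max(np.linalg.norm(x[i] - x[j]) for i, j in edges))
```

Setting gamma to zero recovers the consensus special case noted in the abstract.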


IEEE Transactions on Automatic Control | 2017

Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization

Andrea Simonetto; Alec Koppel; Aryan Mokhtari; Geert Leus; Alejandro Ribeiro



IEEE Global Conference on Signal and Information Processing | 2015

Target tracking with dynamic convex optimization

Alec Koppel; Andrea Simonetto; Aryan Mokhtari; Geert Leus; Alejandro Ribeiro



Asilomar Conference on Signals, Systems and Computers | 2015

Prediction-correction methods for time-varying convex optimization

Andrea Simonetto; Alec Koppel; Aryan Mokhtari; Geert Leus; Alejandro Ribeiro



Intelligent Robots and Systems | 2016

Online learning for characterizing unknown environments in ground robotic vehicle models

Alec Koppel; Jonathan Fink; Garrett Warnell; Ethan Stump; Alejandro Ribeiro



IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing | 2015

A decentralized prediction-correction method for networked time-varying convex optimization

Andrea Simonetto; Aryan Mokhtari; Alec Koppel; Geert Leus; Alejandro Ribeiro

We study networked unconstrained convex optimization problems where the objective function changes continuously in time. We propose a decentralized algorithm (DePCoT) with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and gradient-based correction steps, while sampling the problem data at a constant sampling period h. Under suitable conditions and for limited sampling periods, we establish that the asymptotic error bound behaves as O(h²), which outperforms the state-of-the-art error bound of O(h) for correction-only methods. The key contributions are the prediction step and a method to approximate the inverse of the Hessian of the cost function in a decentralized way, which yields quantifiable trade-offs between communication and accuracy.
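
One standard way (not necessarily the construction used in this paper) to apply an approximate Hessian inverse with only neighbor-to-neighbor exchanges is a truncated Neumann series: when the Hessian inherits the sparsity pattern of the network, every additional series term costs one more round of local communication, which is the communication-accuracy trade-off the abstract points to. A minimal sketch:

```python
# Sketch: approximate H^{-1} g with alpha * sum_{k=0..K} (I - alpha*H)^k g,
# valid when ||I - alpha*H|| < 1. Each multiplication by a network-sparse H
# only combines values held by neighboring nodes.
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = np.zeros((n, n))                                 # assumed ring network
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
H = np.eye(n) + L                                    # positive definite, graph-sparse
g = rng.normal(size=n)                               # a gradient to precondition
alpha = 1.0 / np.linalg.norm(H, 2)                   # ensures convergence of the series

def approx_newton_direction(H, g, alpha, K):
    """alpha * sum_{k=0..K} (I - alpha*H)^k g, built with K neighbor-only products."""
    d = alpha * g.copy()
    term = alpha * g.copy()
    for _ in range(K):
        term = term - alpha * (H @ term)             # one sparse, local product
        d += term
    return d

exact = np.linalg.solve(H, g)
for K in (0, 2, 5, 20):
    err = np.linalg.norm(approx_newton_direction(H, g, alpha, K) - exact)
    print(f"K = {K:2d} communication rounds, error vs exact Newton direction: {err:.4f}")
```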

Collaboration


Dive into Alec Koppel's collaborations.

Top Co-Authors

Alejandro Ribeiro, University of Pennsylvania
Aryan Mokhtari, University of Pennsylvania
Geert Leus, Delft University of Technology
Cédric Richard, University of Nice Sophia Antipolis
Amrit Singh Bedi, Indian Institute of Technology Kanpur
Ketan Rajawat, Indian Institute of Technology Kanpur