Publication


Featured research published by Roula Nassif.


IEEE Transactions on Signal Processing | 2016

Multitask Diffusion Adaptation Over Asynchronous Networks

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

The multitask diffusion LMS is an efficient strategy to simultaneously infer, in a collaborative manner, multiple parameter vectors. Existing works on multitask problems assume that all agents respond to data synchronously. In several applications, agents may not be able to act synchronously because networks can be subject to several sources of uncertainty, such as changing topology, random link failures, or agents turning on and off for energy conservation. In this paper, we describe a model for the solution of multitask problems over asynchronous networks and carry out a detailed mean and mean-square-error analysis. Results show that sufficiently small step-sizes can still ensure both stability and performance. Simulations and illustrative examples are provided to verify the theoretical findings.
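The asynchronous setting described above can be sketched numerically. The snippet below is an illustrative adapt-then-combine diffusion LMS simulation, not the paper's exact model: the ring topology, Bernoulli agent activation, single common task, step-size, and noise levels are all assumptions made purely for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 4                      # number of agents, parameter dimension
w_true = rng.standard_normal(M)   # common target vector (single-task, for simplicity)
mu = 0.01                         # step-size (small, per the stability result)
p_active = 0.8                    # probability an agent updates at a given instant

# Combination matrix over a ring topology; columns sum to one.
A = np.zeros((N, N))
for k in range(N):
    for l in (k, (k - 1) % N, (k + 1) % N):
        A[l, k] = 1.0
A /= A.sum(axis=0, keepdims=True)

W = np.zeros((N, M))              # current estimates, one row per agent
for _ in range(2000):
    psi = W.copy()
    for k in range(N):
        if rng.random() < p_active:               # asynchronous: agent may be off
            x = rng.standard_normal(M)
            d = x @ w_true + 0.1 * rng.standard_normal()
            psi[k] = W[k] + mu * x * (d - x @ W[k])   # adapt step (LMS)
    W = A.T @ psi                                  # combine step over neighbors

msd = np.mean(np.sum((W - w_true) ** 2, axis=1))   # network mean-square deviation
```

Even with agents randomly inactive, the averaged recursion converges and the network MSD settles at a small value, consistent with the stability claim for small step-sizes.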


IEEE Transactions on Signal Processing | 2016

Proximal Multitask Learning Over Networks With Sparsity-Inducing Coregularization

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

In this work, we consider multitask learning problems where each cluster of nodes is interested in estimating its own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by nondifferentiable terms to promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced. Then, it is specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square-error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
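The paper's closed-form proximal expression is not reproduced here; as a generic reminder of the forward-backward building block, the proximal operator of an entrywise weighted ℓ1-norm reduces to soft-thresholding. All numerical values below are arbitrary illustrations.

```python
import numpy as np

def prox_weighted_l1(v, tau):
    # Proximal operator of w -> sum_i tau_i * |w_i|:
    # entrywise soft-thresholding with per-entry thresholds tau_i >= 0.
    tau = np.asarray(tau, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Forward (gradient) step produces an intermediate estimate psi;
# the backward (proximal) step then sparsifies it.
psi = np.array([1.5, -0.2, 0.7, -3.0])   # hypothetical intermediate estimate
w = prox_weighted_l1(psi, np.full(4, 1.0))
# entries below the threshold are zeroed; larger ones are shrunk toward zero
```

This is why such regularizers promote sparsity: the backward step sets small entries exactly to zero rather than merely shrinking them.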


International Conference on Acoustics, Speech, and Signal Processing | 2016

Diffusion LMS over multitask networks with noisy links

Roula Nassif; Cédric Richard; Jie Chen; André Ferrari; Ali H. Sayed

Diffusion LMS is an efficient strategy for solving distributed optimization problems with cooperating agents. In some applications, the optimum parameter vectors may not be the same for all agents. Moreover, agents usually exchange information through noisy communication links. In this work, we analyze the theoretical performance of the single-task diffusion LMS when it is run, intentionally or unintentionally, in a multitask environment in the presence of noisy links. To reduce the impact of these nuisance factors, we introduce an improved strategy that allows the agents to promote or reduce exchanges of information with their neighbors.
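To make the nuisance factor concrete, the toy simulation below shows how additive noise on the communication links enters the combine step of diffusion LMS. The fully connected network, uniform combination weights, and noise levels are assumptions of this sketch, and the paper's improved exchange-adaptation strategy is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 3
w_true = rng.standard_normal(M)
mu, sigma_link = 0.02, 0.2        # step-size and link-noise std (illustrative values)

A = np.full((N, N), 1.0 / N)      # fully connected network, uniform weights
W = np.zeros((N, M))
for _ in range(3000):
    psi = np.empty_like(W)
    for k in range(N):
        x = rng.standard_normal(M)
        d = x @ w_true + 0.1 * rng.standard_normal()
        psi[k] = W[k] + mu * x * (d - x @ W[k])           # adapt step
    # Combine step: each agent receives its neighbors' intermediate
    # estimates corrupted by independent additive link noise.
    W = np.stack([A[:, k] @ (psi + sigma_link * rng.standard_normal((N, M)))
                  for k in range(N)])

msd_noisy = np.mean(np.sum((W - w_true) ** 2, axis=1))
```

The link noise does not destabilize the recursion for small step-sizes, but it leaves a steady-state error floor that cooperation alone cannot remove, which motivates adapting how much agents exchange.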


IEEE Transactions on Signal Processing | 2017

Diffusion LMS for Multitask Problems With Local Linear Equality Constraints

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

We consider distributed multitask learning problems over a network of agents where each agent is interested in estimating its own parameter vector, also called task, and where the tasks at neighboring agents are related according to a set of linear equality constraints. Each agent possesses a convex cost function of its own parameter vector and a set of linear equality constraints involving its parameter vector and those of its neighboring agents. We propose an adaptive stochastic algorithm based on the projection gradient method and diffusion strategies in order to allow the network to optimize the individual costs subject to all constraints. Although the derivation is carried out for linear equality constraints, the technique can be applied to other forms of convex constraints. We conduct a detailed mean-square-error analysis of the proposed algorithm and derive closed-form expressions to predict its learning behavior. We provide simulations to illustrate the theoretical findings. Finally, the algorithm is employed for solving two problems in a distributed manner: a minimum-cost flow problem over a network and a space-time-varying field reconstruction problem.
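One concrete ingredient of projection-gradient methods is the Euclidean projection onto an affine constraint set {x : Dx = b}, which has the closed form x - Dᵀ(DDᵀ)⁻¹(Dx - b). The sketch below illustrates only this building block, not the paper's full diffusion algorithm; the constraint matrix and vector are made up for the example.

```python
import numpy as np

def project_affine(w, D, b):
    # Euclidean projection of w onto {x : D x = b}; D assumed full row rank.
    return w - D.T @ np.linalg.solve(D @ D.T, D @ w - b)

# Hypothetical local constraint: the two entries of w must sum to one.
D = np.array([[1.0, 1.0]])
b = np.array([1.0])

w = np.array([2.0, 2.0])           # unconstrained estimate, e.g. after a gradient step
w_proj = project_affine(w, D, b)   # -> array([0.5, 0.5])
```

In a projected-gradient iteration, each stochastic gradient step is followed by such a projection, so the iterates remain feasible while descending the cost.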


International Conference on Acoustics, Speech, and Signal Processing | 2015

Multitask diffusion LMS with sparsity-based regularization

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

In this work, a diffusion-type algorithm is proposed to solve multitask estimation problems where each cluster of nodes is interested in estimating its own optimum parameter vector in a distributed manner. The approach relies on minimizing a global mean-square error criterion regularized by a term that promotes piecewise constant transitions in the parameter vector entries estimated by neighboring clusters. We provide some results on the mean and mean-square-error convergence. Simulations are conducted to illustrate the effectiveness of the strategy.


Asilomar Conference on Signals, Systems and Computers | 2014

Performance analysis of multitask diffusion adaptation over asynchronous networks

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

The multitask diffusion LMS algorithm is an efficient strategy to address distributed estimation problems that are multitask-oriented in the sense that the optimum parameter vector may not be the same for every cluster of nodes. In this work, we explore the adaptation and learning behavior of the algorithm under asynchronous conditions when networks are subject to various sources of uncertainties, including random link failures and agents turning on and off randomly. We conduct a mean-square-error performance analysis and examine how asynchronous events interfere with the learning performance.


Asilomar Conference on Signals, Systems and Computers | 2016

Distributed learning over multitask networks with linearly related tasks

Roula Nassif; Cédric Richard; André Ferrari; Ali H. Sayed

In this work, we consider distributed adaptive learning over multitask mean-square-error (MSE) networks where each agent is interested in estimating its own parameter vector, also called task, and where the tasks at neighboring agents are related according to a set of linear equality constraints. We assume that each agent knows its own cost function and the set of constraints involving its parameter vector. In order to solve the multitask problem and to optimize the individual costs subject to all constraints, a projection-based diffusion LMS approach is derived and studied. Simulation results illustrate the efficiency of the strategy.


International Workshop on Signal Processing Advances in Wireless Communications | 2018

Distributed Inference Over Multitask Graphs Under Smoothness

Roula Nassif; Stefan Vlaski; Ali H. Sayed


Asilomar Conference on Signals, Systems and Computers | 2017

A graph diffusion LMS strategy for adaptive graph signal processing

Roula Nassif; Cédric Richard; Jie Chen; Ali H. Sayed


International Conference on Acoustics, Speech, and Signal Processing | 2018

Distributed Diffusion Adaptation Over Graph Signals

Roula Nassif; Cédric Richard; Jie Chen; Ali H. Sayed

Collaboration


Dive into Roula Nassif's collaborations.

Top Co-Authors

Ali H. Sayed | École Polytechnique Fédérale de Lausanne
Cédric Richard | University of Nice Sophia Antipolis
André Ferrari | University of Nice Sophia Antipolis
Stefan Vlaski | University of California
Jie Chen | Northwestern Polytechnical University
Fei Hua | Northwestern Polytechnical University
Haiyan Wang | Northwestern Polytechnical University
Jianguo Huang | Northwestern Polytechnical University
Hermina Petric Maretic | École Polytechnique Fédérale de Lausanne
Pascal Frossard | École Polytechnique Fédérale de Lausanne