
Publication


Featured research published by Zaid J. Towfic.


IEEE Signal Processing Magazine | 2013

Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior

Ali H. Sayed; Sheng-Yuan Tu; Jianshu Chen; Xiaochuan Zhao; Zaid J. Towfic

Nature provides splendid examples of real-time learning and adaptation behavior that emerges from highly localized interactions among agents of limited capabilities. For example, schools of fish are remarkably apt at configuring their topologies almost instantly in the face of danger [1]: when a predator arrives, the entire school opens up to let the predator through and then coalesces again into a moving body to continue its schooling behavior. Likewise, in bee swarms, only a small fraction of the agents (about 5%) are informed, and these informed agents are able to guide the entire swarm of bees to their new hive [2]. It is an extraordinary property of biological networks that sophisticated behavior is able to emerge from simple interactions among lower-level agents [3].
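
The adapt-then-combine (ATC) diffusion recursion underlying this line of work alternates a local stochastic-gradient step with a weighted averaging of neighbors' intermediate estimates. Below is a minimal sketch in a least-mean-squares setting; the ring topology, data model, and step-size are illustrative assumptions, not taken from the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N agents estimate a common M-dimensional vector w_o
# from streaming regressors u_k(i) and noisy observations d_k(i).
N, M, mu, iters = 10, 5, 0.01, 2000
w_o = rng.standard_normal(M)

# Ring topology with self-loops; uniform combination weights (rows sum to one).
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1.0 / 3.0

w = np.zeros((N, M))          # current estimates, one row per agent
for i in range(iters):
    U = rng.standard_normal((N, M))                 # regressors u_k(i)
    d = U @ w_o + 0.1 * rng.standard_normal(N)      # noisy measurements d_k(i)

    # Adaptation step: local LMS update at each agent.
    psi = w + mu * (d - np.sum(U * w, axis=1))[:, None] * U

    # Combination step: each agent averages its neighbors' intermediate estimates.
    w = A @ psi

print("mean-square deviation:", np.mean(np.sum((w - w_o) ** 2, axis=1)))
```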


IEEE Transactions on Signal Processing | 2014

Adaptive Penalty-Based Distributed Stochastic Convex Optimization

Zaid J. Towfic; Ali H. Sayed

In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a fully distributed adaptive diffusion algorithm based on penalty methods that allows the network to cooperatively optimize the global cost function, which is defined as the sum of the individual costs over the network, subject to all constraints. We show that when small constant step-sizes are employed, the expected distance between the optimal solution vector and that obtained at each node in the network can be made arbitrarily small. Two distinguishing features of the proposed solution relative to other approaches are that the developed strategy does not require the use of projections and is able to track drifts in the location of the minimizer due to changes in the constraints or in the aggregate cost itself. The proposed strategy is able to cope with changing network topology, is robust to network disruptions, and does not require global information or rely on central processors.
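
A minimal sketch of the penalty idea under a toy model: each constraint is replaced by a smooth convex penalty whose gradient is added to the local stochastic gradient, followed by a diffusion combination step. The quadratic costs, single equality and inequality constraints, penalty weight, and network below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (illustrative): N agents share the quadratic costs J_k(w) = E|d_k - u_k^T w|^2,
# subject to one affine constraint c^T w = b and one convex inequality w[0] <= 0.5,
# both handled through smooth penalty functions.
N, M, mu, rho, iters = 8, 4, 0.002, 20.0, 5000
w_true = rng.standard_normal(M)
c, b = np.ones(M), 1.0

A = np.full((N, N), 1.0 / N)      # fully connected, uniform combination weights
w = np.zeros((N, M))

for i in range(iters):
    U = rng.standard_normal((N, M))
    d = U @ w_true + 0.1 * rng.standard_normal(N)

    # Stochastic gradient of the local cost plus the two penalty terms.
    grad_cost = -2 * (d - np.sum(U * w, axis=1))[:, None] * U
    grad_eq   = 2 * (w @ c - b)[:, None] * c              # penalty rho*(c^T w - b)^2
    grad_ineq = np.zeros_like(w)
    grad_ineq[:, 0] = 2 * np.maximum(w[:, 0] - 0.5, 0)    # penalty rho*max(0, w0 - 0.5)^2

    psi = w - mu * (grad_cost + rho * (grad_eq + grad_ineq))   # adapt
    w = A @ psi                                                # combine

w_avg = w.mean(axis=0)
print("c^T w =", c @ w_avg, " w[0] =", w_avg[0])
```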


IEEE Transactions on Signal Processing | 2015

Dictionary Learning Over Distributed Models

Jianshu Chen; Zaid J. Towfic; Ali H. Sayed

In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements. This formulation is relevant in Big Data scenarios where large dictionary models may be spread over different spatial locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. We first show that the dual function of the inference problem is an aggregation of individual cost functions associated with different agents, which can then be minimized efficiently by means of diffusion strategies. The collaborative inference step generates dual variables that are used by the agents to update their dictionaries without the need to share these dictionaries or even the coefficient models for the training data. This is a powerful property that leads to an effective distributed procedure for learning dictionaries over large networks (e.g., hundreds of agents in our experiments). Furthermore, the proposed learning strategy operates in an online manner and is able to respond to streaming data, where each data sample is presented to the network once.
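
A simplified sketch of the dual-decomposition idea, assuming a ridge (L2) regularizer so that the per-agent coefficient updates have closed form; the paper's actual regularizer, combination weights, and update rules may differ. The point illustrated is that the agents exchange only a dual variable, never their sub-dictionaries or their coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified setup (assumptions, not the paper's exact model): K agents each hold a
# sub-dictionary W_k; for a sample x, the inference problem
#   min_y 0.5*||x - sum_k W_k y_k||^2 + (delta/2)*||y||^2
# has a dual that splits into per-agent terms in the dual variable nu, so the agents
# can maximize it cooperatively by diffusion while sharing only nu.
K, m, q = 5, 20, 4            # agents, data dimension, columns per sub-dictionary
delta, mu_nu, eta = 0.5, 0.1, 0.01
A = np.full((K, K), 1.0 / K)                 # uniform combination weights

W = [rng.standard_normal((m, q)) / np.sqrt(m) for _ in range(K)]

for _ in range(200):                         # streaming samples
    x = rng.standard_normal(m)

    # Collaborative inference: diffusion gradient ascent on the dual variable nu.
    nu = np.zeros((K, m))                    # each agent's local copy of nu
    for _ in range(30):
        grad = np.stack([(x - nu[k]) / K - W[k] @ (W[k].T @ nu[k]) / delta
                         for k in range(K)])
        nu = A @ (nu + mu_nu * grad)         # adapt, then combine

    # Local dictionary updates: agent k recovers its own coefficients from nu
    # and takes a gradient step on its sub-dictionary only.
    for k in range(K):
        y_k = W[k].T @ nu[k] / delta
        W[k] += eta * np.outer(nu[k], y_k)
        W[k] /= np.maximum(np.linalg.norm(W[k], axis=0), 1.0)   # keep columns bounded

print("column norms of W_1:", np.round(np.linalg.norm(W[0], axis=0), 2))
```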


International Workshop on Machine Learning for Signal Processing | 2011

Collaborative learning of mixture models using diffusion adaptation

Zaid J. Towfic; Jianshu Chen; Ali H. Sayed

In large ad-hoc networks, classification tasks such as spam filtering, multi-camera surveillance, and advertising have been traditionally implemented in a centralized manner by means of fusion centers. These centers receive and process the information that is collected from across the network. In this paper, we develop a decentralized adaptive strategy for information processing and apply it to the task of estimating the parameters of a Gaussian mixture model (GMM). The proposed technique employs adaptive diffusion algorithms that enable adaptation, learning, and cooperation at local levels. The simulation results illustrate how the proposed technique outperforms non-collaborative learning and is competitive with centralized solutions.
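
A minimal sketch of one way such a scheme can look, assuming a two-component scalar GMM with known unit variances and equal mixing weights: each agent computes responsibilities for its own sample, takes a stochastic-gradient step on its local log-likelihood, and then combines the intermediate estimates with its neighbors. The topology, step-size, and model are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sketch: N agents observe streaming scalar data from the same
# two-component GMM and cooperate via diffusion to estimate the component means.
N, iters, mu = 12, 5000, 0.01
true_means = np.array([-2.0, 3.0])
A = np.zeros((N, N))
for k in range(N):                                   # ring topology with self-loops
    A[k, k] = A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1.0 / 3.0

means = np.tile(np.array([-1.0, 1.0]), (N, 1))       # each agent's current estimates

for _ in range(iters):
    comp = rng.integers(0, 2, size=N)
    x = true_means[comp] + rng.standard_normal(N)    # one sample per agent

    # E-step-like responsibilities under each agent's current means.
    d2 = (x[:, None] - means) ** 2
    r = np.exp(-0.5 * d2)
    r /= r.sum(axis=1, keepdims=True)

    # Adaptation: stochastic-gradient step on the local log-likelihood.
    psi = means + mu * r * (x[:, None] - means)

    # Combination: diffuse the intermediate estimates over the network.
    means = A @ psi

print("estimated means (agent 0):", np.round(means[0], 2))
```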


IEEE Transactions on Signal Processing | 2015

Stability and Performance Limits of Adaptive Primal-Dual Networks

Zaid J. Towfic; Ali H. Sayed

This paper studies distributed primal-dual strategies for adaptation and learning over networks from streaming data. Two first-order methods are considered based on the Arrow-Hurwicz (AH) and augmented Lagrangian (AL) techniques. Several revealing results are discovered in relation to the performance and stability of these strategies when employed over adaptive networks. The conclusions establish that the advantages that these methods exhibit for deterministic optimization problems do not necessarily carry over to stochastic optimization problems. It is found that they have narrower stability ranges and worse steady-state mean-square-error performance than primal methods of the consensus and diffusion type. It is also found that the AH technique can become unstable under a partial observation model, while the other techniques are able to recover the unknown under this scenario. A method to enhance the performance of AL strategies is proposed by tying the selection of the step-size to their regularization parameter. It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies.
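
A stacked simulation contrasting the two first-order recursions on a toy agreement-constrained problem: Arrow-Hurwicz alternates a primal stochastic-gradient step with a dual ascent step, while the augmented-Lagrangian variant adds a quadratic term on the constraint to the primal gradient. The ring constraint matrix, step-sizes, and costs below are assumptions for illustration and do not reproduce the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# N agents estimate a common vector w_o from streaming data while agreement across
# the network is enforced through the linear constraint B*w = 0, where w stacks all
# local estimates and B takes differences along a ring.
N, M, mu, rho, iters = 6, 3, 0.01, 1.0, 5000
w_o = rng.standard_normal(M)

# B has one block-row per ring edge, encoding (w_k - w_{k+1}).
I_M = np.eye(M)
B = np.zeros((N * M, N * M))
for k in range(N):
    nxt = (k + 1) % N
    B[k * M:(k + 1) * M, k * M:(k + 1) * M] = I_M
    B[k * M:(k + 1) * M, nxt * M:(nxt + 1) * M] = -I_M

def run(augmented):
    w = np.zeros(N * M)            # stacked primal variables
    lam = np.zeros(N * M)          # dual variables, one block per constraint
    for _ in range(iters):
        U = rng.standard_normal((N, M))
        d = U @ w_o + 0.1 * rng.standard_normal(N)
        e = d - np.sum(U * w.reshape(N, M), axis=1)
        grad = (-2 * e[:, None] * U).reshape(-1)     # stochastic gradients, stacked
        if augmented:
            grad = grad + rho * (B.T @ (B @ w))      # augmented-Lagrangian term
        w = w - mu * (grad + B.T @ lam)              # primal descent step
        lam = lam + mu * (B @ w)                     # dual ascent step
    return np.mean((w.reshape(N, M) - w_o) ** 2)

print("AH mean-square deviation:", run(False))
print("AL mean-square deviation:", run(True))
```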


Neurocomputing | 2013

On distributed online classification in the midst of concept drifts

Zaid J. Towfic; Jianshu Chen; Ali H. Sayed

In this work, we analyze the generalization ability of distributed online learning algorithms under stationary and non-stationary environments. We derive bounds for the excess-risk attained by each node in a connected network of learners and study the performance advantage that diffusion has over individual non-cooperative processing. We conduct extensive simulations to illustrate the results.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Online dictionary learning over distributed models

Jianshu Chen; Zaid J. Towfic; Ali H. Sayed

In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements. This formulation is relevant in big data scenarios where multiple large dictionary models may be spread over different spatial locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. We first show that the dual function of the inference problem is an aggregation of individual cost functions associated with different agents, which can then be minimized efficiently by means of diffusion strategies. The collaborative inference step generates local error measures that are used by the agents to update their dictionaries without the need to share these dictionaries or even the coefficient models for the training data. This is a useful property that leads to an efficient distributed procedure for learning dictionaries over large networks.


International Symposium on Circuits and Systems | 2010

Sampling clock jitter estimation and compensation in ADC circuits

Zaid J. Towfic; Shang-Kee Ting; Ali H. Sayed

Clock timing jitter refers to random perturbations in the sampling time in analog-to-digital converters (ADCs). The perturbations are caused by circuit imperfections in the sampling clock. This paper analyzes the effect of sampling clock jitter on the acquired samples. The paper proposes two methods to estimate the jitter for superheterodyne receiver architectures and cognitive radio architectures at high sampling rates. The paper also proposes a method to compensate for the jitter. The methods are tested and validated via computer simulations and theoretical analysis.
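
A small numerical illustration of why jitter matters at high rates (not the paper's estimator): to first order, x(t_n + d_n) is approximately x(t_n) + d_n * x'(t_n), so the jitter-induced error power, and hence the achievable SNR, degrades as the input frequency grows. The tone frequency, sampling rate, and RMS jitter below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample a sinusoid with an ideal clock and with a jittered clock, and measure
# the error the jitter introduces.
fs = 100e6                      # nominal sampling rate (assumed)
f0 = 20e6                       # input tone frequency (assumed)
sigma_j = 2e-12                 # RMS clock jitter in seconds (assumed)
n = np.arange(4096)

t_ideal = n / fs
t_jitter = t_ideal + sigma_j * rng.standard_normal(n.size)

x_ideal = np.sin(2 * np.pi * f0 * t_ideal)
x_jittered = np.sin(2 * np.pi * f0 * t_jitter)

err = x_jittered - x_ideal
snr_db = 10 * np.log10(np.mean(x_ideal ** 2) / np.mean(err ** 2))
# First-order prediction: jitter-limited SNR ~ -20*log10(2*pi*f0*sigma_j).
pred_db = -20 * np.log10(2 * np.pi * f0 * sigma_j)
print(f"measured jitter-limited SNR: {snr_db:.1f} dB (first-order prediction {pred_db:.1f} dB)")
```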


International Workshop on Machine Learning for Signal Processing | 2012

On the generalization ability of distributed online learners

Zaid J. Towfic; Jianshu Chen; Ali H. Sayed

We propose a fully distributed stochastic-gradient strategy based on diffusion adaptation techniques. We show that, for strongly convex risk functions, the excess-risk at every node decays at the rate of O(1/(Ni)), where N is the number of learners and i is the iteration index. In this way, the distributed diffusion strategy, which relies only on local interactions, is able to achieve the same convergence rate as centralized strategies that have access to all data from the nodes at every iteration. We also show that every learner is able to improve its excess-risk in comparison to the non-cooperative mode of operation where each learner would operate independently of the other learners.
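
A minimal sketch of the comparison, assuming a strongly convex quadratic risk, a fully connected network with uniform weights, and an O(1/i) step-size; none of these choices are taken from the paper. It contrasts the diffusion estimates against a single learner that never cooperates.

```python
import numpy as np

rng = np.random.default_rng(6)

# N learners run adapt-then-combine diffusion stochastic-gradient descent with a
# decaying step-size on the common risk E|d - u^T w|^2; a non-cooperative learner
# sees only one of the N data streams.
N, M, iters = 10, 5, 20000
w_o = rng.standard_normal(M)
A = np.full((N, N), 1.0 / N)                  # uniform combination weights

w_coop = np.zeros((N, M))
w_solo = np.zeros(M)

for i in range(1, iters + 1):
    mu = 1.0 / (i + 20)                       # O(1/i) step-size (offset keeps early steps small)
    U = rng.standard_normal((N, M))
    d = U @ w_o + 0.5 * rng.standard_normal(N)

    # Diffusion (adapt-then-combine) across the N learners.
    psi = w_coop + mu * (d - np.sum(U * w_coop, axis=1))[:, None] * U
    w_coop = A @ psi

    # Non-cooperative learner processes only agent 0's stream.
    w_solo = w_solo + mu * (d[0] - U[0] @ w_solo) * U[0]

msd_coop = np.mean(np.sum((w_coop - w_o) ** 2, axis=1))
msd_solo = np.sum((w_solo - w_o) ** 2)
print(f"cooperative MSD: {msd_coop:.2e}   non-cooperative MSD: {msd_solo:.2e}")
```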


IEEE Transactions on Signal Processing | 2012

Clock Jitter Compensation in High-Rate ADC Circuits

Zaid J. Towfic; Shang-Kee Ting; Ali H. Sayed

Clock timing jitter refers to random perturbations in the sampling time in analog-to-digital converters (ADCs). The perturbations are caused by circuit imperfections in the sampling clock. This paper analyzes the effect of sampling clock jitter on the acquired samples in the midst of random noise. We propose low-complexity digital signal processing methods for estimating the jitter in real-time for direct downconversion receivers at high sampling rates. We also propose adaptive compensation methods for the jitter and analyze the performance of the proposed techniques in some detail as well as through simulations.

Collaboration


Dive into Zaid J. Towfic's collaborations.

Top Co-Authors

Ali H. Sayed, École Polytechnique Fédérale de Lausanne
Shang-Kee Ting, University of California
Sheng-Yuan Tu, University of California
Xiaochuan Zhao, University of California