Bicheng Ying
University of California, Los Angeles
Publications
Featured research published by Bicheng Ying.
IEEE Transactions on Information Theory | 2016
Bicheng Ying; Ali H. Sayed
This paper examines the learning mechanism of adaptive agents over weakly connected graphs and reveals an interesting behavior in how information flows through such topologies. The results clarify how asymmetries in the exchange of data can mask local information at certain agents and make them totally dependent on other agents. A leader-follower relationship develops, with the performance of some agents being fully determined by the performance of other agents that are outside their domain of influence. This scenario can arise, for example, due to intruder attacks by malicious agents or as the result of failures of some critical links. The findings in this paper help explain why strong-connectivity of the network topology, adaptation of the combination weights, and clustering of agents are important ingredients to equalize the learning abilities of all agents against such disturbances. The results also clarify how weak-connectivity can be helpful in reducing the effect of outlier data on learning performance.
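As an illustrative sketch (not from the paper) of the adapt-then-combine diffusion recursion that such adaptive agents run, the toy example below uses an assumed left-stochastic combination matrix and least-mean-squares costs; zero weights (missing links) are how a weakly connected topology would be encoded:

```python
import numpy as np

# Minimal adapt-then-combine (ATC) diffusion LMS sketch.
# Each agent k runs: psi_k = w_k - mu * grad_k(w_k); then w_k = sum_l A[l, k] * psi_l.
# The combination matrix A is left-stochastic (columns sum to one); zeros in A
# encode missing links, which is how a weakly connected topology would be modeled.

rng = np.random.default_rng(0)
N, M = 4, 2               # number of agents, parameter dimension
w_true = np.array([1.0, -0.5])
mu = 0.01                 # step-size (illustrative)

A = np.array([[0.6, 0.2, 0.0, 0.0],
              [0.4, 0.6, 0.3, 0.0],
              [0.0, 0.2, 0.4, 0.5],
              [0.0, 0.0, 0.3, 0.5]])
assert np.allclose(A.sum(axis=0), 1.0)

W = np.zeros((N, M))      # one estimate per agent
for i in range(2000):
    # Adaptation step: each agent uses one streaming regression sample.
    psi = np.zeros_like(W)
    for k in range(N):
        u = rng.standard_normal(M)                   # regressor
        d = u @ w_true + 0.1 * rng.standard_normal() # noisy measurement
        grad = -(d - u @ W[k]) * u                   # stochastic gradient of (d - u w)^2 / 2
        psi[k] = W[k] - mu * grad
    # Combination step: convex mixing of neighbors' intermediate estimates.
    W = A.T @ psi

print("per-agent error norms:", np.linalg.norm(W - w_true, axis=1))
```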
IEEE Transactions on Signal and Information Processing over Networks | 2017
Hawraa Salami; Bicheng Ying; Ali H. Sayed
In this paper, we study diffusion social learning over weakly connected graphs. We show that the asymmetric flow of information hinders the learning abilities of certain agents regardless of their local observations. Under some circumstances that we clarify in this paper, a scenario of total influence (or “mind-control”) arises where a set of influential agents ends up shaping the beliefs of noninfluential agents. We derive useful closed-form expressions that characterize this influence, and which can be used to motivate design problems to control it. We provide simulation examples to illustrate the results.
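The sketch below is a minimal illustration of the kind of diffusion social learning update assumed here (a local Bayesian update followed by a convex combination of neighbors' intermediate beliefs); the likelihoods, graph, and combination weights are made-up assumptions, and the paper's exact update rule may differ in detail:

```python
import numpy as np

# Sketch of diffusion social learning over a finite set of hypotheses.
# Step 1 (local Bayesian update): psi_k ∝ belief_k * likelihood_k(observation).
# Step 2 (combination):           belief_k = sum_l A[l, k] * psi_l.
# Agents with no incoming links from the rest of the network (a weakly
# connected setting) would never receive outside information in step 2.

rng = np.random.default_rng(1)
H = 3                      # number of hypotheses
N = 4                      # number of agents
true_hypothesis = 0

# Illustrative per-agent likelihoods of a binary observation under each hypothesis.
lik = rng.uniform(0.2, 0.8, size=(N, H))

A = np.array([[0.7, 0.2, 0.0, 0.0],
              [0.3, 0.6, 0.3, 0.0],
              [0.0, 0.2, 0.4, 0.5],
              [0.0, 0.0, 0.3, 0.5]])      # left-stochastic combination weights

beliefs = np.full((N, H), 1.0 / H)
for i in range(200):
    psi = np.empty_like(beliefs)
    for k in range(N):
        obs = rng.random() < lik[k, true_hypothesis]    # observation under the true model
        like = lik[k] if obs else 1.0 - lik[k]          # likelihood of obs under each hypothesis
        psi[k] = beliefs[k] * like
        psi[k] /= psi[k].sum()                          # local Bayesian update
    beliefs = A.T @ psi                                 # diffusion combination

print("belief of each agent in the true hypothesis:", beliefs[:, true_hypothesis])
```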
International Conference on Acoustics, Speech, and Signal Processing | 2016
Bicheng Ying; Ali H. Sayed
This work examines the performance of stochastic sub-gradient learning strategies, for both cases of stand-alone and networked agents, under weaker conditions than usually considered in the literature. It is shown that these conditions are automatically satisfied by several important cases of interest, including support-vector machines and sparsity-inducing learning solutions. The analysis establishes that sub-gradient strategies can attain exponential convergence rates, as opposed to sub-linear rates, and that they can approach the optimal solution to within O(μ), for sufficiently small step-sizes μ. A realizable exponential-weighting procedure is proposed to smooth the intermediate iterates and to guarantee these desirable performance properties.
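A minimal sketch of the type of strategy analyzed: a stochastic sub-gradient recursion for a regularized hinge-loss (SVM) risk, together with one plausible form of exponential weighting of the iterates; the data, step-size, and weighting factor below are assumptions, and the paper's realizable weighting procedure may differ:

```python
import numpy as np

# Stochastic sub-gradient descent on a regularized hinge loss (soft-margin SVM),
# with an exponentially weighted running average of the iterates.

rng = np.random.default_rng(2)
n, d = 500, 5
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star + 0.1 * rng.standard_normal(n))

mu = 0.01        # small constant step-size
rho = 0.1        # l2-regularization
beta = 0.99      # exponential-weighting factor for iterate smoothing (assumed form)

w = np.zeros(d)
w_bar = np.zeros(d)
for i in range(5000):
    j = rng.integers(n)
    margin = y[j] * (X[j] @ w)
    # Sub-gradient of rho/2 ||w||^2 + max(0, 1 - y x^T w).
    g = rho * w - (y[j] * X[j] if margin < 1 else 0.0)
    w = w - mu * g
    # Exponentially weighted average favoring recent iterates.
    w_bar = beta * w_bar + (1 - beta) * w

print("training accuracy of smoothed iterate:", np.mean(np.sign(X @ w_bar) == y))
```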
IEEE Transactions on Signal Processing | 2018
Chengcheng Wang; Yonggang Zhang; Bicheng Ying; Ali H. Sayed
This paper examines the mean-square error performance of diffusion stochastic algorithms under a generalized coordinate-descent scheme. In this setting, the adaptation step by each agent is limited to a random subset of the coordinates of its stochastic gradient vector. The selection of coordinates varies randomly from iteration to iteration and from agent to agent across the network. Such schemes are useful in reducing computational complexity at each iteration in power-intensive large data applications. They are also useful in modeling situations where some partial gradient information may be missing at random. Interestingly, the results show that the steady-state performance of the learning strategy is not always degraded, while the convergence rate suffers some degradation. The results provide yet another indication of the resilience and robustness of adaptive distributed strategies.
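The following toy sketch (not the authors' code) illustrates such a partial-update scheme: each agent applies its stochastic-gradient step only to a randomly selected subset of coordinates before combining with its neighbors; the graph, data model, and update probability are assumptions:

```python
import numpy as np

# Diffusion LMS with a randomized coordinate-descent adaptation step:
# at every iteration each agent updates only a random subset of the entries
# of its estimate, then combines intermediate estimates with its neighbors.

rng = np.random.default_rng(3)
N, M = 4, 6
w_true = rng.standard_normal(M)
mu = 0.02
p_update = 0.5     # probability that a given coordinate is updated (assumed)

A = np.array([[0.5, 0.3, 0.0, 0.2],
              [0.5, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.3, 0.5]])   # left-stochastic combination matrix

W = np.zeros((N, M))
for i in range(5000):
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.1 * rng.standard_normal()
        grad = -(d - u @ W[k]) * u
        mask = rng.random(M) < p_update          # coordinates selected this iteration
        psi[k] = W[k] - mu * mask * grad         # partial (coordinate-wise) update
    W = A.T @ psi

print("per-agent error norms:", np.linalg.norm(W - w_true, axis=1))
```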
European Signal Processing Conference | 2017
Kun Yuan; Bicheng Ying; Xiaochuan Zhao; Ali H. Sayed
This work develops a distributed optimization algorithm with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown to have a wider stability range and superior convergence performance than the EXTRA consensus strategy. The exact diffusion solution is also applicable to non-symmetric left-stochastic combination matrices, while most earlier developments on exact consensus implementations are limited to doubly-stochastic matrices or right-stochastic matrices; these latter policies impose stringent constraints on the network topology. Stability and convergence results are noted, along with numerical simulations to illustrate the conclusions.
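A hedged sketch of the adapt/correct/combine structure of exact diffusion, written for a simple distributed least-squares problem; the symmetric combination matrix, local costs, and step-size are illustrative choices rather than the paper's setup:

```python
import numpy as np

# Sketch of the exact diffusion strategy (adapt / correct / combine) on a
# distributed least-squares problem with heterogeneous local costs
# J_k(w) = 0.5 ||H_k w - b_k||^2. A symmetric doubly-stochastic combination
# matrix is used here for simplicity, although exact diffusion also covers
# left-stochastic policies.

rng = np.random.default_rng(4)
N, M = 4, 3
H = [rng.standard_normal((10, M)) for _ in range(N)]
b = [rng.standard_normal(10) for _ in range(N)]
grad = lambda k, w: H[k].T @ (H[k] @ w - b[k])

# Global minimizer of sum_k J_k, for reference.
w_opt = np.linalg.solve(sum(Hk.T @ Hk for Hk in H),
                        sum(Hk.T @ bk for Hk, bk in zip(H, b)))

A = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
A_bar = 0.5 * (A + np.eye(N))      # combination policy (A + I)/2 used by exact diffusion
mu = 0.02

W = np.zeros((N, M))
psi_prev = W.copy()
for i in range(3000):
    psi = np.stack([W[k] - mu * grad(k, W[k]) for k in range(N)])  # adapt
    phi = psi + W - psi_prev                                       # correct
    W = A_bar.T @ phi                                              # combine
    psi_prev = psi

print("max deviation from global minimizer:", np.abs(W - w_opt).max())
```

The correction step is what distinguishes this recursion from standard diffusion and is what allows convergence to the exact minimizer rather than to a small bias neighborhood.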
International Conference on Acoustics, Speech, and Signal Processing | 2015
Bicheng Ying; Ali H. Sayed
In this paper, we examine the learning mechanism of adaptive agents over weakly-connected graphs and reveal an interesting behavior in how information flows through such topologies. The results clarify how asymmetries in the exchange of data can mask local information at certain agents and make them totally dependent on other agents. A leader-follower relationship develops, with the performance of some agents being fully determined by other agents that can even be outside their immediate domain of influence. This scenario can arise, for example, from intruder attacks by malicious agents or from failures of some critical links. The findings in this work help explain why strong-connectivity of the network topology, adaptation of the combination weights, and clustering of agents are important ingredients to equalize the learning abilities of all agents against such disturbances. The results also clarify how weak-connectivity can be helpful in reducing the effect of outlier data on learning performance.
International Workshop on Machine Learning for Signal Processing | 2016
Kun Yuan; Bicheng Ying; Stefan Vlaski; Ali H. Sayed
The minimization of empirical risks over finite sample sizes is an important problem in large-scale machine learning. A variety of algorithms has been proposed in the literature to alleviate the computational burden per iteration at the expense of convergence speed and accuracy. Many of these approaches can be interpreted as stochastic gradient descent algorithms, where data is sampled from particular empirical distributions. In this work, we leverage this interpretation and draw from recent results in the field of online adaptation to derive new tight performance expressions for empirical implementations of stochastic gradient descent, mini-batch gradient descent, and importance sampling. The expressions are exact to first order in the step-size parameter and are tighter than existing bounds. We further quantify the performance gained from employing mini-batch solutions, and propose an optimal importance sampling algorithm to optimize performance.
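As a rough illustration (not the paper's algorithm), the sketch below contrasts uniform sampling with importance sampling in stochastic gradient descent over an empirical least-squares risk, using sampling probabilities proportional to squared row norms; the paper derives its own optimal sampling distribution, which may differ:

```python
import numpy as np

# Contrast uniform sampling with (non-uniform) importance sampling when running
# stochastic gradient descent over an empirical least-squares risk. Probabilities
# proportional to per-sample gradient Lipschitz constants (here ||x_n||^2) are one
# common choice; the gradient is reweighted by 1/(n p_j) to remain unbiased.

rng = np.random.default_rng(5)
n, d = 1000, 10
X = rng.standard_normal((n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))  # heterogeneous rows
w_star = rng.standard_normal(d)
y = X @ w_star + 0.05 * rng.standard_normal(n)

def sgd(probs, mu=0.001, iters=20000):
    w = np.zeros(d)
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        # Unbiased gradient estimate of the empirical risk.
        g = (X[j] @ w - y[j]) * X[j] / (n * probs[j])
        w -= mu * g
    return np.linalg.norm(w - w_star)

uniform = np.full(n, 1.0 / n)
importance = np.linalg.norm(X, axis=1) ** 2
importance /= importance.sum()

print("uniform sampling error:   ", sgd(uniform))
print("importance sampling error:", sgd(importance))
```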
Signal Processing | 2018
Bicheng Ying; Ali H. Sayed
The analysis in Part I [1] revealed interesting properties for subgradient learning algorithms in the context of stochastic optimization. These algorithms are used when the risk functions are non-smooth or involve non-differentiable components. They have long been recognized as slow-converging methods. However, it was revealed in Part I [1] that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate α^i to within an O(μ)-neighborhood of the optimizer, for some α ∈ (0, 1) and small step-size μ. The conclusion was established under weaker assumptions than the prior literature and, moreover, several important problems were shown to satisfy these weaker assumptions automatically. These results revealed that sub-gradient learning methods have more favorable behavior than originally thought. The results of Part I [1] were exclusive to single-agent adaptation. The purpose of the current Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as their cooperative mechanism. The analysis will show that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within O(μ) of the optimizer.
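A rough sketch of the cooperative setting studied in this Part II: each agent runs a stochastic sub-gradient step on a local hinge-loss data stream and then combines intermediate estimates with its neighbors (adapt-then-combine diffusion); the network, data, and step-size below are illustrative assumptions:

```python
import numpy as np

# Networked stochastic sub-gradient learning: each agent adapts with a hinge-loss
# sub-gradient computed from its own streaming data, then combines intermediate
# estimates with its neighbors (adapt-then-combine diffusion).

rng = np.random.default_rng(6)
N, d = 4, 5
w_star = rng.standard_normal(d)
mu, rho = 0.01, 0.1

A = np.array([[0.6, 0.2, 0.0, 0.0],
              [0.4, 0.5, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.4],
              [0.0, 0.0, 0.3, 0.6]])   # left-stochastic combination weights

W = np.zeros((N, d))
for i in range(10000):
    psi = np.empty_like(W)
    for k in range(N):
        x = rng.standard_normal(d)
        y = np.sign(x @ w_star + 0.1 * rng.standard_normal())
        margin = y * (x @ W[k])
        g = rho * W[k] - (y * x if margin < 1 else 0.0)   # hinge-loss sub-gradient
        psi[k] = W[k] - mu * g
    W = A.T @ psi

# Agreement: all agents should hold nearly identical estimates.
print("max disagreement between agents:", np.abs(W - W.mean(axis=0)).max())
```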
International Conference on Acoustics, Speech, and Signal Processing | 2017
Bicheng Ying; Ali H. Sayed
Using duality arguments from optimization theory, this work develops an effective distributed gradient boosting strategy for inference and classification by networked clusters of learners. By sharing local dual variables with their immediate neighbors through a diffusion learning protocol, the clusters are able to match the performance of centralized boosting solutions even when the individual clusters only have access to partial information about the feature space.
European Signal Processing Conference | 2017
Chengcheng Wang; Yonggang Zhang; Bicheng Ying; Ali H. Sayed
This work examines the mean-square error performance of diffusion stochastic algorithms under a generalized coordinate-descent scheme. In this setting, the adaptation step by each agent is limited to a random subset of the coordinates of its stochastic gradient vector. The selection of which coordinates to use varies randomly from iteration to iteration and from agent to agent across the network. Such schemes are useful in reducing computational complexity in power-intensive large data applications. The results show that the steady-state performance of the learning strategy is not affected, while the convergence rate suffers some degradation. The results provide yet another indication of the resilience and robustness of adaptive distributed strategies.