IEEE Network | 2019

Secure Distributed On-Device Learning Networks with Byzantine Adversaries


Abstract


Privacy concerns arise when a central server holds copies of users' datasets. This has driven a paradigm shift in learning networks from centralized in-cloud learning to distributed on-device learning. Benefiting from parallel computing, on-device learning networks require less bandwidth than in-cloud learning networks, and they offer further desirable properties such as privacy preservation and flexibility. However, on-device learning networks are vulnerable to malfunctioning terminals across the network. The worst-case malfunctioning terminals are Byzantine adversaries, which can perform arbitrary harmful operations to compromise the learned model based on full knowledge of the network. The design of secure learning algorithms has therefore become an emerging topic for on-device learning networks with Byzantine adversaries. In this article, we present a comprehensive overview of the prevalent secure learning algorithms for two promising classes of on-device learning networks: federated-learning networks and decentralized-learning networks. We also discuss several future research directions for federated-learning and decentralized-learning networks.
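To make the threat concrete, a minimal sketch (not from the article; function names and gradient values are illustrative) of why a naive mean aggregation of worker gradients fails under a single Byzantine worker, while a classic robust rule such as coordinate-wise median tolerates it:

```python
from statistics import median

def mean_aggregate(grads):
    # Naive rule: coordinate-wise average of worker gradients.
    # A single Byzantine worker can pull this arbitrarily far off.
    return [sum(coord) / len(coord) for coord in zip(*grads)]

def median_aggregate(grads):
    # Robust rule: coordinate-wise median, which ignores extreme
    # values as long as honest workers form a majority.
    return [median(coord) for coord in zip(*grads)]

# Three honest workers report similar gradients; one Byzantine
# worker reports an arbitrary, adversarial vector.
honest = [[0.10, -0.20], [0.12, -0.18], [0.09, -0.21]]
byzantine = [[100.0, 100.0]]
grads = honest + byzantine

print(mean_aggregate(grads))    # heavily skewed by the adversary
print(median_aggregate(grads))  # stays close to the honest gradients
```

Here the mean is dragged far from the honest consensus by one corrupted report, whereas the coordinate-wise median remains near the honest values; robust aggregation rules of this kind underlie many of the secure learning algorithms surveyed in this article.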

Volume 33
Pages 180-187
DOI 10.1109/MNET.2019.1900025
Language English
Journal IEEE Network

Full Text