2021 Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS) | 2021

Federation learning optimization using distillation


Abstract


Federated learning is a special form of distributed machine learning that enables a large number of edge computing devices to train a model collaboratively without sharing any private data. In this decentralized training scheme, the data never leaves the local device, which provides a guarantee of privacy and security. However, federated learning faces two heterogeneity challenges: 1) models differ across devices; 2) real-world data are not independent and identically distributed. Both cause traditional federated learning algorithms to perform poorly. To address these problems, a distributed training method based on knowledge distillation is proposed. A personalized model is introduced on each device and used to improve the performance of the global model on that device, thereby strengthening the global model. This improvement relies on knowledge distillation, which transfers "dark knowledge" between heterogeneous networks to guide the global model. Experimental results show that the method significantly improves the accuracy of classification tasks while meeting the needs of heterogeneous users.
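The abstract does not give the training objective, so the following is a minimal sketch, in PyTorch-style Python, of how a client-side distillation loss of this kind is commonly formulated. The function name distillation_loss and the hyperparameters T and alpha are illustrative assumptions, not taken from the paper: the personalized local model plays the teacher, the global model plays the student, and a softened KL term transfers "dark knowledge" alongside the usual cross-entropy on the device's private data.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Hypothetical client-side objective (not the paper's exact formulation):
        # hard-label cross-entropy on local data plus a softened KL term that
        # distills knowledge from the personalized (teacher) model into the
        # global (student) model.
        ce = F.cross_entropy(student_logits, labels)
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),   # student distribution at temperature T
            F.softmax(teacher_logits / T, dim=1),        # teacher distribution at temperature T
            reduction="batchmean",
        ) * (T * T)                                      # standard T^2 scaling of the distillation term
        return alpha * ce + (1.0 - alpha) * kd

In a federated round under these assumptions, each client would compute this loss on its private batches, update the global model locally, and send only the updated global-model weights back to the server; the personalized teacher stays on the device.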

Pages 25-28
DOI 10.1109/ACCTCS52002.2021.00013
Language English
Journal 2021 Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS)
