IEEE Transactions on Signal Processing | 2021

Randomized Neural Networks Based Decentralized Multi-Task Learning via Hybrid Multi-Block ADMM


Abstract


In multi-task learning (MTL), related tasks are learned jointly to improve generalization performance. To exploit the high learning speed of feed-forward neural networks (FNNs), we apply the randomized single-hidden-layer FNN (RSF) to the MTL problem, where the output weights of the RSFs for all tasks are learned collaboratively. We first present the RSF-based MTL problem in the centralized setting, which is solved by the proposed MTL-RSF algorithm. Because the data sets of different tasks are often geo-distributed, we then study decentralized machine learning. We formulate the decentralized RSF-based MTL problem as a majorized multi-block optimization with coupled bi-convex objective functions. To solve it, we propose the DMTL-RSF algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM). Further, to reduce the computational load of DMTL-RSF, we present DMTL-RSF with first-order approximation (FO-DMTL-RSF). Theoretical analysis shows that convergence of the proposed decentralized algorithms to a stationary point can be guaranteed under certain conditions. Through simulations, we demonstrate the convergence of the presented algorithms and show that they can outperform existing MTL methods. Moreover, by adjusting the dimension of the hidden feature space, DMTL-RSF admits a trade-off between communication load and learning accuracy.

Volume 69
Pages 2844-2857
DOI 10.1109/TSP.2021.3078625
Language English
Journal IEEE Transactions on Signal Processing
