IEEE Transactions on Neural Networks and Learning Systems | 2021

Convergence Analysis of Single Latent Factor-Dependent, Nonnegative, and Multiplicative Update-Based Nonnegative Latent Factor Models

Abstract

A single latent factor (LF)-dependent, nonnegative, and multiplicative update (SLF-NMU) learning algorithm is highly efficient in building a nonnegative LF (NLF) model defined on a high-dimensional and sparse (HiDS) matrix. However, the convergence characteristics of such NLF models have never been justified in theory. To address this issue, this study conducts a rigorous convergence analysis of an SLF-NMU-based NLF model. The main idea is twofold: 1) proving that its learning objective remains nonincreasing under its SLF-NMU-based learning rules, by constructing specific auxiliary functions; and 2) proving that it converges to a stable equilibrium point under its SLF-NMU-based learning rules, by analyzing the Karush–Kuhn–Tucker (KKT) conditions of its learning objective. Experimental results on ten HiDS matrices from real applications provide numerical evidence that corroborates the correctness of the proofs.
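
Although the abstract does not restate the SLF-NMU learning rules themselves, a minimal Python sketch of the standard single-LF-dependent multiplicative update, as commonly formulated for NLF models trained on the known entries of an HiDS matrix, may help fix the objects under analysis. The names R_known, P, Q, and eps, as well as the alternating update order, are illustrative assumptions rather than the paper's notation.

    import numpy as np

    def slf_nmu_pass(R_known, P, Q, eps=1e-12):
        """One alternating SLF-NMU pass over the known entries of an HiDS matrix.

        R_known: dict mapping (u, i) -> r_ui over the known entries only.
        P, Q: nonnegative latent factor matrices of shapes (m, f) and (n, f).
        """
        # Update P with Q fixed: accumulate a data term (numerator) and a
        # model term (denominator) over the known entries, then rescale.
        num, den = np.zeros_like(P), np.zeros_like(P)
        for (u, i), r_ui in R_known.items():
            r_hat = P[u] @ Q[i]              # current estimate of r_ui
            num[u] += Q[i] * r_ui
            den[u] += Q[i] * r_hat
        P *= np.where(den > 0, num / (den + eps), 1.0)

        # Update Q symmetrically, with the freshly updated P held fixed.
        num, den = np.zeros_like(Q), np.zeros_like(Q)
        for (u, i), r_ui in R_known.items():
            r_hat = P[u] @ Q[i]
            num[i] += P[u] * r_ui
            den[i] += P[u] * r_hat
        Q *= np.where(den > 0, num / (den + eps), 1.0)
        return P, Q

Because every latent factor is rescaled by a ratio of nonnegative accumulators, nonnegativity is preserved without any explicit projection step, which is consistent with the nonincreasing-objective and KKT-based analysis described in the abstract.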

Volume 32
Pages 1737-1749
DOI 10.1109/TNNLS.2020.2990990
Language English
Journal IEEE Transactions on Neural Networks and Learning Systems

Full Text