IEEE Transactions on Neural Networks and Learning Systems | 2019

A Fused CP Factorization Method for Incomplete Tensors


Abstract


Low-rank tensor completion methods have recently been advanced for modeling sparsely observed data with a multimode structure. However, low-rank priors alone may fail to capture the model factors of general tensor objects. The most common remedy is to combine the low-rank priors with additional regularizations. However, owing to the complex nature and diverse characteristics of real-world multiway data, using only one or a few regularizations remains far from sufficient, and systematic experimental reports on the advantages of these regularizations for tensor completion are limited. To fill these gaps, we propose a modified CP tensor factorization framework that simultaneously fuses the $l_{2}$ norm constraint, sparseness (the $l_{1}$ norm), manifold, and smoothness information. The factorization problem is solved through a combination of Nesterov's optimal gradient descent method and block coordinate descent. Specifically, we construct a smooth approximation to the $l_{1}$ norm and TV norm regularizations, and each tensor factor is then updated by the projected gradient method, with the step size determined by the Lipschitz constant. Extensive experiments on simulated data, visual data completion, intelligent transportation systems, and GPS data of user involvement confirm the efficiency of our method. Moreover, the results reveal the characteristics of these commonly used regularizations for tensor completion and provide experimental guidance on how to use them.
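To illustrate the optimization ingredient mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of a Huber-type smooth approximation to the $l_{1}$ norm with smoothing parameter $\mu$, whose gradient is Lipschitz with constant $1/\mu$, together with a single gradient step whose step size is set by the Lipschitz constant. The function names, the data-fit gradient `grad_f`, and all parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def smoothed_l1(x, mu=0.01):
    """Huber-type smoothing of ||x||_1: quadratic near zero, linear outside."""
    ax = np.abs(x)
    return np.where(ax <= mu, x**2 / (2 * mu), ax - mu / 2).sum()

def smoothed_l1_grad(x, mu=0.01):
    """Gradient of the smoothed l1 norm: clip(x / mu, -1, 1)."""
    return np.clip(x / mu, -1.0, 1.0)

def gradient_step(x, grad_f, lam=0.1, mu=0.01, L_f=1.0):
    """One step on f(x) + lam * ||x||_1 (smoothed), step size 1 / L.

    L_f is the Lipschitz constant of grad_f; the smoothed l1 term adds
    lam / mu, so the overall Lipschitz constant is L_f + lam / mu.
    """
    L = L_f + lam / mu
    g = grad_f(x) + lam * smoothed_l1_grad(x, mu)
    return x - g / L
```

In the fused CP framework described above, a step of this form would be applied block-wise to each factor matrix within a block coordinate descent loop, with Nesterov's extrapolation accelerating the inner updates.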

Volume 30
Pages 751-764
DOI 10.1109/TNNLS.2018.2851612
Language English
Journal IEEE Transactions on Neural Networks and Learning Systems
