IEEE Access | 2021

A Multi-View Fusion Method via Tensor Learning and Gradient Descent for Image Features


Abstract


In many computer vision applications, one image can be represented by multiple heterogeneous features from different views, most of which lie in high-dimensional spaces. These features reflect different characteristics of the same object and contain both compatible and complementary information. How to construct a unified low-dimensional embedding that captures the useful information of multi-view features remains an important and urgent problem. Therefore, we propose a multi-view fusion method via tensor learning and gradient descent (MvF-TG) in this paper. MvF-TG reconstructs a low-dimensional mapping subspace for each object from its k nearest neighbors, which preserves the underlying neighborhood structure of the original local manifold. The new method effectively exploits the spatial correlation information among the multi-view features through tensor learning. Furthermore, it constructs a gradient descent optimization model to generate a better unified low-dimensional embedding. The proposed method is compared with several single-view and multi-view dimensionality reduction methods in terms of precision (P), recall (R), mean average precision (MAP), and F-measure. In the retrieval experiments, the P values of the new method are 86.80%, 52.00%, 68.56%, and 78.80% on the Corel1k, Corel5k, Corel10k, and Holidays datasets, respectively. In the classification experiments, its mean accuracies are 47.94% and 87.58% on the Caltech101 and Coil datasets, respectively. These values are higher than those obtained by the comparison methods, and evaluations on image classification and retrieval tasks demonstrate the effectiveness of the proposed method for multi-view feature fusion and dimensionality reduction.
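
The abstract outlines a pipeline of neighborhood-preserving reconstruction followed by gradient-descent refinement of a unified embedding. The minimal Python sketch below illustrates that general idea under our own assumptions; it is not the authors' algorithm and omits the tensor-learning component. It computes LLE-style reconstruction weights from each sample's k nearest neighbors in every view, then runs projected gradient descent on a shared low-dimensional embedding that minimizes the averaged reconstruction error. All names and parameters (lle_weights, fuse_views, n_neighbors, lr) are hypothetical.

    import numpy as np

    def lle_weights(X, n_neighbors=10, reg=1e-3):
        # LLE-style weights: reconstruct each sample from its k nearest neighbors.
        n = X.shape[0]
        W = np.zeros((n, n))
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared distances
        np.fill_diagonal(d2, np.inf)
        for i in range(n):
            idx = np.argsort(d2[i])[:n_neighbors]            # k nearest neighbors of x_i
            Z = X[idx] - X[i]                                # neighbors centered on x_i
            G = Z @ Z.T                                      # local Gram matrix
            G += reg * np.trace(G) * np.eye(n_neighbors)     # regularize for stability
            w = np.linalg.solve(G, np.ones(n_neighbors))
            W[i, idx] = w / w.sum()                          # weights sum to one
        return W

    def fuse_views(views, dim=10, n_neighbors=10, lr=0.05, n_iter=300, seed=0):
        # Projected gradient descent on a shared embedding Y that respects the
        # neighborhood reconstruction weights of every view.
        n = views[0].shape[0]
        Ms = [np.eye(n) - lle_weights(V, n_neighbors) for V in views]
        rng = np.random.default_rng(seed)
        Y = rng.normal(size=(n, dim))
        for _ in range(n_iter):
            grad = sum(2.0 * (M.T @ M) @ Y for M in Ms) / len(Ms)  # grad of mean ||(I - W)Y||_F^2
            Y -= lr * grad
            Y -= Y.mean(axis=0)                              # remove translation
            Q, _ = np.linalg.qr(Y)                           # keep Y from collapsing to zero
            Y = Q * np.sqrt(n)
        return Y

    # toy usage: two random "views" of 100 objects with different dimensionalities
    views = [np.random.rand(100, 64), np.random.rand(100, 128)]
    Y = fuse_views(views, dim=10)
    print(Y.shape)  # (100, 10)

The QR step plays the role of the usual scale constraint in neighborhood-preserving embeddings, preventing the trivial all-zero solution while the gradient updates balance the reconstruction errors of all views.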

Volume 9
Pages 79389-79399
DOI 10.1109/ACCESS.2021.3079499
Language English
Journal IEEE Access
