Xiaobo Shen
Nanyang Technological University
Publications
Featured research published by Xiaobo Shen.
ACM Transactions on Intelligent Systems and Technology | 2018
Xiaobo Shen; Fumin Shen; Li Liu; Yun-Hao Yuan; Weiwei Liu; Quansen Sun
Hashing techniques have recently gained increasing research interest in multimedia studies. Most existing hashing methods only employ single features for hash code learning. Multiview data, with each view corresponding to a type of feature, generally provides more comprehensive information. How to efficiently integrate multiple views for learning compact hash codes still remains challenging. In this article, we propose a novel unsupervised hashing method, dubbed multiview discrete hashing (MvDH), by effectively exploring multiview data. Specifically, MvDH performs matrix factorization to generate the hash codes as the latent representations shared by multiple views, during which spectral clustering is performed simultaneously. The joint learning of hash codes and cluster labels enables MvDH to generate more discriminative hash codes, which are optimal for classification. An efficient alternating algorithm is developed to solve the proposed optimization problem with guaranteed convergence and low computational complexity. The binary codes are optimized via the discrete cyclic coordinate descent (DCC) method to reduce the quantization errors. Extensive experimental results on three large-scale benchmark datasets demonstrate the superiority of the proposed method over several state-of-the-art methods in terms of both accuracy and scalability.
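As context for the quantization-error point in the abstract, here is a hedged single-view sketch of the naive baseline that discrete optimization improves upon: factorize the features and sign-threshold the continuous latent representation. MvDH itself learns the latent factors jointly over multiple views with spectral clustering and optimizes the binary codes directly via DCC; all names below are illustrative, not the paper's code.

```python
import numpy as np

def sign_threshold_codes(X, n_bits):
    """Baseline hashing: factorize centered features via SVD, then take
    the sign of the low-rank latent representation.

    Naive sign-thresholding like this incurs exactly the quantization
    error that discrete optimization (e.g., DCC in MvDH) is meant to
    reduce. Illustrative sketch only, not the paper's method.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    latent = U[:, :n_bits] * S[:n_bits]          # continuous latent factors
    return np.where(latent >= 0, 1, -1)          # quantize to {-1, +1}

# toy usage: 100 samples, 16-dim single-view features, 8-bit codes
X = np.random.default_rng(1).normal(size=(100, 16))
codes = sign_threshold_codes(X, n_bits=8)
```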
IEEE Transactions on Neural Networks | 2018
Xiaobo Shen; Weiwei Liu; Ivor W. Tsang; Quan-Sen Sun; Yew-Soon Ong
Embedding methods have shown promising performance in multilabel prediction, as they are able to discover the label dependence. However, most methods ignore the correlations between the input and output, such that their learned embeddings are not well aligned, which leads to degradation in prediction performance. This paper presents a formulation for multilabel learning, from the perspective of cross-view learning, that explores the correlations between the input and the output. The proposed method, called Co-Embedding (CoE), jointly learns a semantic common subspace and view-specific mappings within one framework. The semantic similarity structure among the embeddings is further preserved, ensuring that close embeddings share similar labels. Additionally, CoE conducts multilabel prediction through the cross-view k-nearest-neighbor (kNN) search among the learned embeddings, which significantly reduces computational costs compared with conventional decoding schemes. A hashing-based model, i.e., Co-Hashing (CoH), is further proposed. CoH is based on CoE, and imposes the binary constraint on continuous latent embeddings. CoH aims to generate compact binary representations to improve the prediction efficiency by benefiting from the efficient kNN search of multiple labels in the Hamming space. Extensive experiments on various real-world data sets demonstrate the superiority of the proposed methods over state-of-the-art methods in terms of both prediction accuracy and efficiency.
International Joint Conference on Artificial Intelligence | 2018
Xiaobo Shen; Shirui Pan; Weiwei Liu; Yew-Soon Ong; Quan-Sen Sun
Journal of Visual Communication and Image Representation | 2018
Xiaobo Shen; Yun-Hao Yuan; Fumin Shen; Yang Xu; Quansen Sun
Conference on Information and Knowledge Management | 2017
Jing Chai; Weiwei Liu; Ivor W. Tsang; Xiaobo Shen
arXiv: Machine Learning | 2018
Haitao Liu; Yew-Soon Ong; Xiaobo Shen; Jianfei Cai
Neural Information Processing Systems | 2017
Weiwei Liu; Xiaobo Shen; Ivor W. Tsang
IEEE Transactions on Image Processing | 2019
Weiwei Liu; Xiaobo Shen; Bo Du; Ivor W. Tsang; Wenjie Zhang; Xuemin Lin
National Conference on Artificial Intelligence | 2018
Xiaobo Shen; Weiwei Liu; Ivor W. Tsang; Quan-Sen Sun; Yew-Soon Ong
Network embedding aims to seek low-dimensional vector representations for network nodes by preserving the network structure. Network embeddings are typically represented as continuous vectors, which imposes formidable challenges in storage and computation costs, particularly in large-scale applications. To address the issue, this paper proposes a novel discrete network embedding (DNE) for more compact representations. In particular, DNE learns short binary codes to represent each node. The Hamming similarity between two binary embeddings is then employed to well approximate the ground-truth similarity. A novel discrete multi-class classifier is also developed to expedite classification. Moreover, we propose to jointly learn the discrete embedding and classifier within a unified framework to improve the compactness and discrimination of network embedding. Extensive experiments on node classification consistently demonstrate that DNE exhibits lower storage and computational complexity than state-of-the-art network embedding methods, while obtaining competitive classification results.
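Both CoH above and DNE here rely on fast comparison of binary codes in the Hamming space. A minimal generic sketch (not the papers' implementations; all names are illustrative): bit-packed codes make a 64-bit node embedding occupy 8 bytes, versus 512 bytes for a 128-dim float32 vector, and the distance reduces to an XOR followed by a popcount.

```python
import numpy as np

def hamming_knn(query_code, db_codes, k):
    """Indices of the k stored codes nearest to the query in Hamming
    distance.

    Codes are bit-packed uint8 arrays, so the distance is a bytewise
    XOR followed by a popcount. Generic sketch of Hamming-space kNN
    search, not the authors' CoH or DNE code.
    """
    xor = np.bitwise_xor(db_codes, query_code)       # differing bits, bytewise
    dists = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per stored code
    return np.argsort(dists, kind="stable")[:k]

# toy usage: 64-bit codes for 5 nodes, packed into 8 bytes each
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(5, 64), dtype=np.uint8)
db = np.packbits(bits, axis=1)
query = np.packbits(bits[2])                         # query equals node 2
nearest = hamming_knn(query, db, k=3)
```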
international joint conference on artificial intelligence | 2018
Xiaobo Shen; Weiwei Liu; Yong Luo; Yew-Soon Ong; Ivor W. Tsang
Multi-view data, with each view corresponding to a type of feature, generally provides more comprehensive information. Learning from multi-view data is a challenging research topic in pattern recognition. For the recognition task, most multi-view learning methods separately learn multi-view dimensionality reduction (MvDR) and classification models. Thus, the connection between the two models has not been well studied. In this paper, we propose a novel multi-view dimensionality reduction and recognition framework, which can establish the connection between MvDR and classification. Specifically, a multi-view dimensionality reduction method, termed sparse representation regularized multiset canonical correlation analysis (SR2MCC), is first proposed. SR2MCC considers both correlation and sparse discrimination among multiple views. In accord with SR2MCC, a classifier, called multi-view sparse representation based classifier (MvSRC), is further developed. MvSRC performs classification by comparing the reconstruction residuals of different classes among all views. An efficient iterative algorithm is proposed to solve the proposed model. Extensive experiments on the AR, CMU PIE, FERET, and FRGC datasets demonstrate that the proposed framework achieves recognition performance superior to several state-of-the-art methods.
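The residual-comparison rule that MvSRC uses for classification can be sketched in a few lines. This is a hedged single-view illustration only: it substitutes ordinary least squares for the sparse coding step and ignores the multi-view aggregation, so it shows the decision rule, not the paper's full method; all names are illustrative.

```python
import numpy as np

def residual_classify(x, dictionaries):
    """Assign x to the class whose training samples reconstruct it with
    the smallest residual.

    `dictionaries` maps class label -> (d, n_c) matrix whose columns
    are that class's training samples. Ordinary least squares stands in
    for the sparse coding step of MvSRC; single-view sketch only.
    """
    residuals = {}
    for label, D in dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)   # class-wise coding
        residuals[label] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get)           # smallest residual wins

# toy usage: two classes spanning orthogonal directions
D0 = np.array([[1.0], [0.0]])             # class 0: along the x-axis
D1 = np.array([[0.0], [1.0]])             # class 1: along the y-axis
pred = residual_classify(np.array([0.95, 0.05]), {0: D0, 1: D1})
```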