
Publication


Featured research published by Shijie Xiao.


IEEE Transactions on Neural Networks and Learning Systems | 2016

A Unified Framework for Representation-Based Subspace Clustering of Out-of-Sample and Large-Scale Data

Xi Peng; Huajin Tang; Lei Zhang; Zhang Yi; Shijie Xiao

Under the framework of spectral clustering, the key to subspace clustering is building a similarity graph that describes the neighborhood relations among data points. Some recent works build the graph using sparse, low-rank, and ℓ2-norm-based representations and have achieved state-of-the-art performance. However, these methods suffer from two limitations. First, their time complexities are at least proportional to the cube of the data size, which makes them inefficient for large-scale problems. Second, they cannot cope with out-of-sample data that were not used to construct the similarity graph; to cluster each out-of-sample datum, these methods have to recalculate the similarity graph and the cluster membership of the whole data set. In this paper, we propose a unified framework that makes representation-based subspace clustering algorithms feasible for both out-of-sample and large-scale data. Under our framework, the large-scale problem is tackled by converting it into an out-of-sample problem through sampling, clustering, coding, and classifying. Furthermore, we give an estimate of the error bounds by treating each subspace as a point in a hyperspace. Extensive experimental results on various benchmark data sets show that our methods outperform several recently proposed scalable methods in clustering large-scale data sets.
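The sampling, clustering, coding, and classifying steps above can be illustrated with a minimal sketch. This is not the authors' algorithm: as assumptions, the in-sample graph is built with a simple ridge-regression (ℓ2-norm) representation, and each out-of-sample point is assigned to the cluster that reconstructs it with the smallest residual.

```python
# Minimal sketch of a sample -> cluster -> code -> classify pipeline (assumed details).
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_in_sample(X_in, n_clusters, lam=0.1):
    """Build an l2-representation graph over the sampled (in-sample) data and
    cluster it with spectral clustering. X_in has shape (n_features, n_in)."""
    n = X_in.shape[1]
    G = X_in.T @ X_in
    Z = np.linalg.solve(G + lam * np.eye(n), G)   # each column codes one sample
    np.fill_diagonal(Z, 0.0)
    A = 0.5 * (np.abs(Z) + np.abs(Z).T)           # symmetric similarity graph
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(A)

def classify_out_of_sample(X_in, labels_in, X_out):
    """Code each out-of-sample point over every in-sample cluster and pick the
    cluster with the smallest reconstruction residual."""
    clusters = np.unique(labels_in)
    preds = []
    for x in X_out.T:
        residuals = []
        for c in clusters:
            D = X_in[:, labels_in == c]
            z, *_ = np.linalg.lstsq(D, x, rcond=None)
            residuals.append(np.linalg.norm(x - D @ z))
        preds.append(clusters[np.argmin(residuals)])
    return np.array(preds)
```

Labels for the sampled subset come from `cluster_in_sample`, and the remaining (or newly arriving) points are handled by `classify_out_of_sample` without rebuilding the similarity graph.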


European Conference on Computer Vision | 2014

Weighted Block-Sparse Low Rank Representation for Face Clustering in Videos

Shijie Xiao; Mingkui Tan; Dong Xu

In this paper, we study the problem of face clustering in videos. Specifically, given automatically extracted faces from videos and two kinds of prior knowledge (the face track that each face belongs to, and the pairs of faces that appear in the same frame), the task is to partition the faces into a given number of disjoint groups, such that each group is associated with one subject. To deal with this problem, we propose a new method called weighted block-sparse low rank representation (WBSLRR) which considers the available prior knowledge while learning a low rank data representation, and also develop a simple but effective approach to obtain the clustering result of faces. Moreover, after using several acceleration techniques, our proposed method is suitable for solving large-scale problems. The experimental results on two benchmark datasets demonstrate the effectiveness of our approach.
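As a hedged illustration of how the two kinds of prior knowledge could be encoded, the sketch below turns face tracks into must-link constraints and same-frame pairs into cannot-link constraints; the specific weighting used inside WBSLRR is defined in the paper and is not reproduced here.

```python
# Encoding the two priors as a pairwise constraint matrix (illustrative only).
import numpy as np

def build_constraints(n_faces, tracks, same_frame_pairs):
    """tracks: list of lists of face indices belonging to the same face track.
       same_frame_pairs: list of (i, j) pairs of faces appearing in one frame."""
    W = np.zeros((n_faces, n_faces))          # +1: must-link, -1: cannot-link
    for track in tracks:
        for i in track:
            for j in track:
                if i != j:
                    W[i, j] = 1.0             # same track => same subject
    for i, j in same_frame_pairs:
        W[i, j] = W[j, i] = -1.0              # same frame => different subjects
    return W
```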


Computer Vision and Pattern Recognition | 2013

Learning by Associating Ambiguously Labeled Images

Zinan Zeng; Shijie Xiao; Kui Jia; Tsung-Han Chan; Shenghua Gao; Dong Xu; Yi Ma

We study in this paper the problem of learning classifiers from ambiguously labeled images. For instance, in a collection of news images, each image contains some samples of interest (e.g., human faces), and its associated caption provides a set of labels that includes the true ones, while the sample-label association is unknown. The task is to learn classifiers from these ambiguously labeled images and generalize to new images. An essential consideration here is how to make use of the information embedded in the relations between samples and labels, both within each image and across the image set. To this end, we propose a novel framework to address this problem. Our framework is motivated by the observation that samples from the same class appear repeatedly across the collection of ambiguously labeled training images, even though they are only ambiguously labeled within each image. If we can identify samples of the same class in each image and associate them across the image set, the matrix formed by the samples from the same class would ideally be low-rank. By leveraging this low-rank assumption, we simultaneously optimize a partial permutation matrix (PPM) for each image, which is formulated to exploit all information between samples and labels in a principled way. The obtained PPMs can readily be used to assign labels to samples in training images, and a standard SVM classifier can then be trained and applied to unseen data. Experiments on benchmark datasets show the effectiveness of our proposed method.
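The low-rank observation can be made concrete with a small sketch: for a candidate cross-image association, stacking the samples assigned to one class as columns and summing the singular values (the nuclear norm) gives a cost that is lower when the grouping is consistent. This only illustrates the assumption; it is not the paper's PPM optimization.

```python
# Scoring a candidate association with the nuclear norm (illustration of the
# low-rank assumption only).
import numpy as np

def nuclear_norm(M):
    """Sum of singular values of M."""
    return np.linalg.svd(M, compute_uv=False).sum()

def association_cost(features, groups):
    """features: (n_samples, dim) array; groups: list of index lists, one per
    hypothesized class. A lower cost suggests a more consistent association."""
    return sum(nuclear_norm(features[idx].T) for idx in groups)
```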


IEEE Transactions on Neural Networks and Learning Systems | 2016

Robust Kernel Low-Rank Representation

Shijie Xiao; Mingkui Tan; Dong Xu; Zhao Yang Dong

Recently, low-rank representation (LRR) has shown promising performance in many real-world applications such as face clustering. However, LRR may not achieve satisfactory results when dealing with data from nonlinear subspaces, since it was originally designed to handle data from linear subspaces in the input space. Kernel-based methods, by contrast, handle nonlinear data by mapping it from the original input space to a new feature space through a kernel-induced mapping. To effectively cope with nonlinear data, we first propose a kernelized version of LRR for the clean-data case and present a closed-form solution for the resultant optimization problem. Moreover, to handle corrupted data, we propose the robust kernel LRR (RKLRR) approach and develop an efficient optimization algorithm based on the alternating direction method to solve it. In particular, we show that both subproblems in our optimization algorithm can be solved efficiently and exactly, and that the algorithm is guaranteed to obtain a globally optimal solution. Besides, our algorithm can also solve the original LRR problem, which is a special case of RKLRR with the linear kernel. In addition, based on our new optimization technique, the kernelization of some variants of LRR can be achieved in a similar way. Comprehensive experiments on synthetic and real-world data sets clearly demonstrate the efficiency of our algorithm, as well as the effectiveness of RKLRR and the kernelization of two variants of LRR.
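For the clean-data case mentioned above, the minimizer of min ||Z||_* s.t. Φ(X) = Φ(X)Z is the shape-interaction matrix V Vᵀ, and V can be read off an eigendecomposition of the kernel matrix K = Φ(X)ᵀΦ(X), so the mapping Φ never has to be computed explicitly. A minimal sketch (the choice of kernel is an assumption; the linear kernel recovers the original clean-data LRR solution, as the abstract notes):

```python
# Closed-form clean-data kernel LRR sketch: Z* = V V^T from the kernel matrix.
import numpy as np
from sklearn.metrics.pairwise import pairwise_kernels

def kernel_lrr_clean(X, kernel="linear", tol=1e-8, **kw):
    """X: (n_samples, n_features). Returns Z of shape (n_samples, n_samples)."""
    K = pairwise_kernels(X, metric=kernel, **kw)   # K = Phi(X)^T Phi(X)
    eigvals, eigvecs = np.linalg.eigh(K)
    V = eigvecs[:, eigvals > tol * eigvals.max()]  # eigenvectors of nonzero eigenvalues
    return V @ V.T                                 # shape-interaction matrix
```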


Computer Vision and Pattern Recognition | 2015

FaLRR: A fast low rank representation solver

Shijie Xiao; Wen Li; Dong Xu; Dacheng Tao

Low-rank representation (LRR) has shown promising performance for various computer vision applications such as face clustering. Existing algorithms for solving LRR usually depend on its two-variable formulation, which contains the original data matrix. In this paper, we develop a fast LRR solver called FaLRR by reformulating LRR as a new optimization problem with respect to factorized data (obtained by the skinny SVD of the original data matrix). The new formulation benefits both the optimization and the theoretical analysis. Specifically, to solve the resultant optimization problem, we propose a new algorithm that is not only efficient but also theoretically guaranteed to obtain a globally optimal solution. Regarding the theoretical analysis, the new formulation is helpful for deriving some interesting properties of LRR. Last but not least, the proposed algorithm can be readily incorporated into an existing distributed framework of LRR for further acceleration. Extensive experiments on synthetic and real-world datasets demonstrate that FaLRR achieves order-of-magnitude speedups over existing LRR solvers, and that its efficiency can be further improved by incorporating our algorithm into the distributed framework.
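The reformulation starts from the skinny SVD of the data matrix; the sketch below shows that factorization together with the singular value thresholding (SVT) operator, the proximal operator of the nuclear norm that LRR-type solvers apply repeatedly. This is shared machinery rather than the FaLRR algorithm itself.

```python
# Skinny SVD factorization and the SVT operator used by nuclear-norm solvers.
import numpy as np

def skinny_svd(X, tol=1e-10):
    """Rank-r factors of X, keeping only the nonzero singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int((s > tol * s.max()).sum())
    return U[:, :r], s[:r], Vt[:r, :]

def svt(M, tau):
    """Proximal operator of tau * ||.||_*: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```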


IEEE Transactions on Neural Networks and Learning Systems | 2015

Automatic Face Naming by Learning Discriminative Affinity Matrices From Weakly Labeled Images

Shijie Xiao; Dong Xu; Jianxin Wu

Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.
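A hedged sketch of the fusion step described above: one affinity is derived from a reconstruction-coefficient matrix Z (as in LRR-style methods) by symmetrizing its magnitudes, the other is a similarity (kernel) matrix under the learned metric, and the two are combined. The simple convex combination below is an illustrative assumption, not necessarily the paper's fusion rule.

```python
# Building one affinity from reconstruction coefficients and fusing it with a
# metric-based affinity (illustrative fusion rule).
import numpy as np

def affinity_from_coefficients(Z):
    """Turn a coefficient matrix into a symmetric, nonnegative affinity."""
    A = 0.5 * (np.abs(Z) + np.abs(Z).T)
    np.fill_diagonal(A, 0.0)
    return A

def fuse_affinities(A_coef, A_metric, alpha=0.5):
    """Convex combination of the two discriminative affinity matrices."""
    return alpha * A_coef + (1.0 - alpha) * A_metric
```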


Computer Vision and Pattern Recognition | 2016

Proximal Riemannian Pursuit for Large-Scale Trace-Norm Minimization

Mingkui Tan; Shijie Xiao; Junbin Gao; Dong Xu; Anton van den Hengel; Qinfeng Shi

Trace-norm regularization plays an important role in many areas such as computer vision and machine learning. When solving general large-scale trace-norm regularized problems, existing methods can be computationally expensive because they require many high-dimensional truncated singular value decompositions (SVDs) or are unaware of the matrix rank. In this paper, we propose a proximal Riemannian pursuit (PRP) paradigm that addresses a sequence of trace-norm regularized subproblems defined on nonlinear matrix varieties. To address each subproblem, we extend the proximal gradient method from vector spaces to nonlinear matrix varieties, in which the SVDs of intermediate solutions are maintained by cheap low-rank QR decompositions, thereby making the proposed method more scalable. Empirical studies on several tasks, such as matrix completion and low-rank-representation-based subspace clustering, demonstrate the competitive performance of the proposed paradigm over existing methods.
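To make the cost being avoided concrete, the baseline below runs plain proximal gradient descent for trace-norm regularized matrix completion; every iteration performs a full SVD inside the proximal step, which is exactly the expense the Riemannian approach sidesteps. This is the generic baseline, not the PRP algorithm.

```python
# Proximal gradient baseline for min_X 0.5*||mask*(X - M)||_F^2 + lam*||X||_*.
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_* (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_matrix(M_obs, mask, lam=1.0, n_iters=200):
    """mask is a 0/1 array marking observed entries; a unit step size is valid
    because the smooth term has Lipschitz constant 1."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iters):
        grad = mask * (X - M_obs)
        X = svt(X - grad, lam)   # one full SVD per iteration: the bottleneck
    return X
```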


IEEE Transactions on Cognitive and Developmental Systems | 2018

Orthogonal Principal Coefficients Embedding for Unsupervised Subspace Learning

Xinxing Xu; Shijie Xiao; Zhang Yi; Xi Peng; Yong Liu

As a recently proposed method for subspace learning, the principal coefficients embedding (PCE) method can automatically determine the dimension of the feature space and robustly handle various corruptions in real-world applications. However, the projection matrix learned by PCE is not orthogonal, so the original data may be reconstructed improperly. To address this issue, we propose a new method termed orthogonal PCE (OPCE). OPCE can not only automatically determine the dimension of the feature space, but also consider the orthogonality of the projection matrix for better discriminating ability. Moreover, OPCE can be solved in closed form, making it computationally efficient. Extensive experimental results on multiple benchmark data sets demonstrate the effectiveness and computational efficiency of the proposed method.
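A hedged sketch of the orthogonality ingredient discussed above: the matrix with orthonormal columns closest (in Frobenius norm) to a given projection P has the closed form U Vᵀ from the SVD of P. This illustrates the standard closed-form orthogonalization, not the specific OPCE formulation.

```python
# Closest orthonormal-column matrix to a projection, via SVD (orthogonal Procrustes).
import numpy as np

def nearest_orthonormal(P):
    """P: (n_features, d) projection matrix. Returns Q with Q.T @ Q = I_d."""
    U, _, Vt = np.linalg.svd(P, full_matrices=False)
    return U @ Vt

def project(X, Q):
    """Embed samples X (n_samples, n_features) into the d-dimensional subspace."""
    return X @ Q
```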


International Joint Conference on Artificial Intelligence | 2016

Deep subspace clustering with sparsity prior

Xi Peng; Shijie Xiao; Jiashi Feng; Wei-Yun Yau; Zhang Yi


arXiv: Computer Vision and Pattern Recognition | 2017

Deep Sparse Subspace Clustering.

Xi Peng; Jiashi Feng; Shijie Xiao; Jiwen Lu; Zhang Yi; Shuicheng Yan

Collaboration


Dive into Shijie Xiao's collaborations.

Top Co-Authors

Dong Xu (University of Sydney)

Mingkui Tan (South China University of Technology)

Jiashi Feng (National University of Singapore)

Qinfeng Shi (University of Adelaide)

Joey Tianyi Zhou (Nanyang Technological University)