Boyue Wang
Beijing University of Technology
Publication
Featured research published by Boyue Wang.
asian conference on computer vision | 2014
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin
Low-rank representation (LRR) has recently attracted great interest due to its effectiveness in exploring low-dimensional subspace structures embedded in data. One of its successful applications is subspace clustering, in which data are clustered according to the subspaces they belong to. In this paper, at a higher level, we intend to cluster subspaces into classes of subspaces. This is naturally described as a clustering problem on the Grassmann manifold. The novelty of this paper is to generalize the LRR model from Euclidean space to the Grassmann manifold. The new method has many applications in computer vision tasks. The paper conducts experiments on two real-world examples, clustering handwritten digits and clustering dynamic textures, and the experiments show that the proposed method outperforms a number of existing methods.
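The basic construction behind this line of work, mapping a data matrix (e.g., an image set) to a point on the Grassmann manifold and measuring distances between such points, can be sketched in a few lines of numpy. This is a minimal illustration under common conventions, not the paper's code; the function names and the choice of the projection metric are assumptions.

```python
import numpy as np

def grassmann_point(X, p):
    """Represent a data matrix X (d x n) as a point on the Grassmann
    manifold G(p, d): the span of its top-p left singular vectors."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :p]  # (d, p) orthonormal basis of the subspace

def projection_distance(U1, U2):
    """Projection-embedding distance between two Grassmann points,
    d(U1, U2) = ||U1 U1^T - U2 U2^T||_F / sqrt(2)."""
    P1 = U1 @ U1.T
    P2 = U2 @ U2.T
    return np.linalg.norm(P1 - P2, "fro") / np.sqrt(2)
```

With this embedding, a set of image sets becomes a set of Grassmann points, and a manifold-aware model such as the Grassmann LRR described above can be built on the pairwise geometry.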
IEEE Transactions on Circuits and Systems for Video Technology | 2017
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin
In multicamera video surveillance, it is challenging to represent videos from different cameras properly and fuse them efficiently for specific applications such as human activity recognition and clustering. In this paper, a novel representation for multicamera video data, namely the product Grassmann manifold (PGM), is proposed to model video sequences as points on the Grassmann manifold and integrate them as a whole in product manifold form. In addition, with a new geometric metric on the product manifold, the conventional low rank representation (LRR) model is extended onto the PGM, and the new LRR model can be used for clustering nonlinear data such as multicamera video data. To evaluate the proposed method, a number of clustering experiments are conducted on several multicamera video datasets of human activity, including the Dongzhimen Transport Hub Crowd action dataset, the ACT 42 Human Action dataset, and the SKIG action dataset. The experimental results show that the proposed method outperforms many state-of-the-art clustering methods.
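One natural way to realise the product-manifold idea is to embed each camera view as its own Grassmann point and combine the per-view distances. The sketch below is an illustrative assumption about the construction (the paper defines its own metric on the PGM); the simple sum-of-squares combination and all names are ours.

```python
import numpy as np

def view_to_grassmann(frames, p):
    """frames: (d, n) matrix whose columns are vectorised frames from one
    camera; returns an orthonormal basis of the top-p subspace."""
    U, _, _ = np.linalg.svd(frames, full_matrices=False)
    return U[:, :p]

def product_grassmann_distance(views_a, views_b, p):
    """Distance on a product Grassmann manifold: each sequence is a tuple of
    per-camera Grassmann points; combine the per-camera projection distances
    d_i = ||Ua Ua^T - Ub Ub^T||_F / sqrt(2) in Euclidean-product fashion."""
    total = 0.0
    for Xa, Xb in zip(views_a, views_b):
        Ua, Ub = view_to_grassmann(Xa, p), view_to_grassmann(Xb, p)
        d = np.linalg.norm(Ua @ Ua.T - Ub @ Ub.T, "fro") / np.sqrt(2)
        total += d ** 2
    return np.sqrt(total)
```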
international joint conference on artificial intelligence | 2017
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Haoran Chen; Muhammad Ali; Baocai Yin
Learning on the Grassmann manifold has become popular in many computer vision tasks for its strong capability to extract discriminative information from image sets and videos. However, such learning algorithms, particularly on high-dimensional Grassmann manifolds, always involve significantly high computational cost, which seriously limits their applicability in wider areas. In this research, we propose an unsupervised dimensionality reduction algorithm on the Grassmann manifold based on the Locality Preserving Projections (LPP) criterion. LPP is a commonly used dimensionality reduction algorithm for vector-valued data, aiming to preserve the local structure of data in the dimension-reduced space. The strategy is to construct a mapping from a higher-dimensional Grassmann manifold to a relatively low-dimensional one with more discriminative capability. The proposed method can be optimized as a basic eigenvalue problem. Its performance is assessed on several classification and clustering tasks, and the experimental results show clear advantages over other Grassmann-based algorithms.
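For ordinary vector data, the LPP criterion the abstract builds on reduces to a generalized eigenvalue problem, X L X^T a = lambda X D X^T a, keeping the eigenvectors with the smallest eigenvalues. A minimal numpy sketch of that vector-space baseline (not the Grassmann extension itself; the ridge term and names are our assumptions):

```python
import numpy as np

def lpp(X, W, k):
    """Locality Preserving Projections for vector-valued data.
    X: (d, n) data matrix, W: (n, n) symmetric affinity matrix, k: target dim.
    Solves X L X^T a = lam X D X^T a and keeps the eigenvectors belonging
    to the k smallest eigenvalues."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-8 * np.eye(X.shape[0])   # small ridge for stability
    # Reduce to a standard symmetric eigenproblem via B^{-1/2}.
    w, V = np.linalg.eigh(B)
    B_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    vals, vecs = np.linalg.eigh(B_inv_half @ A @ B_inv_half)
    return B_inv_half @ vecs[:, :k]               # (d, k) projection matrix
```

New samples are then projected as `Y = P.T @ X`, with nearby points in the affinity graph kept nearby in the reduced space.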
international joint conference on artificial intelligence | 2018
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin
Inspired by the success of low rank representation and sparse subspace clustering, some methods attempt to impose low rank and sparse constraints simultaneously on the affinity matrix to improve performance. However, this amounts to a mere trade-off between the two constraints. In this paper, we propose a novel Cascaded Low Rank and Sparse Representation (CLRSR) method for subspace clustering, which seeks a sparse expression on a previously learned low rank latent representation. In this cascaded way, the sparse and low rank properties of the data are revealed adequately. Additionally, we extend CLRSR onto Grassmann manifolds to deal with multi-dimensional data such as image sets or videos. An effective solution and its convergence analysis are also provided. The experimental results demonstrate that the proposed method has excellent performance compared with state-of-the-art clustering methods.
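The two constraints the cascade combines have well-known proximal operators: singular value thresholding for the low rank term and entry-wise soft thresholding for the sparse term. The toy cascade below only illustrates the "sparse on top of low rank" ordering; the paper's actual CLRSR solver is an iterative optimization scheme, and the thresholds and the Gram-matrix surrogate here are arbitrary assumptions.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(Z, tau):
    """Entry-wise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def cascaded_representation(X, tau_lr=0.5, tau_sp=0.05):
    """Toy cascade: stage 1 shrinks a self-similarity surrogate toward low
    rank; stage 2 seeks a sparse expression on that latent representation."""
    G = X.T @ X                  # crude self-expressive affinity surrogate
    Z_lowrank = svt(G, tau_lr)   # stage 1: low rank latent representation
    return soft(Z_lowrank, tau_sp)  # stage 2: sparsify the latent Z
```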
Pattern Recognition | 2018
Boyue Wang; Yongli Hu; Junbin Gao; Muhammad Ali; David Tien; Yanfeng Sun; Baocai Yin
Symmetric Positive semi-Definite (SPD) matrices, as a kind of effective feature descriptor, have been widely used in pattern recognition and computer vision tasks. The affine-invariant metric (AIM) is a popular way to measure the distance between SPD matrices, but it imposes a high computational burden in practice. Compared with AIM, the Log-Euclidean metric embeds the SPD manifold, via the matrix logarithm, into a Euclidean space in which only classical Euclidean computation is involved. The advantage of using this metric for the non-linear SPD matrix representation of data has been recognized in some domains such as compressed sensing; however, little attention has been paid to this metric in data clustering. In this paper, we propose a novel Low Rank Representation (LRR) model on the space of SPD matrices with the Log-Euclidean metric (LogELRR), which enables us to handle non-linear data in a linear manner. To further explore the intrinsic geometric distance between SPD matrices, we embed the SPD matrices into a Reproducing Kernel Hilbert Space (RKHS) to form a family of kernels on SPD matrices based on the Log-Euclidean metric, and construct a novel kernelized LogELRR method. Clustering results on a wide range of datasets, including object images, facial images, 3D objects, texture images and medical images, show that our proposed methods outperform other conventional clustering methods.
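The Log-Euclidean metric itself is cheap to compute: take the matrix logarithm of each SPD matrix once, then work with plain Frobenius distances. A minimal numpy sketch; the Gaussian kernel shown is one plausible member of the kernel family the abstract refers to, not necessarily the paper's exact choice, and the bandwidth is an arbitrary assumption.

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(S1, S2):
    """Log-Euclidean distance: d(S1, S2) = ||log(S1) - log(S2)||_F."""
    return np.linalg.norm(spd_logm(S1) - spd_logm(S2), "fro")

def log_euclidean_kernel(S1, S2, sigma=1.0):
    """Gaussian kernel built on the Log-Euclidean metric, mapping SPD
    matrices into an RKHS."""
    d = log_euclidean_distance(S1, S2)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))
```

Because the logarithms can be precomputed, pairwise distances over a dataset cost only Euclidean operations, which is exactly the practical advantage over the affine-invariant metric.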
ACM Transactions on Knowledge Discovery From Data | 2018
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin
Clustering is one of the fundamental topics in data mining and pattern recognition. As a promising clustering method, subspace clustering has made considerable progress in recent research, e.g., sparse subspace clustering (SSC) and low rank representation (LRR). However, most existing subspace clustering algorithms are designed for vectorial data from linear spaces and are thus not suitable for high-dimensional data with intrinsic non-linear manifold structure, for which clustering problems have received little attention. Clustering on manifolds aims to group manifold-valued data according to a manifold-based similarity metric. This article proposes an extended LRR model for manifold-valued Grassmann data that incorporates prior knowledge by minimizing the partial sum of singular values instead of the nuclear norm, namely Partial Sum minimization of Singular Values Representation (GPSSVR). The new model not only enforces the global low-rank structure of the data, but also retains important information by minimizing only the smaller singular values. To further maintain the local structure among Grassmann points, we also integrate a Laplacian penalty with GPSSVR. The proposed model and algorithms are assessed on a public human face dataset, several widely used human action video datasets and a real scenery dataset. The experimental results show that the proposed methods clearly outperform other state-of-the-art methods.
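The partial-sum idea is easy to state concretely: penalise only the tail singular values beyond the r largest, so the dominant r-dimensional structure is left untouched. A small numpy sketch of the penalty and the corresponding thresholding step; the function names are illustrative, and a real solver would embed `pssv_shrink` in an iterative scheme.

```python
import numpy as np

def pssv(Z, r):
    """Partial Sum of Singular Values: sum_{i > r} sigma_i(Z), i.e. the
    nuclear norm restricted to all but the r largest singular values."""
    s = np.linalg.svd(Z, compute_uv=False)
    return s[r:].sum()

def pssv_shrink(Z, r, tau):
    """Thresholding step for a PSSV-regularised model: soft-threshold only
    the singular values beyond the r-th, keeping the top r intact."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s[r:] = np.maximum(s[r:] - tau, 0.0)
    return U @ np.diag(s) @ Vt
```

Contrast with nuclear-norm shrinkage (as in plain LRR), which would also shrink the large, informative singular values; this is the "retains important information" point made in the abstract.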
international congress on image and signal processing | 2016
Muhammad Ali; Michael Antolovich; Boyue Wang
Our main focus in this paper is the matrix-variate Fisher distribution for the product case of Stiefel manifolds, on which we perform density estimation and classification via straightforward Maximum Likelihood Estimation (MLE) of the parameters. The novelty of our proposed method is its strict dependence on the normalisation constant appearing in parametric models: we implement the matrix Fisher density function for classification with the normalisation constant included, in a general setting for Stiefel manifolds. Accurately calculating the log-likelihood function with its matrix-based normalising constant, and making this practical for matrix-variate parametric modelling, has been a major hurdle; it is treated in this paper for classification on Stiefel manifolds. Instead of an ad-hoc approximation of the normalisation constant, we use the method of Saddle Point Approximation (SPA). With the calculated normalising constant included in the matrix-variate Fisher parametric model, direct MLE with a simple Bayesian approach is employed in numerical classification experiments on synthetic and real-world datasets, with promising accuracy.
international congress on image and signal processing | 2016
Muhammad Ali; Michael Antolovich; Boyue Wang
The applicability of the standard Maximum Likelihood Estimation (MLE) technique for fitting parametric models to real-world applications has long been hindered by the difficulty of calculating the normalising constant. The aim of this paper is therefore to fill this gap by demonstrating a simple MLE technique for classification on Grassmann manifolds using the matrix-variate Bingham distributional model. The most challenging task in working with such high-dimensional parametric models is the calculation of the matrix-based normalising constant, for which we propose ad-hoc approximation techniques. These normalising constants are mostly represented by special functions, e.g., hypergeometric functions, confluent hypergeometric functions and Bessel functions. We approximate these special functions by Taylor series and asymptotic series methods, i.e., by considering the first few terms of the series that contribute most to the numerical value of the normalising constant. The calculated numerical values then boost the applicability of the matrix-variate Bingham model for inference via a simple Bayesian approach. Although these ad-hoc techniques do not have closed-form solutions, they can still produce very good results for the special case of concentrated parameters.
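The truncated-series idea can be illustrated on a scalar special function of the same family. The modified Bessel function I_0(x) = sum_k (x^2/4)^k / (k!)^2 appears in directional-statistics normalising constants; summing its first few Taylor terms already gives high accuracy for moderate arguments. This is a generic illustration, not the paper's matrix-variate computation.

```python
def bessel_i0_taylor(x, terms=20):
    """Taylor-series approximation of the modified Bessel function I_0(x):
    I_0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2, truncated after `terms` terms.
    Each term is built from the previous one via
    term_{k+1} = term_k * (x^2/4) / (k+1)^2."""
    total = 0.0
    term = 1.0  # k = 0 term
    for k in range(terms):
        total += term
        term *= (x * x / 4.0) / ((k + 1) ** 2)
    return total
```

For matrix arguments the same principle applies term by term, but convergence degrades for large (highly concentrated) parameters, which is why the asymptotic series is used in that regime instead.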
national conference on artificial intelligence | 2016
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin
arXiv: Computer Vision and Pattern Recognition | 2015
Boyue Wang; Yongli Hu; Junbin Gao; Yanfeng Sun; Baocai Yin