Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Xiumei Wang is active.

Publication


Featured research published by Xiumei Wang.


IEEE Transactions on Systems, Man, and Cybernetics | 2011

Supervised Gaussian Process Latent Variable Model for Dimensionality Reduction

Xinbo Gao; Xiumei Wang; Dacheng Tao; Xuelong Li

The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores the class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and the maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use the class label information effectively, and thus it consistently outperforms the GP-LVM and the discriminative extension of the GP-LVM. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
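For readers who want a concrete picture of the MAP step described above, the following is a minimal sketch, not the authors' code: a plain GP-LVM negative log-likelihood with an added same-class attraction prior, optimized over the latent positions with L-BFGS. The kernel choice, the simple label prior, and names such as `fit_supervised_gplvm` are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel on the latent points."""
    sq = cdist(X, X, "sqeuclidean")
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def neg_log_posterior(x_flat, Y, n_latent, labels, noise=0.1, lam=1.0):
    """Y: (N, D) observations; labels: (N,) integer class labels (numpy array)."""
    N, D = Y.shape
    X = x_flat.reshape(N, n_latent)
    K = rbf_kernel(X) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
    # GP-LVM negative log marginal likelihood of Y given the latent positions
    nll = D * np.log(np.diag(L)).sum() + 0.5 * np.sum(Y * alpha)
    # Hypothetical supervised prior: pull same-class latent points together
    same = (labels[:, None] == labels[None, :]).astype(float)
    prior = lam * np.sum(same * cdist(X, X, "sqeuclidean"))
    return nll + prior

def fit_supervised_gplvm(Y, labels, n_latent=2, n_iter=200):
    """MAP estimate of the latent positions (numerical gradients, sketch only)."""
    N = Y.shape[0]
    X0 = np.random.default_rng(0).normal(scale=0.1, size=(N, n_latent))
    res = minimize(neg_log_posterior, X0.ravel(),
                   args=(Y, n_latent, labels),
                   method="L-BFGS-B", options={"maxiter": n_iter})
    return res.x.reshape(N, n_latent)
```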


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Semi-Supervised Nonnegative Matrix Factorization via Constraint Propagation

Di Wang; Xinbo Gao; Xiumei Wang

As is well known, nonnegative matrix factorization (NMF) is a popular nonnegative dimensionality reduction method which has been widely used in computer vision, document clustering, and image analysis. However, traditional NMF is an unsupervised learning model which cannot fully utilize prior or supervised information. To this end, semi-supervised NMF methods have been proposed by incorporating the given supervised information. Nevertheless, when little supervised information is available, the performance improvement is limited. To effectively utilize the limited supervised information, this paper proposes a novel semi-supervised NMF method (CPSNMF) with pairwise constraints. The method propagates both the must-link and cannot-link constraints from the constrained samples to the unconstrained samples, so that the constraint information for the entire data set can be obtained. This information is then reflected in the adjustment of the data weight matrix. Finally, the weight matrix is incorporated as a regularization term into the NMF objective function. Therefore, the proposed method can fully utilize the constraint information to preserve the geometry of the data distribution. Furthermore, the proposed CPSNMF is explored with two formulations, and corresponding update rules are provided to solve the optimization problems. Thorough experiments on standard databases show the superior performance of the proposed method.
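As an illustration of how a propagated-constraint weight matrix can enter the NMF objective as a regularizer, here is a hedged sketch of standard graph-regularized multiplicative updates; the actual CPSNMF formulations and update rules in the paper may differ, and the nonnegative sample-weight matrix `S` is assumed to be given by the constraint-propagation step.

```python
import numpy as np

def constrained_nmf(X, S, k, lam=1.0, n_iter=300, eps=1e-9):
    """X: (m, n) nonnegative data; S: (n, n) nonnegative sample-weight matrix."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))          # basis matrix
    V = rng.random((n, k))          # coefficient matrix (one row per sample)
    D = np.diag(S.sum(axis=1))      # degree matrix of the weight graph
    for _ in range(n_iter):
        # Standard multiplicative update for the basis
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        # Coefficient update with the graph regularization term lam * tr(V^T (D - S) V)
        V *= (X.T @ U + lam * S @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V
```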


Pattern Recognition | 2011

Transfer latent variable model based on divergence analysis

Xinbo Gao; Xiumei Wang; Xuelong Li; Dacheng Tao

Latent variable models are powerful dimensionality reduction approaches in machine learning and pattern recognition. However, such methods only work well under the strict assumption that the training samples and testing samples are independent and identically distributed. When the samples come from different domains, the distribution of the testing dataset will not be identical to that of the training dataset. Therefore, the performance of latent variable models degrades because the parameters learned on the training set are not suited to the testing dataset. This limits the generalization and application of traditional latent variable models. To handle this issue, a transfer learning framework for latent variable models is proposed which utilizes the distance (or divergence) between the two datasets to modify the parameters of the obtained latent variable model. Thus the model does not need to be rebuilt; only its parameters are adjusted according to the divergence, which adapts the model to different datasets. Experimental results on several real datasets demonstrate the advantages of the proposed framework.
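A small hedged sketch of the divergence-measurement step mentioned above: fit a Gaussian to the source and target datasets and compute a symmetric KL divergence between them. How that divergence is then used to adjust the latent variable model's parameters is specific to the paper and not reproduced here; the function names are illustrative.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL divergence between two multivariate Gaussians N(mu0, cov0) || N(mu1, cov1)."""
    d = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d + logdet1 - logdet0)

def dataset_divergence(X_train, X_test, ridge=1e-6):
    """Symmetrized KL between Gaussians fitted to the source and target datasets."""
    mu_a, mu_b = X_train.mean(axis=0), X_test.mean(axis=0)
    cov_a = np.cov(X_train, rowvar=False) + ridge * np.eye(X_train.shape[1])
    cov_b = np.cov(X_test, rowvar=False) + ridge * np.eye(X_test.shape[1])
    return gaussian_kl(mu_a, cov_a, mu_b, cov_b) + gaussian_kl(mu_b, cov_b, mu_a, cov_a)
```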


Neurocomputing | 2010

Semi-supervised Gaussian process latent variable model with pairwise constraints

Xiumei Wang; Xinbo Gao; Yuan Yuan; Dacheng Tao; Jie Li

In machine learning, the Gaussian process latent variable model (GP-LVM) has been extensively applied to unsupervised dimensionality reduction. When some supervised information, e.g., pairwise constraints or labels of the data, is available, the traditional GP-LVM cannot directly utilize such supervised information to improve the performance of dimensionality reduction. In this case, it is necessary to modify the traditional GP-LVM to make it capable of handling supervised or semi-supervised learning tasks. For this purpose, we propose a new semi-supervised GP-LVM framework under pairwise constraints. By transferring the pairwise constraints in the observed space to the latent space, constrained prior information on the latent variables can be obtained. Under this constrained prior, the latent variables are optimized by the maximum a posteriori (MAP) algorithm. The effectiveness of the proposed algorithm is demonstrated with experiments on a variety of data sets.
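To make the constrained prior concrete, here is a minimal sketch, assuming a simple quadratic penalty, of a pairwise-constraint log-prior over latent positions that could be added to the GP-LVM objective before MAP optimization; the concrete prior and weights used in the paper may differ.

```python
import numpy as np

def constraint_log_prior(X, must_links, cannot_links, w_ml=1.0, w_cl=1.0):
    """X: (N, q) latent positions; constraints are lists of index pairs (i, j)."""
    logp = 0.0
    for i, j in must_links:       # must-link: reward small latent distance
        logp -= w_ml * np.sum((X[i] - X[j]) ** 2)
    for i, j in cannot_links:     # cannot-link: reward large latent distance
        logp += w_cl * np.sum((X[i] - X[j]) ** 2)
    return logp
```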


IEEE Transactions on Image Processing | 2016

Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval

Di Wang; Xinbo Gao; Xiumei Wang; Lihuo He; Bo Yuan

Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data in large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve the semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then it exploits the label information to discover the shared structures inside the heterogeneous data. Finally, the learned structures are preserved in the hash codes so as to produce similar binary codes within the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes and thereby enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with state-of-the-art methods for the large-scale cross-modal retrieval task.
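The following toy sketch (not the MDBE algorithm itself) illustrates the underlying idea of treating hash learning as classification: regress each modality onto a shared, label-derived target and take signs as binary codes, so that codes from different classes separate. The random target lifting `R` and the function names are assumptions for illustration only.

```python
import numpy as np

def learn_modality_hash(X, Y_labels, n_bits=32, reg=1e-2, seed=0):
    """X: (N, d) features of one modality; Y_labels: (N, c) one-hot label matrix."""
    rng = np.random.default_rng(seed)
    # Shared target embedding: labels lifted to the code length (illustrative choice)
    R = rng.standard_normal((Y_labels.shape[1], n_bits))
    T = Y_labels @ R
    # Ridge regression from the modality to the shared label-derived target
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ T)
    return W

def encode(X, W):
    """Binarize the projected features to obtain hash codes."""
    return (X @ W > 0).astype(np.uint8)
```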


Neurocomputing | 2015

Semi-supervised constraints preserving hashing

Di Wang; Xinbo Gao; Xiumei Wang

With the ever-increasing amount of multimedia data on the web, hashing-based approximate nearest neighbor search methods have attracted significant attention due to their remarkable efficiency gains and storage reductions. Traditional unsupervised hashing methods are designed to preserve distance-metric similarity, which may leave a semantic gap with respect to high-level semantic similarities. Recently, attention has been paid to semi-supervised hashing methods which can preserve the data's few available semantic similarities (usually given in terms of labels, pairwise constraints, tags, etc.). However, these methods often preserve semantic similarities only for low-dimensional embeddings. When converting the low-dimensional embeddings into binary codes, the quantization error accumulates, resulting in performance deterioration. To this end, we propose a novel semi-supervised hashing method which preserves pairwise constraints for both low-dimensional embeddings and binary codes. It first represents data points by cluster centers to preserve the data neighborhood structure and reduce the dimensionality. Then the constraint information is fully utilized to embed the derived data representations into a discriminative low-dimensional space by maximizing the discriminative Hamming distance and the data variance. After that, optimal binary codes are obtained by further preserving the semantic similarities in the process of quantizing the low-dimensional embeddings. By utilizing constraint information in the quantization process, the proposed method can fully preserve pairwise semantic similarities for binary codes, leading to better retrieval performance. Thorough experiments on standard databases show the superior performance of the proposed method.
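As a rough illustration of the "represent data points by cluster centers, embed, then quantize" pipeline, here is a hedged sketch assuming k-means anchors, an RBF anchor representation, and a simple PCA-plus-sign quantizer; the paper's constraint-aware embedding and quantization steps are more involved and are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_features(X, n_anchors=64, gamma=1.0, seed=0):
    """Represent each point by RBF similarities to cluster centers (anchors)."""
    centers = KMeans(n_clusters=n_anchors, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2), centers

def hash_codes(X, n_bits=16):
    """Embed the anchor representation and binarize by sign."""
    Z, _ = anchor_features(X)
    Z = Z - Z.mean(axis=0)                    # center the anchor representation
    # Project onto the top principal directions, then take signs as bits
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return (Z @ Vt[:n_bits].T > 0).astype(np.uint8)
```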


Neurocomputing | 2016

A novel dimensionality reduction method with discriminative generalized eigen-decomposition

Xiumei Wang; Weifang Liu; Jie Li; Xinbo Gao

Dimensionality reduction has played a critical role in machine learning and computer vision over the past decades. In this paper, we propose a discriminative dimensionality reduction method based on generalized eigen-decomposition. First, we define a discriminative framework between pairwise classes inspired by the signal-to-noise ratio. Then a metric is given for intra-class compactness and inter-class separation. Finally, the one-against-one framework can be easily extended to one against all classes. Compared with traditional supervised dimensionality reduction methods, the proposed method can capture discriminative directions for pairwise classes rather than for all classes. Furthermore, it can also deal with non-Gaussian distributed data. The experimental results show that the proposed model achieves high precision in classification tasks.
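A minimal sketch of the generalized eigen-decomposition step for one class pair, assuming a Fisher-style criterion (between-class scatter against within-class scatter); the paper's signal-to-noise-ratio-inspired metric may weight these terms differently.

```python
import numpy as np
from scipy.linalg import eigh

def pairwise_discriminant_directions(X_a, X_b, n_dims=1, ridge=1e-6):
    """X_a, X_b: samples of the two classes, each (n_i, d). Returns (d, n_dims)."""
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    d = X_a.shape[1]
    # Between-class scatter for the class pair (inter-class separation)
    diff = (mu_a - mu_b)[:, None]
    S_b = diff @ diff.T
    # Within-class scatter (intra-class compactness), regularized for stability
    S_w = np.cov(X_a, rowvar=False) + np.cov(X_b, rowvar=False) + ridge * np.eye(d)
    # Generalized eigenproblem S_b w = lambda S_w w; keep the leading directions
    vals, vecs = eigh(S_b, S_w)
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```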


Neurocomputing | 2016

Dynamic aurora sequence recognition using Volume Local Directional Pattern with local and global features

Bing Han; Yating Song; Xinbo Gao; Xiumei Wang

An aurora event consists of the spatial structure and temporal evolution of auroral luminosity, which is attributed to the solar wind-magnetosphere interaction and the physics of the magnetosphere-ionosphere interaction. A dynamic aurora event provides a meaningful projection of plasma processes in outer space and also reveals certain physical phenomena and principles. Aurora sequence recognition is one of the key procedures in the analysis of dynamic aurora events. Many effective features have been proposed for static aurora image classification, but applying them to the recognition of dynamic aurora sequences results in high computational complexity, and dynamic features for aurora sequences are seldom proposed because of their complexity. To this end, this paper proposes an efficient aurora sequence descriptor, called the Volume Local Directional Pattern (VLDP), which combines local and global spatial information with temporal location information. A ring-section spatial pyramid partition structure is applied to the VLDP code image to obtain the local spatial feature; after combining it with the global feature of the VLDP code image, the final RSPLDP feature is obtained. Finally, the self-tuning spectral clustering (STSC) method is used to classify the aurora sequences. Experimental results on a dataset captured by the All-sky Imager (ASI) at the Chinese Yellow River Station demonstrate the effectiveness of the proposed classification scheme.
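For orientation, here is a hedged sketch of the 2D Local Directional Pattern building block (Kirsch-mask responses, with bits set for the strongest directions); the volume extension across consecutive frames and the ring-section spatial pyramid are what the paper adds on top and are not shown here.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """Eight 3x3 Kirsch edge masks, one per compass direction."""
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3])
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for r in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(border, np.roll(base, r)):
            m[i, j] = v
        masks.append(m)
    return masks

def ldp_code(image, k=3):
    """Local Directional Pattern code image for a 2D grayscale frame:
    set the bits of the k strongest directional responses at each pixel."""
    responses = np.stack([convolve(image.astype(float), m, mode="nearest")
                          for m in kirsch_masks()])
    order = np.argsort(np.abs(responses), axis=0)   # (8, H, W) ranking per pixel
    code = np.zeros(image.shape, dtype=np.uint8)
    for bit in order[-k:]:                          # indices of the k largest responses
        code |= (1 << bit).astype(np.uint8)
    return code
```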


Neurocomputing | 2015

A transductive graphical model for single image super-resolution

Peitao Cheng; Yuanying Qiu; Ke Zhao; Xiumei Wang

The image super-resolution technique plays a critical role in many applications, such as digital entertainment and medical diagnosis. Recently, super-resolution research has focused on neighbor embedding techniques. However, these neighbor-embedding-based methods cannot produce sparse neighbor weights, and because they rely only on low-resolution patch information they cannot achieve small reconstruction errors, which results in high computational complexity and large reconstruction errors. This paper presents a novel super-resolution method that incorporates iterative adaptation into neighbor selection and optimizes the model with high-resolution patches. In particular, the proposed model establishes a transductive probabilistic graphical model based on both the low-resolution and high-resolution patches. The weights of the low-resolution neighbor patches can be treated as prior information for the reconstruction weights of the target high-resolution image. The quality of the desired image is greatly improved by the proposed super-resolution method. Finally, the effectiveness of the proposed algorithm is demonstrated with a variety of experimental results.
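For context, here is a sketch of the classical neighbor-embedding reconstruction step that this line of work builds on and improves: find the K nearest low-resolution patches in a dictionary, solve LLE-style reconstruction weights, and transfer them to the aligned high-resolution patches. Function and variable names are illustrative.

```python
import numpy as np

def neighbor_embedding_patch(lr_patch, lr_dict, hr_dict, k=5, ridge=1e-4):
    """lr_dict: (M, d_lr) LR training patches; hr_dict: (M, d_hr) aligned HR patches."""
    dists = np.linalg.norm(lr_dict - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]
    # Local Gram matrix of the shifted neighbors (as in LLE-style embedding)
    Z = lr_dict[idx] - lr_patch
    G = Z @ Z.T + ridge * np.eye(k)
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                      # reconstruction weights sum to one
    # Apply the same weights to the corresponding HR patches
    return w @ hr_dict[idx]
```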


IEEE Access | 2017

A New Single Image Super-Resolution Method Based on the Infinite Mixture Model

Peitao Cheng; Yuanying Qiu; Xiumei Wang; Ke Zhao

As a powerful nonparametric Bayesian model, the infinite mixture model has been successfully used in machine learning and computer vision. The success of the infinite mixture model owes to its capability for clustering and density estimation. In this paper, we propose a nonparametric Bayesian model for single-image super-resolution. Specifically, we combine the Dirichlet process and Gaussian process regression to estimate the distribution of the training patches and model the relationship between the low-resolution and high-resolution patches: 1) the proposed method groups the training patches by utilizing the clustering property of the Dirichlet process; 2) the proposed method relates the low-resolution and high-resolution patches through the predictive property of Gaussian process regression; and 3) these two components are not independent but are learned jointly. Hence, the proposed method can make full use of the nonparametric Bayesian model. First, the Dirichlet process mixture model is used to obtain more accurate clusters for the training patches. Second, Gaussian process regression is established on each cluster, which directly reduces the computational complexity. Finally, the two procedures are learned simultaneously to obtain suitable clusters with predictive ability. The parameters can be inferred simply via the Gibbs sampling technique. Thorough super-resolution experiments on various images demonstrate that the proposed method is superior to some state-of-the-art methods.
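A hedged end-to-end sketch using scikit-learn stand-ins: a Dirichlet-process mixture clusters the low-resolution patches and one Gaussian process regressor per cluster maps low-resolution to high-resolution patches. The paper learns the two stages jointly with Gibbs sampling, which this independent two-step sketch does not attempt to reproduce.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train(lr_patches, hr_patches, max_clusters=20):
    """lr_patches: (N, d_lr); hr_patches: (N, d_hr) aligned training patches."""
    dpmm = BayesianGaussianMixture(
        n_components=max_clusters,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag", random_state=0).fit(lr_patches)
    labels = dpmm.predict(lr_patches)
    regressors = {}
    for c in np.unique(labels):
        # One GP regressor per occupied cluster, mapping LR patches to HR patches
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        regressors[c] = gp.fit(lr_patches[labels == c], hr_patches[labels == c])
    return dpmm, regressors

def predict(lr_patches, dpmm, regressors):
    labels = dpmm.predict(lr_patches)
    d_hr = next(iter(regressors.values())).y_train_.shape[1]
    out = np.zeros((lr_patches.shape[0], d_hr))
    for c, gp in regressors.items():
        mask = labels == c
        if mask.any():
            out[mask] = gp.predict(lr_patches[mask])
    # Patches assigned to a component with no regressor keep a zero prediction here.
    return out
```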

Collaboration


Dive into Xiumei Wang's collaboration.
