Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Weihua Ou is active.

Publication


Featured research published by Weihua Ou.


Pattern Recognition | 2014

Robust face recognition via occlusion dictionary learning

Weihua Ou; Xinge You; Dacheng Tao; Pengyue Zhang; Yuan Yan Tang; Ziqi Zhu

Sparse representation based classification (SRC) has recently been proposed for robust face recognition. To deal with occlusion, SRC introduces an identity matrix as an occlusion dictionary, on the assumption that the occlusion has a sparse representation in this dictionary. However, the results show that SRC's use of this occlusion dictionary is not nearly as robust to large occlusions as it is to random pixel corruption. In addition, the identity matrix makes the expanded dictionary large, which results in expensive computation. In this paper, we present a novel method, namely structured sparse representation based classification (SSRC), for face recognition with occlusion. A novel structured dictionary learning method is proposed to learn an occlusion dictionary from the data instead of using an identity matrix. Specifically, a mutual-incoherence regularization term between dictionaries is incorporated into the dictionary learning objective function, which encourages the occlusion dictionary to be as independent as possible of the training sample dictionary. The occlusion can then be sparsely represented by a linear combination of atoms from the learned occlusion dictionary and effectively separated from the occluded face image. Classification can thus be carried out efficiently on the recovered non-occluded face images, and the size of the expanded dictionary is much smaller than that used in SRC. Extensive experiments demonstrate that the proposed method achieves better results than existing sparse representation based face recognition methods, especially in dealing with large contiguous occlusions and severe illumination variation, while the computational cost is much lower.
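As a rough illustration of the classification step described above, the sketch below codes a test face over a concatenated training-plus-occlusion dictionary and assigns the class with the smallest residual. It assumes numpy and scikit-learn's Lasso as the sparse coder; the matrices A (training faces) and B (learned occlusion dictionary) are placeholders, and the SSRC dictionary learning step that would produce B is not reproduced here.

    import numpy as np
    from sklearn.linear_model import Lasso

    def ssrc_classify(y, A, B, labels, lam=0.01):
        """Classify a test face y against a training dictionary A (one column per
        training face, labels[i] giving the class of column i) and a learned
        occlusion dictionary B, using the SRC residual rule.
        Illustrative sketch only: how B is learned is not shown here."""
        labels = np.asarray(labels)
        D = np.hstack([A, B])                         # expanded dictionary [A, B]
        coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        coder.fit(D, y)                               # sparse code over [A, B]
        w = coder.coef_
        x, e = w[:A.shape[1]], w[A.shape[1]:]         # face part / occlusion part
        y_clean = y - B @ e                           # strip the occlusion component
        residuals = {}
        for c in np.unique(labels):
            mask = (labels == c)
            residuals[c] = np.linalg.norm(y_clean - A[:, mask] @ x[mask])
        return min(residuals, key=residuals.get)      # smallest-residual class wins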


Pattern Recognition | 2015

An adaptive hybrid pattern for noise-robust texture analysis

Ziqi Zhu; Xinge You; C. L. Philip Chen; Dacheng Tao; Weihua Ou; Xiubao Jiang; Jixing Zou

Local binary patterns (LBP) have achieved great success in texture analysis; however, they are not robust to noise. There are two reasons for this weakness of LBP schemes: (1) they encode the spatial structure of a texture based only on local information, which is sensitive to noise, and (2) they use exact values as the quantization thresholds, which makes the extracted features sensitive to small changes in the input image. In this paper, we propose a noise-robust adaptive hybrid pattern (AHP) for noisy texture analysis. In our scheme, two solutions, from the perspectives of the texture description model and the quantization algorithm, are developed to reduce the features' sensitivity to noise. First, a hybrid texture description model is proposed. In this model, the global texture spatial structure, which is depicted by a global description model, is encoded together with the primitive micro-features for texture description. Second, we develop an adaptive quantization algorithm in which equal-probability quantization is utilized to achieve the maximum partition entropy. Higher noise tolerance can thus be obtained with minimal information loss in the quantization process. Experimental results for texture classification on two texture databases with three different types of noise show that our approach leads to significant improvements in noisy texture analysis. Furthermore, our scheme achieves state-of-the-art performance in noisy face recognition.

Highlights:
- A hybrid texture description model is proposed for noise-robust texture modeling.
- An adaptive quantization algorithm is designed for robust angular space quantization.
- Based on the new description model and quantization algorithm, we develop the AHP.
- Experimental results demonstrate the significant improvement achieved by our scheme.
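The equal-probability quantization mentioned above can be illustrated with quantile-based thresholds: each quantization bin receives roughly the same number of samples, which maximizes the partition entropy. The snippet below is a minimal numpy sketch of that idea applied to centre-versus-mean differences of a patch; it is not the exact AHP encoding, and the bin count and patch handling are assumptions for illustration.

    import numpy as np

    def equal_probability_thresholds(values, n_bins):
        """Quantile thresholds so that each bin receives roughly the same number
        of samples (the maximum-entropy partition used by adaptive quantization)."""
        qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
        return np.quantile(values, qs)

    def quantize(values, thresholds):
        """Map each value to the index of its quantization bin."""
        return np.digitize(values, thresholds)

    # Toy usage: quantize centre-vs-mean differences of a patch into 4 levels.
    patch = np.random.rand(8, 8)
    diffs = (patch - patch.mean()).ravel()
    codes = quantize(diffs, equal_probability_thresholds(diffs, 4))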


Pattern Recognition | 2016

Sparse discriminative multi-manifold embedding for one-sample face identification

Pengyue Zhang; Xinge You; Weihua Ou; C. L. Philip Chen; Yiu-ming Cheung

In this paper, we study the problem of face identification from only one training sample per person (OSPP). For a face identification system, the most critical obstacles to real-world application are often caused by disguised, corrupted, and variably illuminated images in limited sample sets. Meanwhile, storing fewer training samples essentially reduces the cost of collecting, storing, and processing data. Unfortunately, most methods in the literature need large training sets to obtain good representation and generalization abilities, and fail when there is only one training sample per person. In this paper, we propose a two-step scheme for the OSPP problem by posing it as a representation and matching problem. For the representation step, we present a novel manifold embedding algorithm, namely sparse discriminative multi-manifold embedding (SDMME), to learn the intrinsic representation beneath the raw data. We construct two sparse graphs to measure sample similarity, based on two structured dictionaries. Multiple feature spaces are learned to simultaneously minimize the bias from the subspace of the same class and maximize the distances to the subspaces of other classes. For the matching step, we use a distance metric based on the manifold structure to identify the person. Extensive experiments demonstrate that the proposed method outperforms other state-of-the-art methods for one-sample face identification, while its robustness to occlusion and illumination variation highlights the contribution of our work.

Highlights:
- A one-sample face identification scheme is proposed based on sparse discriminative multi-manifold embedding.
- Based on structured sparse graphs, a novel manifold embedding technique is proposed for discriminative feature learning.
- A global manifold distance is introduced for recognition.
- Much better results are achieved than with current methods.
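For intuition about the matching step, the sketch below identifies a query by the class whose learned subspace reconstructs it with the smallest residual. This is a simplified, hypothetical stand-in for SDMME's manifold-structure distance, assuming each identity is represented by an orthonormal basis of its learned features; the paper's sparse graphs and embedding are not reproduced.

    import numpy as np

    def nearest_subspace_identify(query, class_bases):
        """Return the identity whose subspace best reconstructs the query feature.
        class_bases: dict mapping identity -> orthonormal basis U of shape (d, k).
        Hypothetical stand-in for the manifold-distance matching step of SDMME."""
        best_label, best_res = None, np.inf
        for label, U in class_bases.items():
            proj = U @ (U.T @ query)             # projection onto the class subspace
            res = np.linalg.norm(query - proj)   # reconstruction residual
            if res < best_res:
                best_label, best_res = label, res
        return best_label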


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Local Metric Learning for Exemplar-Based Object Detection

Xinge You; Qiang Li; Dacheng Tao; Weihua Ou; Mingming Gong

Object detection has been widely studied in the computer vision community and has many real-world applications, despite challenges such as variations in scale, pose, lighting, and background. Most classical object detection methods rely heavily on category-based training to handle intra-class variations. In contrast to classical methods that use a rigid category-based representation, exemplar-based methods try to model variations among positives by learning from specific positive samples. However, existing exemplar-based methods either fail to use any training information or suffer a significant performance drop when few exemplars are available. In this paper, we design a novel local metric learning approach to handle the exemplar-based object detection task well. The main contributions are twofold: 1) a novel local metric learning algorithm called exemplar metric learning (EML) is designed, and 2) an exemplar-based object detection algorithm based on EML is implemented. We evaluate our method on two generic object detection data sets: UIUC-Car and UMass FDDB. Experiments show that, compared with other exemplar-based methods, our approach can effectively enhance object detection performance when few exemplars are available.
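A rough sketch of how exemplar-based detection with per-exemplar learned metrics might be wired up is shown below. The Mahalanobis-style scoring rule and the detection threshold are assumptions for illustration; how EML actually trains each metric M is not reproduced from the paper.

    import numpy as np

    def exemplar_score(x, exemplar, M):
        """Negative Mahalanobis-style distance of candidate feature x to one
        positive exemplar under its learned metric M (positive semi-definite).
        Higher scores mean the candidate is closer under that exemplar's metric."""
        d = x - exemplar
        return -float(d @ M @ d)

    def detect(windows, exemplars, metrics, threshold):
        """Keep candidate windows whose best per-exemplar score exceeds a threshold."""
        keep = []
        for i, x in enumerate(windows):
            score = max(exemplar_score(x, e, M) for e, M in zip(exemplars, metrics))
            if score > threshold:
                keep.append((i, score))
        return keep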


Neurocomputing | 2016

Multi-view non-negative matrix factorization by patch alignment framework with view consistency

Weihua Ou; Shujian Yu; Gai Li; Jian Lu; Kesheng Zhang; Gang Xie

Multi-view non-negative matrix factorization (NMF) has been developed in recent years to learn latent representations from multi-view non-negative data. To make the representation more meaningful, previous works mainly exploit either the consensus information or the complementary information from different views; however, the latent local geometric structure of each view is usually ignored. In this paper, we develop a novel multi-view NMF based on the patch alignment framework with view consistency. Different from previous works, we take the local geometric structure of each view into consideration and, at the same time, penalize disagreement between views. More specifically, for each data point in each view, we construct a local patch using locally linear embedding to preserve its local geometric structure, and obtain the global representation under the whole-alignment strategy. Meanwhile, we encourage the representation of each view to approximate the latent representation shared by all views, thereby enforcing view consistency. We adopt the correntropy-induced metric to measure the reconstruction error and employ the half-quadratic technique to solve the optimization problem. Experimental results demonstrate that the proposed method achieves satisfactory performance compared with single-view methods and other existing multi-view NMF methods.
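To make the view-consistency idea concrete, the sketch below runs a much-simplified multi-view NMF with multiplicative updates: each view gets a plain squared-error factorization plus a penalty pulling its coefficient matrix toward a shared representation. The patch-alignment term and the correntropy/half-quadratic weighting described in the paper are deliberately omitted, so the update rules shown apply only to this simplified objective.

    import numpy as np

    def multiview_nmf(views, rank, lam=0.1, n_iter=200, eps=1e-9):
        """Simplified multi-view NMF sketch: per-view squared-error NMF plus a
        view-consistency penalty lam * ||H_v - H_star||^2 pulling every view's
        coefficient matrix toward a shared representation H_star.
        Each element of `views` is a nonnegative (features_v x samples) matrix."""
        n = views[0].shape[1]
        rng = np.random.default_rng(0)
        Ws = [rng.random((X.shape[0], rank)) for X in views]
        Hs = [rng.random((rank, n)) for _ in views]
        H_star = np.mean(Hs, axis=0)
        for _ in range(n_iter):
            for v, X in enumerate(views):
                W, H = Ws[v], Hs[v]
                W *= (X @ H.T) / (W @ H @ H.T + eps)                            # basis update
                H *= (W.T @ X + lam * H_star) / (W.T @ W @ H + lam * H + eps)   # coefficient update
            H_star = np.mean(Hs, axis=0)                                        # shared representation
        return Ws, Hs, H_star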


Neurocomputing | 2016

Pairwise probabilistic matrix factorization for implicit feedback collaborative filtering

Gai Li; Weihua Ou

Implicit-feedback collaborative filtering, also called one-class collaborative filtering (OCCF), has attracted a lot of attention. However, the low recommendation accuracy and the high cost of previous methods impede its use in real scenarios. In this paper, we develop a new model named pairwise probabilistic matrix factorization (PPMF) that exploits the advantages of RankRLS. The PPMF model integrates RankRLS with probabilistic matrix factorization (PMF) to learn relative preferences for items. Different from previous works, PPMF minimizes the average number of inversions in the ranking rather than maximizing the gaps between the binary predicted values for the OCCF problem. Meanwhile, we propose to optimize the PPMF model with a pointwise stochastic gradient descent algorithm based on bootstrap sampling, which is more effective for parameter learning than the original optimization method used in previous works. Experiments on two datasets show that the PPMF model achieves satisfactory performance and outperforms state-of-the-art implicit-feedback collaborative ranking models under different evaluation metrics.
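The pairwise objective and the bootstrap-sampled SGD can be sketched as follows. This is an illustrative approximation: a squared pairwise loss (1 - (x_ui - x_uj))^2 stands in for the RankRLS-style criterion, and the learning rate, regularization, and treatment of unobserved items as negatives are assumptions rather than the paper's exact settings.

    import numpy as np

    def ppmf_sgd(n_users, n_items, positives, rank=16, lr=0.05, reg=0.01,
                 n_steps=100_000, seed=0):
        """Sketch of pairwise matrix factorization trained by SGD with bootstrap
        sampling of (user, observed item, unobserved item) triples.
        `positives` maps each user id to the set of item ids they consumed."""
        rng = np.random.default_rng(seed)
        P = 0.1 * rng.standard_normal((n_users, rank))   # user factors
        Q = 0.1 * rng.standard_normal((n_items, rank))   # item factors
        users = [u for u in positives if positives[u]]
        for _ in range(n_steps):
            u = users[rng.integers(len(users))]          # bootstrap-sample a user
            i = rng.choice(list(positives[u]))           # an observed (positive) item
            j = rng.integers(n_items)                    # an unobserved item, assumed negative
            if j in positives[u]:
                continue
            diff = P[u] @ (Q[i] - Q[j])                  # predicted preference gap
            err = 1.0 - diff                             # residual of the squared pairwise loss
            P[u] -= lr * (-err * (Q[i] - Q[j]) + reg * P[u])
            Q[i] -= lr * (-err * P[u] + reg * Q[i])
            Q[j] -= lr * (err * P[u] + reg * Q[j])
        return P, Q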


IEEE Transactions on Neural Networks | 2015

Robust Nonnegative Patch Alignment for Dimensionality Reduction

Xinge You; Weihua Ou; Chun Lung Philip Chen; Qiang Li; Ziqi Zhu; Yuan Yan Tang

Dimensionality reduction is an important method for analyzing high-dimensional data and has many applications in pattern recognition and computer vision. In this paper, we propose a robust nonnegative patch alignment for dimensionality reduction, which includes a reconstruction error term and a whole-alignment term. We use the correntropy-induced metric to measure the reconstruction error, in which the weight is learned adaptively for each entry. For the whole alignment, we propose locality-preserving robust nonnegative patch alignment (LP-RNA) and sparsity-preserving robust nonnegative patch alignment (SP-RNA), which are unsupervised and supervised, respectively. In LP-RNA, we propose a locally sparse graph to encode the local geometric structure of the manifold embedded in high-dimensional space. In particular, we select a large number p of nearest neighbors for each sample and then obtain the sparse representation with respect to these neighbors. The sparse representation is used to build a graph which simultaneously enjoys locality, sparseness, and robustness. In SP-RNA, we simultaneously use local geometric structure and discriminative information, in which the sparse reconstruction coefficient is used to characterize the local geometric structure and a weighted distance is used to measure the separability of different classes. We formulate the induced nonconvex objective function as a weighted nonnegative matrix factorization based on half-quadratic optimization, propose a multiplicative update rule to solve it, and show that the objective function converges to a local optimum. Experimental results on synthetic and real data sets demonstrate that the learned representation is more discriminative and robust than most existing dimensionality reduction methods.


International Symposium on Neural Networks | 2015

Kernel normalized mixed-norm algorithm for system identification

Shujian Yu; Xinge You; Kexin Zhao; Weihua Ou; Yuan Yan Tang

Kernel methods provide an efficient nonparametric model for producing adaptive nonlinear filtering (ANF) algorithms. However, in practical applications, standard squared-error-based kernel methods suffer from two main issues: (1) a constant step size is used, which degrades performance in non-stationary environments, and (2) the additive noise is assumed to follow a Gaussian distribution, while in practice it is generally non-Gaussian and follows other statistical distributions. To address these two issues simultaneously, this paper proposes a novel kernel normalized mixed-norm (KNMN) algorithm. Compared with standard squared-error-based kernel methods, the KNMN algorithm extends linear mixed-norm adaptive filtering algorithms to a reproducing kernel Hilbert space (RKHS) and introduces a normalized step size as well as an adaptive mixing parameter. We also conduct a mean-square convergence analysis and demonstrate the desirable performance of the KNMN algorithm in solving the system identification problem.


Neurocomputing | 2016

STFT-like time frequency representations of nonstationary signal with arbitrary sampling schemes

Shujian Yu; Xinge You; Weihua Ou; Xiubao Jiang; Kexin Zhao; Ziqi Zhu; Yi Mou; Xinyi Zhao


International Conference on Signal and Information Processing | 2015

Single image rain streaks removal based on self-learning and structured sparse representation

Shujian Yu; Weihua Ou; Xinge You; Yi Mou; Xiubao Jiang; Yuan Yan Tang

Collaboration


Dive into Weihua Ou's collaborations.

Top Co-Authors

Xinge You
Huazhong University of Science and Technology

Xiubao Jiang
Huazhong University of Science and Technology

Shujian Yu
Huazhong University of Science and Technology

Ziqi Zhu
Huazhong University of Science and Technology

Yi Mou
Huazhong University of Science and Technology

Pengyue Zhang
Huazhong University of Science and Technology