Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laiyun Qing is active.

Publication


Featured research published by Laiyun Qing.


computer vision and pattern recognition | 2011

Visual saliency detection by spatially weighted dissimilarity

Lijuan Duan; Chunpeng Wu; Jun Miao; Laiyun Qing; Yu Fu

In this paper, a new visual saliency detection method is proposed based on spatially weighted dissimilarity. We measured saliency by integrating three elements: the dissimilarities between image patches, evaluated in a reduced-dimensional space; the spatial distance between image patches; and a central bias. The dissimilarities were inversely weighted by the corresponding spatial distance, and a weighting mechanism reflecting the bias of human fixations toward the center of the image was employed. Principal component analysis (PCA) was the dimension-reduction method used in our system; we extracted the principal components (PCs) by sampling patches from the current image. Our method was compared with four saliency detection approaches on three image datasets. Experimental results show that our method outperforms current state-of-the-art methods at predicting human fixations.
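The pipeline described above (PCA fitted on patches sampled from the current image, pairwise dissimilarities inversely weighted by spatial distance, and a center-bias term) can be sketched in a few lines of NumPy. This is a minimal illustration of the idea only; the function names, the Gaussian form of the center bias, and the parameter values are assumptions, not the authors' implementation:

```python
import numpy as np

def saliency_scores(patches, coords, n_components=11, sigma=0.5):
    """Sketch of spatially weighted dissimilarity saliency.

    patches : (N, D) array of vectorized image patches
    coords  : (N, 2) array of patch centers, normalized to [0, 1]
    """
    # Reduce dimension with PCA fitted on this image's own patches.
    X = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T

    # Pairwise dissimilarity in the reduced space (L1 distance here).
    diss = np.abs(Z[:, None, :] - Z[None, :, :]).sum(axis=2)

    # Inverse spatial weighting: far-apart patches contribute less.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    weighted = (diss / (1.0 + dist)).sum(axis=1)

    # Center bias: fixations favor the image center (Gaussian falloff).
    center = np.linalg.norm(coords - 0.5, axis=1)
    bias = np.exp(-center**2 / (2 * sigma**2))
    return weighted * bias
```

The resulting per-patch scores would then be reshaped into a saliency map over the image grid.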


Pattern Analysis and Applications | 2009

Are Gabor phases really useless for face recognition?

Wenchao Zhang; Shiguang Shan; Laiyun Qing; Xilin Chen; Wen Gao

Gabor features have been recognized as one of the best representations for face recognition. Usually, only the magnitudes of the Gabor coefficients are thought of as being useful for face recognition, while the phases of the Gabor features are deemed to be useless and thus usually ignored by face recognition researchers. However, in this paper, our findings show that the latter should be reconsidered. By encoding Gabor phases through local binary patterns and local histograms, we have achieved very impressive recognition results, which are comparable to those of Gabor magnitudes-based methods. The results of our experiments also indicate that, by combining the phases with the magnitudes, higher accuracy can be achieved. Such observations suggest that more attention should be paid to the Gabor phases for face recognition.
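The core encoding step mentioned above, local binary patterns (LBP) over a phase map summarized by a histogram, can be sketched as follows. The 8-neighbor LBP variant and the histogram granularity here are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def lbp_encode(phase, bins=256):
    """Encode a 2-D phase map with 8-neighbor local binary patterns,
    then summarize the codes with a histogram (sketch of the idea of
    histogramming LBP codes of Gabor phases)."""
    h, w = phase.shape
    center = phase[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise 8-neighborhood offsets; each comparison sets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = phase[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return code, hist
```

In the paper's setting this would be applied per Gabor filter (scale and orientation), with local histograms concatenated into the final descriptor.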


international conference on acoustics, speech, and signal processing | 2005

Empirical comparisons of several preprocessing methods for illumination insensitive face recognition

Bo Du; Shiguang Shan; Laiyun Qing; Wen Gao

Illumination variation is one of the bottlenecks of face recognition systems. Many approaches to coping with illumination variations have been proposed; they can be categorized into model-based and preprocessing-based approaches. Although the model-based approaches seem better in theory, they commonly introduce more constraints, which makes them impractical for real applications. The preprocessing approaches, on the other hand, exploit simple and efficient image processing techniques. Typical approaches based on image processing include histogram equalization (HE), histogram specification (HS), logarithm transform (Log), gamma intensity correction (GIC), and self-quotient image (SQI). We have performed extensive experiments to analyze and compare these methods empirically by evaluating them on three large-scale face databases: CMU-PIE, FERET, and CAS-PEAL. Our experimental results show that HE, HS, and GIC can improve recognition performance for images both with and without illumination variation, while Log and SQI may decrease the recognition rate for face images without much illumination variation, though they may facilitate the recognition of face images with illumination variation.
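The three simplest of the compared preprocessing techniques can be sketched directly in NumPy. These are generic textbook formulations; the gamma value below is an illustrative choice, not necessarily the one evaluated in the paper:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization on an 8-bit grayscale image: map intensities
    through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return (cdf[img] * 255).astype(np.uint8)

def log_transform(img):
    """Logarithm transform: compress the dynamic range, lifting shadows."""
    return np.log1p(img.astype(np.float64)) / np.log(256.0)

def gamma_correct(img, gamma=0.4):
    """Gamma intensity correction: gamma < 1 brightens dark regions."""
    return ((img / 255.0) ** gamma * 255).astype(np.uint8)
```

Histogram specification and self-quotient image are omitted here since they require a reference histogram and a smoothing-filter ratio, respectively.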


international conference on pattern recognition | 2006

Face Recognition under Varying Lighting Based on the Probabilistic Model of Gabor Phase

Laiyun Qing; Shiguang Shan; Xilin Chen; Wen Gao

This paper presents a novel method for robust illumination-tolerant face recognition based on the Gabor phase and a probabilistic similarity measure. Inspired by the Eigenphases work of J. Lou, et al. (2002), which uses the phase spectrum of face images, we use the phase information of multi-resolution and multi-orientation Gabor filters. We show that the Gabor phase carries more discriminative information and is tolerant to illumination variations. We then use a probabilistic similarity measure based on a Bayesian (MAP) analysis of the difference between the Gabor phases of two face images. We train the model on some images in the illumination subset of the CMU-PIE database, test on the remaining images of CMU-PIE and on the Yale B database, and obtain comparable results.


International Journal of Pattern Recognition and Artificial Intelligence | 2005

FACE RECOGNITION UNDER GENERIC ILLUMINATION BASED ON HARMONIC RELIGHTING

Laiyun Qing; Shiguang Shan; Wen Gao; Bo Du

The performance of current face recognition systems suffers heavily from variations in lighting. To deal with this problem, this paper presents an illumination normalization approach that relights face images to a canonical illumination based on the harmonic images model. Benefiting from the observations that human faces share a similar shape and that the albedos of face surfaces are quasi-constant, we first estimate the nine low-frequency components of the illumination from the input facial image. The facial image is then normalized to the canonical illumination by re-rendering it using the illumination ratio image technique. For the purpose of face recognition, two kinds of canonical illuminations are considered, uniform illumination and a frontal flash with ambient lights: the former encodes only texture information, while the latter encodes both texture and shading information. Our experiments on the CMU-PIE and Yale B face databases show that the proposed relighting normalization can significantly improve the performance of a face recognition system when the probes are collected under varying lighting conditions.
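The "nine low-frequency components" above correspond to the first nine real spherical-harmonic basis functions. Assuming known surface normals and quasi-constant albedo, the lighting coefficients reduce to a linear least-squares fit; this is a hedged sketch of that estimation step, not the paper's full relighting pipeline:

```python
import numpy as np

def sh_basis9(normals):
    """First nine real spherical-harmonic basis values for unit
    normals of shape (N, 3), as used in harmonic-image lighting models."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c = [0.282095, 0.488603, 1.092548, 0.315392, 0.546274]
    return np.stack([
        np.full_like(x, c[0]),         # Y00 (constant)
        c[1] * y, c[1] * z, c[1] * x,  # Y1-1, Y10, Y11
        c[2] * x * y, c[2] * y * z,    # Y2-2, Y2-1
        c[3] * (3 * z**2 - 1),         # Y20
        c[2] * x * z,                  # Y21
        c[4] * (x**2 - y**2),          # Y22
    ], axis=1)

def estimate_lighting(intensities, normals, albedo=1.0):
    """Least-squares estimate of the 9 lighting coefficients from
    observed pixel intensities, under the quasi-constant-albedo assumption."""
    B = albedo * sh_basis9(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs
```

Relighting would then re-render the image under the canonical illumination's coefficients via the ratio-image technique.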


european conference on computer vision | 2010

Lighting aware preprocessing for face recognition across varying illumination

Hu Han; Shiguang Shan; Laiyun Qing; Xilin Chen; Wen Gao

Illumination variation is one of the intractable yet crucial problems in face recognition, and many lighting normalization approaches have been proposed over the past decades. Nevertheless, most of them preprocess all face images in the same way, without considering the specific lighting in each face image. In this paper, we propose a lighting-aware preprocessing (LAP) method, which performs adaptive preprocessing for each test image according to its lighting attribute. Specifically, the lighting attribute of a test face image is first estimated using a spherical harmonic model. Then, a von Mises-Fisher (vMF) distribution learnt from a training set is exploited to model the probability that the estimated lighting belongs to normal lighting. Based on this probability, adaptive preprocessing is performed to normalize the lighting variation in the input image. Extensive experiments on the Extended YaleB and Multi-PIE face databases show the effectiveness of our proposed method.
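For the vMF modeling step, the 3-D von Mises-Fisher distribution on the unit sphere has a closed-form normalizer, so its log-density is a one-liner. This sketch only shows the density itself; how the paper maps this probability to an adaptive preprocessing decision is not reproduced here:

```python
import numpy as np

def vmf_logpdf3(x, mu, kappa):
    """Log-density of a 3-D von Mises-Fisher distribution.

    x, mu : unit vectors (mu is the mean direction)
    kappa : concentration parameter, kappa > 0
    Normalizer for dimension 3: C_3(kappa) = kappa / (4*pi*sinh(kappa)).
    """
    log_c = np.log(kappa) - np.log(4 * np.pi * np.sinh(kappa))
    return log_c + kappa * (x @ mu)
```

A lighting direction estimated from the spherical harmonic coefficients would be scored against a vMF fitted on "normal lighting" training directions.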


computer vision and pattern recognition | 2008

Unified Principal Component Analysis with generalized Covariance Matrix for face recognition

Shiguang Shan; Bo Cao; Yu Su; Laiyun Qing; Xilin Chen; Wen Gao

Recently, 2DPCA and its variants have attracted much attention in the face recognition area. In this paper, some effort is made to discover the underlying fundamentals of these methods, and a novel framework called unified principal component analysis (UPCA) is proposed. First, we introduce a novel concept, the generalized covariance matrix (GCM), which is naturally derived from the traditional covariance matrix (CM). Each element of GCM is a generalized covariance of two random vectors, rather than of two scalar variables as in CM. Based on GCM, the UPCA framework is proposed, from which traditional PCA and its 2D counterparts can be deduced as special cases. Furthermore, under the UPCA framework, we not only revisit the existing 2D PCA methods and their limitations, but also propose two new methods: the grid-sampling method (GridPCA) and the intra-group correlation reduction method. Extensive experimental results on the FERET face database support the theoretical analysis and validate the feasibility of the proposed methods.
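One plausible reading of the GCM concept is sketched below: each entry is the expected inner product of two centered random vectors (here, image rows), generalizing the scalar covariance of classical PCA. This is an assumption for illustration; the paper's exact construction may differ:

```python
import numpy as np

def generalized_covariance_matrix(images):
    """Hedged sketch of a generalized covariance matrix (GCM):
    GCM[i, j] = E[ <row_i - mean_i, row_j - mean_j> ] over the sample set,
    treating each image row as a random vector."""
    A = np.asarray(images, dtype=np.float64)   # (n_samples, h, w)
    A = A - A.mean(axis=0)                     # center each pixel
    # Sum over samples n and columns j of A[n, i, j] * A[n, k, j].
    return np.einsum('nij,nkj->ik', A, A) / len(A)
```

Under this reading, classical PCA corresponds to treating each pixel as its own scalar "vector", while 2DPCA corresponds to row-vector grouping.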


international conference on computer vision | 2015

Activity Auto-Completion: Predicting Human Activities from Partial Videos

Zhen Xu; Laiyun Qing; Jun Miao

In this paper, we propose an activity auto-completion (AAC) model for human activity prediction by formulating activity prediction as a query auto-completion (QAC) problem in information retrieval. First, we extract discriminative patches in frames of videos. A video is represented based on these patches and divided into a collection of segments, each of which is regarded as a character typed in the search box. Then a partially observed video is considered as an activity prefix, consisting of one or more characters. Finally, the missing observation of an activity is predicted as the activity candidates provided by the auto-completion model. The candidates are matched against the activity prefix on-the-fly and ranked by a learning-to-rank algorithm. We validate our method on UT-Interaction Set #1 and Set #2 [19]. The experimental results show that the proposed activity auto-completion model achieves promising performance.


international symposium on neural networks | 2014

Constrained Extreme Learning Machine: A novel highly discriminative random feedforward neural network

Wentao Zhu; Jun Miao; Laiyun Qing

In this paper, a novel single hidden layer feedforward neural network, called Constrained Extreme Learning Machine (CELM), is proposed based on Extreme Learning Machine (ELM). In CELM, the connection weights between the input layer and hidden neurons are randomly drawn from a constrained set of difference vectors of between-class samples, rather than an open set of arbitrary vectors. Therefore, the CELM is expected to be more suitable for discriminative tasks, whilst retaining other advantages of ELM. The experimental results are presented to show the high efficiency of the CELM, compared with ELM and some other related learning machines.
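The constrained sampling described above (hidden weights drawn from difference vectors of between-class samples instead of arbitrary random vectors) can be sketched directly. Function names and the unit-normalization of the difference vectors are illustrative assumptions:

```python
import numpy as np

def celm_hidden_weights(X, y, n_hidden, rng=None):
    """Draw ELM hidden-layer weights from the constrained set of
    difference vectors between samples of different classes (sketch
    of the CELM idea).

    X : (N, D) training samples; y : (N,) class labels
    """
    rng = np.random.default_rng(rng)
    W = np.empty((n_hidden, X.shape[1]))
    for k in range(n_hidden):
        i = rng.integers(len(X))
        # Pick a second sample from a different class.
        others = np.flatnonzero(y != y[i])
        j = rng.choice(others)
        d = X[j] - X[i]
        W[k] = d / np.linalg.norm(d)   # normalized between-class difference
    return W
```

The rest of training would proceed as in standard ELM: compute hidden activations with these weights, then solve the output weights by least squares.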


Neurocomputing | 2014

Vehicle detection in driving simulation using extreme learning machine

Wentao Zhu; Jun Miao; Jiangbi Hu; Laiyun Qing

Automatic driving based on computer vision has attracted more and more attention from both research and industry. It faces two main challenges: high road and vehicle detection accuracy, and real-time performance. To study these two problems, we developed a driving simulation platform in a virtual scene. In this paper, as a first step toward a final solution, the Extreme Learning Machine (ELM) is used to detect the virtual roads and vehicles, with the Support Vector Machine (SVM) and Back Propagation (BP) network used as benchmarks. Our experimental results show that the ELM is the fastest on road segmentation and vehicle detection, with accuracy similar to the other techniques.
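A basic ELM, the learner used above, trains in closed form: hidden-layer weights are random and fixed, and only the output weights are solved by least squares, which is why it is fast. A minimal sketch (sigmoid activation and the hidden-layer size are illustrative choices):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Basic Extreme Learning Machine: random fixed hidden weights,
    sigmoid activation, output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

For a detection task, X would hold per-window image features and y the road/vehicle labels, with predictions thresholded into class decisions.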

Collaboration


Dive into Laiyun Qing's collaborations.

Top Co-Authors

Jun Miao, Chinese Academy of Sciences

Xilin Chen, Chinese Academy of Sciences

Lijuan Duan, Beijing University of Technology

Shiguang Shan, Chinese Academy of Sciences

Wentao Zhu, University of California

Weiqiang Wang, Chinese Academy of Sciences

Fang Fang, Harbin Institute of Technology

Qingming Huang, Chinese Academy of Sciences

Yuanhua Qiao, Beijing University of Technology