Weibin Yang
Chongqing University
Publication
Featured research published by Weibin Yang.
Pattern Recognition | 2014
Xiao Luan; Bin Fang; Linghui Liu; Weibin Yang; Jiye Qian
In this paper, we consider the problem of recognizing human faces from frontal views with varying illumination, as well as occlusion and disguise. Motivated by the latest research on the recovery of low-rank matrices using robust principal component analysis (RPCA), we present a novel approach to robust face recognition that exploits the sparse error component obtained by RPCA. Compared with the low-rank component, the associated sparse error component is revealed to exhibit more discriminating information, which benefits face identification. We define two descriptors (i.e., sparsity and smoothness) to represent characteristics of the sparse error component, and give two recognition protocols (i.e., the weight-based method and the ratio-based method) to classify face images. The efficacy of the proposed approach is verified on publicly available databases (i.e., Extended Yale B and AR) with promising results. Meanwhile, the proposed algorithm is robust since it does not assume any explicit prior knowledge about the illumination conditions or the nature of corrupted and occluded regions. Furthermore, the proposed method is not limited to face recognition and can also be extended to other image-based object recognition tasks.
Highlights:
This paper is motivated by robust principal component analysis (RPCA).
We exploit the sparse error component to perform face recognition.
We define two descriptors (i.e., sparsity and smoothness) to represent the sparse error image.
We present the weight-based method and the ratio-based method to classify face images.
Our method shows good performance on public face databases with illumination and occlusion.
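As a rough illustration of the idea above, the sketch below decomposes a data matrix into low-rank and sparse parts with a standard inexact-ALM solver for RPCA and computes two illustrative per-image measures; the descriptor definitions here are placeholders, not the paper's exact formulations.

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Principal component pursuit via inexact ALM: D ~ L (low rank) + E (sparse)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    mu = 1.25 / np.linalg.norm(D, 2)
    L = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding for the low-rank part
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0)) @ Vt
        # Soft thresholding for the sparse error part
        T = D - L + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (D - L - E)
        mu *= 1.5
        if np.linalg.norm(D - L - E) / np.linalg.norm(D) < tol:
            break
    return L, E

def sparse_error_descriptors(E_img):
    """Illustrative 'sparsity' and 'smoothness' measures of a sparse error image."""
    sparsity = np.mean(np.abs(E_img) > 1e-3)   # fraction of non-zero error pixels
    gy, gx = np.gradient(E_img)
    smoothness = np.mean(np.hypot(gx, gy))     # mean gradient magnitude
    return sparsity, smoothness
```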
IEEE Sensors Journal | 2011
Jiye Qian; Bin Fang; Weibin Yang; Xiao Luan; Hai Nan
We propose a tilt sensing scheme using a physical model with three sensitive axes of microelectromechanical systems (MEMS) accelerometers. Based on the physical model, in which the gravitational acceleration is resolved into three components, we propose three numerical models to sense the tilt angle. First, two exact numerical models are presented to measure the gravitational acceleration and its component along the tilt direction, respectively. The parameters of these two models are specific angles of the physical model, which can be used to assess the configuration of the physical model. Second, a measurement bias model is introduced to reduce the error resulting from the nonlinear relation between the gravitational acceleration and the tilt angle. Third, all three numerical models are unified into a linear model whose parameters can be efficiently estimated using the least squares method. In the experiments, we evaluate the performance of the proposed scheme by examining the mean, the standard deviation, and the maximum of the errors. The experimental results show that our scheme performs accurate tilt sensing with an average error below 0.1° over the measurement range (0°, 120°).
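The final step, fitting the unified linear model by least squares, can be sketched as follows; the design matrix and targets below are hypothetical stand-ins for the paper's calibration data.

```python
import numpy as np

# Hypothetical calibration data: readings from the three sensitive axes at known tilt angles.
# In the paper the three numerical models are unified into one linear model y = A @ theta;
# here A and y are placeholders built from such measurements.
angles_deg = np.linspace(0, 120, 25)                 # known reference tilt angles
a1, a2, a3 = np.random.rand(3, angles_deg.size)      # placeholder axis readings

A = np.column_stack([a1, a2, a3, np.ones_like(a1)])  # linear model with a bias term
y = np.cos(np.radians(angles_deg))                   # target: gravity component along the tilt

theta, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("estimated model parameters:", theta)
```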
Neurocomputing | 2013
Weibin Yang; Yuan Yan Tang; Bin Fang; Zhao Wei Shang; Yuewei Lin
This paper proposes a novel method for visual saliency detection based on a universal probabilistic model, which measures saliency by combining low-level features and a location prior. We view the task of estimating visual saliency as searching for the most conspicuous parts in an image, and extract the saliency map by computing the dissimilarity between different regions. We simulate the movement of the center of the human visual field and describe how this center-shift process affects visual saliency. Furthermore, multiscale analysis is adopted to improve the robustness of our model. Experimental results on three public image datasets show that the proposed approach outperforms 18 state-of-the-art methods for both salient object detection and human eye fixation prediction.
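A minimal sketch of this kind of pipeline, assuming patch-level Lab-color dissimilarity for the region comparison and a Gaussian center prior for the location term; the paper's exact model and center-shift simulation are not reproduced here.

```python
import numpy as np
import cv2

def saliency_map(img_bgr, patch=16, sigma_ratio=0.4):
    """Illustrative saliency: patch dissimilarity in Lab space weighted by a center prior."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w = lab.shape[:2]
    gh, gw = h // patch, w // patch
    # Mean Lab color of each patch
    feats = lab[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, 3).mean(axis=(1, 3))
    ys, xs = np.mgrid[0:gh, 0:gw]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float32)
    f = feats.reshape(-1, 3)
    # Dissimilarity of each patch to all others, attenuated by spatial distance
    color_d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    spatial_d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    sal = (color_d / (1.0 + spatial_d)).mean(axis=1).reshape(gh, gw)
    # Simple location prior: Gaussian centered on the image center
    cy, cx = (gh - 1) / 2.0, (gw - 1) / 2.0
    prior = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * (sigma_ratio * max(gh, gw)) ** 2))
    sal = cv2.resize((sal * prior).astype(np.float32), (w, h))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```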
IEEE Sensors Journal | 2013
Weibin Yang; Bin Fang; Yuan Yan Tang; Jiye Qian; Xudong Qin; Wenhua Yao
This paper proposes a robust inclinometer system using three monaxial microelectromechanical systems accelerometers and three monaxial fluxgate sensors. By formulating a basic three-sensitive-axes sensor model, we calibrate tilt and azimuth directly through a simple and effective linear model. To improve the accuracy of the proposed model, we present two different optimal solutions to minimize the systematic error, and adopt the interior-reflective Newton method and the sequential quadratic programming method to solve the problems, respectively. Experimental results demonstrate that our system performs excellently, with a maximum tilt-angle error of 0.09° in the applied measurement range (0°-120°) and a maximum azimuth-angle error of 0.4° in the measurement range (0°-360°).
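One of the two optimization routes, the sequential quadratic programming solution, might be sketched with SciPy's SLSQP solver as below; the model form and calibration data are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical calibration data: raw accelerometer/fluxgate readings and reference angles.
readings = np.random.rand(50, 6)            # 3 accelerometer + 3 fluxgate axes per sample
ref_tilt = np.random.uniform(0, 120, 50)    # known reference tilt angles (degrees)

def tilt_model(params, x):
    """Placeholder linear tilt model; the paper's calibrated model differs in detail."""
    w, b = params[:6], params[6]
    return np.degrees(np.arccos(np.clip(x @ w + b, -1.0, 1.0)))

def objective(params):
    # Mean squared error between modeled and reference tilt angles
    return np.mean((tilt_model(params, readings) - ref_tilt) ** 2)

res = minimize(objective, x0=np.zeros(7), method="SLSQP")
print("optimized parameters:", res.x)
```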
international conference on wavelet analysis and pattern recognition | 2009
Weibin Yang; Bin Fang; Yuan Yan Tang; Zhaowei Shang; Dong-Hui Li
A novel first-detect-then-identify approach using SIFT features and the discrete wavelet transform is proposed for tracking objects in real surveillance scenarios. For accurate and fast moving-object detection, the discrete wavelet transform is adopted to suppress frame noise that may cause detection errors, and objects are then detected by applying the inter-frame difference method to the low-frequency parts of two consecutive frames. SIFT features are subsequently used for object representation and identification due to their invariance properties. Experimental results demonstrate that the proposed strategy improves tracking performance compared with the classical mean-shift method, and show that the proposed algorithm can also be applied to multiple-object tracking in real scenarios.
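A simplified sketch of the detect-then-identify pipeline, using PyWavelets for the DWT and OpenCV's SIFT implementation; the thresholds and the Haar wavelet choice are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def detect_motion_mask(prev_gray, curr_gray, thresh=20):
    """Difference of the low-frequency DWT sub-bands of two consecutive frames."""
    prev_ll, _ = pywt.dwt2(prev_gray.astype(np.float32), "haar")
    curr_ll, _ = pywt.dwt2(curr_gray.astype(np.float32), "haar")
    diff = cv2.absdiff(curr_ll, prev_ll)
    mask = (diff > thresh).astype(np.uint8) * 255
    # Upsample the mask back to the original frame size
    return cv2.resize(mask, (prev_gray.shape[1], prev_gray.shape[0]),
                      interpolation=cv2.INTER_NEAREST)

def match_object(template_gray, frame_gray):
    """Identify the detected object by matching SIFT descriptors."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Ratio test to keep only distinctive matches
    return [m for m, n in matches if m.distance < 0.75 * n.distance]
```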
Proceedings 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC) | 2014
Daiming Zhang; Bin Fang; Weibin Yang; Xiaosong Luo; Yuan Yan Tang
Vision-based road sign detection and recognition has been widely used in intelligent robotics and autonomous driving technology. Currently, a one-time calibration of inverse perspective mapping (IPM) parameters is employed to eliminate the effect of perspective mapping, but it is not robust to uphill and downhill roads. We propose an automatic inverse perspective mapping method based on the vanishing point, which adapts to uphill and downhill roads even with a slight rotation of the main road direction. The proposed algorithm is composed of the following three steps: detecting the vanishing point, calculating the pitch and yaw angles, and adopting inverse perspective mapping to obtain the “bird's-eye view” image. Experimental results show that the adaptability of our inverse perspective mapping framework is comparable to existing state-of-the-art methods, which is conducive to the subsequent detection and recognition of road signs.
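A rough sketch of the second and third steps, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy); the exact derivation of the pitch and yaw angles in the paper may differ.

```python
import numpy as np
import cv2

def pitch_yaw_from_vp(vp, fx, fy, cx, cy):
    """Approximate camera pitch and yaw relative to the road from the vanishing point,
    under a pinhole model (a common approximation, not the paper's exact derivation)."""
    vx, vy = vp
    pitch = np.arctan2(cy - vy, fy)   # how far the camera looks above/below the horizon
    yaw = np.arctan2(vx - cx, fx)     # rotation of the main road direction in the image
    return pitch, yaw

def ipm_warp(img, src_quad, dst_size=(400, 600)):
    """Warp a road-plane quadrilateral (chosen from the pitch/yaw geometry) to a bird's-eye view."""
    w, h = dst_size
    dst_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(img, H, dst_size)
```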
international conference on intelligent transportation systems | 2014
Weibin Yang; Xiaosong Luo; Bin Fang; Daiming Zhang; Yuan Yan Tang
Most existing approaches detect the vanishing point by voting on the local dominant texture orientation at each pixel. However, when it is hard to distinguish natural road clues (items for finding the vanishing point) from background noise (e.g., stones and grass) in complex scenes, these methods may suffer from deteriorated accuracy and efficiency. In this paper, we introduce a novel vanishing point detection algorithm based on the proposed Weber Orientation Descriptor (WOD). We first employ the differential excitation component of WOD to extract reliable road clue regions from a complex background, and then adopt the orientation component of WOD and our proposed line-voting scheme (LVS) to locate the vanishing point. Experimental results on the benchmark dataset reveal a step forward in detection performance against state-of-the-art vanishing point detection methods.
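The two WOD components build on the Weber local descriptor; a simplified sketch of computing a differential excitation map and an orientation map is given below (the paper's exact descriptor and line-voting scheme are not reproduced).

```python
import numpy as np
from scipy.ndimage import convolve

def weber_components(gray):
    """Differential excitation and orientation in the spirit of the Weber local descriptor."""
    g = gray.astype(np.float32) + 1e-6
    # Sum of differences between each pixel and its 8 neighbours
    k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float32)
    diff_sum = convolve(g, k, mode="nearest")
    excitation = np.arctan(diff_sum / g)          # strength of salient texture
    # Orientation from horizontal and vertical neighbour differences
    gx = convolve(g, np.array([[0, 0, 0], [1, 0, -1], [0, 0, 0]], np.float32), mode="nearest")
    gy = convolve(g, np.array([[0, 1, 0], [0, 0, 0], [0, -1, 0]], np.float32), mode="nearest")
    orientation = np.arctan2(gy, gx)
    return excitation, orientation
```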
systems man and cybernetics | 2018
Weibin Yang; Bin Fang; Yuan Yan Tang
Fast and accurate visual scene understanding in autonomous vehicles is necessary but still very challenging. An autonomous vehicle must be taught to read the road like a human driver to better control the vehicle, so it is important to efficiently detect the road area and road markings. In this paper, we mainly focus on vanishing point detection and its application in inverse perspective mapping (IPM) for road marking understanding. We first propose a fast and accurate vanishing point detection method for various types of roads, by adopting and improving the Weber local descriptor to obtain salient representative texture and orientation information of the road area, and then voting for the dominant vanishing point with a simple line-voting scheme. Experimental results demonstrate that the proposed vanishing point detection approach achieves better performance than some state-of-the-art methods in terms of accuracy and computation time. Furthermore, we introduce the detected vanishing point into the IPM algorithm in structured road environments, since some important calibration parameters can be automatically calculated from the vanishing point, especially on rough roads. Experiments also show that our proposed vanishing point-based IPM method is adaptive and accurate, which is conducive to the subsequent detection and recognition of road markings.
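A much-simplified stand-in for a line-voting step: each salient pixel casts votes along its texture orientation into an accumulator whose peak is taken as the vanishing point. The thresholds and voting geometry below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def line_vote_vp(excitation, orientation, exc_thresh=0.5, step=2):
    """Vote along the texture orientation of salient pixels; return the accumulator peak."""
    h, w = excitation.shape
    acc = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.nonzero(excitation > exc_thresh)
    for y, x in zip(ys, xs):
        dy, dx = np.sin(orientation[y, x]), np.cos(orientation[y, x])
        # Walk along the orientation ray and vote in every cell it crosses
        for t in range(0, max(h, w), step):
            vy, vx = int(round(y - t * dy)), int(round(x - t * dx))
            if not (0 <= vy < h and 0 <= vx < w):
                break
            acc[vy, vx] += excitation[y, x]
    vy, vx = np.unravel_index(np.argmax(acc), acc.shape)
    return vx, vy
```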
international conference on machine learning and cybernetics | 2016
Peng Zhao; Bin Fang; Weibin Yang
Since lane information is necessary for improving road security in unmanned vehicle systems, lane detection is an important task. Most existing approaches are designed for specific road scenarios (such as highways or urban roads). However, detection precision may deteriorate if the lane markings are blurred or vestigial. Similarly, reflections and smudges on the road surface also influence the detection result. In this paper, we introduce a novel robust lane detection algorithm based on differential excitation. First, we extract the region of interest (ROI) by considering human visual attention. Then we enhance salient texture information and remove noise effectively through differential excitation, and obtain a binary image using Weber's law. Furthermore, we select the points that satisfy the proposed rules as voting points. Finally, the lane markings are detected and extracted by the Hough transform accordingly. Experimental results on an open database indicate that the proposed method outperforms the classical approaches of Sebdani and Low.
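A compact sketch of such a pipeline with illustrative thresholds: a Weber-style differential excitation binarizes salient lane texture inside the ROI, and the probabilistic Hough transform extracts line segments; the paper's voting-point rules are not reproduced.

```python
import cv2
import numpy as np

def detect_lane_lines(gray, roi_top_ratio=0.5, exc_thresh=0.6):
    """Binarize salient texture via differential excitation, then extract lines with Hough."""
    h, w = gray.shape
    roi = gray[int(h * roi_top_ratio):, :].astype(np.float32) + 1e-6
    k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float32)
    excitation = np.arctan(cv2.filter2D(roi, -1, k) / roi)
    binary = (np.abs(excitation) > exc_thresh).astype(np.uint8) * 255
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)
    # Shift detected segments back to full-image coordinates
    if lines is not None:
        lines[:, 0, [1, 3]] += int(h * roi_top_ratio)
    return lines
```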
international conference on wavelet analysis and pattern recognition | 2012
Weibin Yang; Bin Fang; Zhaowei Shang; Bo Lin
Image saliency attempts to describe the most conspicuous part of an input image by mimicking the human visual selective attention mechanism. Naturally, it can be adopted to improve object recognition. To demonstrate the effectiveness of saliency in object recognition, this paper proposes a salient hierarchical model. First, the traditional saliency model is modified for more robust saliency estimation. Second, the visual saliency detection method is combined with the Hierarchical Maximization model to provide more useful visual information for classification. Experimental results show that the improved saliency model extracts conspicuous regions more accurately, and the proposed salient hierarchical model outperforms the Hierarchical Maximization model.
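A greatly simplified sketch of the combination: Gabor filter responses (S1-like) are weighted by a saliency map (e.g., one produced by the saliency method above) and max-pooled (C1-like) into a feature vector; the actual Hierarchical Maximization model is considerably richer, so this is only an illustration of the idea.

```python
import cv2
import numpy as np

def salient_gabor_features(gray, saliency, ksize=11, pool=8):
    """S1-like Gabor responses weighted by a same-size saliency map, then C1-like max pooling."""
    g = gray.astype(np.float32) / 255.0
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):              # 4 orientations
        kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5, 0)
        resp = np.abs(cv2.filter2D(g, -1, kern)) * saliency   # emphasize conspicuous regions
        h, w = resp.shape
        resp = resp[:h - h % pool, :w - w % pool]
        pooled = resp.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)   # feature vector for an off-the-shelf classifier
```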