Q.M.J. Wu
University of Windsor
Publications
Featured research published by Q.M.J. Wu.
IEEE Transactions on Multimedia | 2007
Wei Zhang; Xiang Zhong Fang; Xiaokang Yang; Q.M.J. Wu
Moving object segmentation plays a very important role in real-time image analysis. However, as one of the common parts of natural scenes, shadows severely interfere with the accuracy of moving object detection in video surveillance. In this paper, we present a novel method for moving cast shadow detection. Based on an analysis of the physical model of moving shadows, we prove that the ratio edge is illumination invariant. The distribution of the ratio edge is discussed, and a significance test is performed to classify each moving pixel as foreground object or moving shadow. An intensity constraint and geometric heuristics are imposed to further improve the performance. Experiments on various typical scenes exhibit the robustness of the proposed method. Extensive quantitative evaluation and comparison demonstrate that the proposed method significantly outperforms state-of-the-art methods.
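The illumination-invariance idea behind the ratio edge can be illustrated with a small numpy sketch. This is a toy, not the paper's exact formulation: the 4-neighbour ratio map, the mean-deviation score, and the threshold `tau` are assumptions standing in for the paper's statistical significance test.

```python
import numpy as np

def ratio_edge(img, eps=1e-6):
    """Ratio of each pixel to its 4 neighbours (illustrative ratio-edge map).
    A cast shadow scales intensities locally by a near-constant factor,
    so these ratios stay close to the background's ratios."""
    img = img.astype(np.float64) + eps
    up    = img / np.roll(img, 1, axis=0)
    down  = img / np.roll(img, -1, axis=0)
    left  = img / np.roll(img, 1, axis=1)
    right = img / np.roll(img, -1, axis=1)
    return np.stack([up, down, left, right])

def classify_shadow(frame, background, moving_mask, tau=0.1):
    """Label moving pixels whose ratio edges match the background as shadow."""
    rf = ratio_edge(frame)
    rb = ratio_edge(background)
    diff = np.abs(rf - rb).mean(axis=0)   # mean deviation over the 4 neighbours
    return moving_mask & (diff < tau)     # small deviation -> cast shadow
```

A uniformly darkened patch (shadow) keeps its ratio edges, while a patch with new texture (a foreground object) changes them, so the test separates the two cases.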
IEEE Transactions on Intelligent Transportation Systems | 2008
Weigang Zhang; Q.M.J. Wu; Xiaokang Yang; Xiangzhong Fang
This paper presents a multilevel framework to detect and handle vehicle occlusion. The proposed framework consists of the intraframe, interframe, and tracking levels. On the intraframe level, occlusion is detected by evaluating the compactness ratio and interior distance ratio of vehicles, and the detected occlusion is handled by removing a "cutting region" of the occluded vehicles. On the interframe level, occlusion is detected by performing subtractive clustering on the motion vectors of vehicles, and the occluded vehicles are separated according to the binary classification of motion vectors. On the tracking level, occlusion layer images are adaptively constructed and maintained, and the detected vehicles are tracked in both the captured images and the occlusion layer images by performing a bidirectional occlusion reasoning algorithm. The proposed intraframe, interframe, and tracking levels are sequentially implemented in our framework. Experiments on various typical scenes exhibit the effectiveness of the proposed framework. Quantitative evaluation and comparison demonstrate that the proposed method outperforms state-of-the-art methods.
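The intraframe compactness cue can be sketched in a few lines. The 4πA/P² definition of compactness and the 0.4 threshold below are common conventions assumed for illustration, not values taken from the paper:

```python
import math

def compactness_ratio(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a circle, drops for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def likely_occluded(area, perimeter, threshold=0.4):
    """A blob formed by two merged (occluding) vehicles is less compact
    than a single vehicle, so a low compactness ratio flags occlusion."""
    return compactness_ratio(area, perimeter) < threshold
```

For example, a 10 x 10 square blob scores π/4 ≈ 0.785 (compact, single vehicle), while a 100 x 5 merged strip scores about 0.14 and is flagged.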
international conference on pattern recognition | 2008
T. Mandal; Q.M.J. Wu
This paper identifies a novel feature space to address the problem of human face recognition from still images. It is based on the PCA space of the features extracted by a new multiresolution analysis tool called the Fast Discrete Curvelet Transform. The Curvelet Transform has better directional and edge representation abilities than the widely used wavelet transform. Inspired by these attractive attributes of curvelets, we introduce the idea of decomposing images into their curvelet subbands and applying PCA (Principal Component Analysis) on the selected subbands in order to create a representative feature set. Experiments have been designed for both single and multiple training images per subject. A comparative study with wavelet-based and traditional PCA techniques is also presented. The high accuracy rates achieved by the proposed method on two well-known databases indicate the potential of this curvelet-based feature extraction method.
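The subband-then-PCA pipeline can be sketched as follows. Since a curvelet transform is not available in numpy, a crude 2x2-averaging low-frequency band stands in for the curvelet approximation subband; the PCA step via SVD is the standard construction. Everything here (band choice, dimensions, random stand-in faces) is an illustrative assumption:

```python
import numpy as np

def coarse_subband(img):
    """Crude low-frequency band (2x2 block averaging), standing in for a
    curvelet approximation subband (the paper uses the Fast Discrete
    Curvelet Transform)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pca_basis(features, k):
    """Top-k principal components of row-wise feature vectors via SVD."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:k]            # project a sample x with (x - mean) @ vt[:k].T

# "Faces" as flattened subband coefficients (random stand-ins here).
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 16, 16))
feats = np.stack([coarse_subband(f).ravel() for f in faces])
mean, basis = pca_basis(feats, k=5)
proj = (feats - mean) @ basis.T    # 5-D signature per face
```

Recognition would then compare these low-dimensional signatures (e.g., by nearest neighbour), exactly as in classical eigenfaces but on subband coefficients rather than raw pixels.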
IEEE Transactions on Intelligent Transportation Systems | 2012
Wei Zhang; Q.M.J. Wu; Guanghui Wang; Xinge You
Traffic surveillance is an important topic in computer vision and intelligent transportation systems and has been intensively studied in the past decades. However, most state-of-the-art methods concentrate on daytime traffic monitoring. In this paper, we propose a nighttime traffic surveillance system, which consists of headlight detection, headlight tracking and pairing, and camera calibration and vehicle speed estimation. First, vehicle headlights are detected using a reflection intensity map and a reflection-suppressed map based on an analysis of the light attenuation model. Second, the headlights are tracked and paired by utilizing a simple yet effective bidirectional reasoning algorithm. Finally, the trajectories of the vehicle headlights are employed to calibrate the surveillance camera and estimate the vehicles' speeds. Experimental results on typical sequences show that the proposed method can robustly detect, track, and pair vehicle headlights in night scenes. Extensive quantitative evaluations and related comparisons demonstrate that the proposed method outperforms state-of-the-art methods.
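A much-simplified version of the detect-then-pair stage can be sketched as below. The brightness threshold and flood-fill blob detector stand in for the paper's reflection-map-based detector, and the pairing heuristic (two lamps share a row and a plausible horizontal gap) is an assumption standing in for the bidirectional reasoning algorithm:

```python
import numpy as np

def bright_blobs(gray, thresh=220):
    """Centroids of bright connected components via 4-connected flood fill
    (stand-in for the paper's reflection-map-based headlight detector)."""
    visited = np.zeros(gray.shape, bool)
    blobs = []
    for y, x in zip(*np.where(gray >= thresh)):
        if visited[y, x]:
            continue
        stack, pix = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            pix.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < gray.shape[0] and 0 <= nx < gray.shape[1]
                        and not visited[ny, nx] and gray[ny, nx] >= thresh):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pix)
        blobs.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return blobs

def pair_headlights(blobs, max_dy=3, min_dx=10, max_dx=80):
    """Greedy pairing: two lamps of one vehicle lie on roughly the same
    image row, separated by a plausible horizontal gap (pixel bounds are
    illustrative and would depend on camera geometry)."""
    pairs, used = [], set()
    for i, (yi, xi) in enumerate(blobs):
        for j in range(i + 1, len(blobs)):
            if i in used or j in used:
                continue
            yj, xj = blobs[j]
            if abs(yi - yj) <= max_dy and min_dx <= abs(xi - xj) <= max_dx:
                pairs.append((i, j))
                used.update((i, j))
    return pairs
```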
canadian conference on computer and robot vision | 2008
E. Parvizi; Q.M.J. Wu
In this paper, we propose a multiple object tracking algorithm in the three-dimensional (3D) domain based on a state-of-the-art, adaptive range segmentation method. The performance of the segmentation process has an important impact on the achieved tracking results. Furthermore, segmentation methods that perform best on intensity images will not necessarily achieve promising results when applied to depth images from a time-of-flight sensor. Here, the employed segmentation enables real-time tracking analysis with significantly high preprocessing efficiency. Our experiments confirm the robustness as well as the efficiency of the proposed approach.
computer vision and pattern recognition | 2008
B. Khaleghi; S. Ahuja; Q.M.J. Wu
In this paper we describe a fully integrated, real-time, miniaturized embedded stereo vision system (MESVS-II), which fits within 5 × 5 cm and consumes very low power. This is a significant improvement over the original MESVS-I system in terms of performance, quality, and accuracy of results. MESVS-II, running at 600 MHz per core, is capable of operating at up to 20 fps, twice as fast as MESVS-I, due to the efficient implementation of stereo-vision algorithms, improved memory and data management, an in-place processing scheme, code optimization, and a pipelined-programming model that takes advantage of the dual-core architecture of the embedded processor. The firmware incorporates sub-sampling, rectification, pre-processing, matching, LRC (Left/Right Consistency) check, and post-processing. As demonstrated by our experimental results, we have also enhanced the robustness of the stereo-matching engine to radiometric variations by choosing the census transform over the rank transform.
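The census transform mentioned above is a standard local descriptor and is easy to sketch; the 3x3 window and wrap-around border handling below are illustrative choices, not MESVS-II's fixed-point implementation:

```python
import numpy as np

def census_transform(img, win=3):
    """Bit-string per pixel: each bit is 1 where a neighbour is darker than
    the centre. Because only orderings matter, the codes survive monotonic
    brightness changes, which is why census is more robust to radiometric
    variation than raw-intensity or rank-based costs."""
    r = win // 2
    out = np.zeros(img.shape, np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming_cost(c1, c2):
    """Matching cost between census codes = number of differing bits."""
    x = np.bitwise_xor(c1, c2)
    return np.array([bin(v).count("1") for v in x.ravel()]).reshape(x.shape)
```

Stereo matching then minimizes the Hamming cost between left and right census codes along each scanline, instead of comparing intensities directly.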
international conference on image analysis and recognition | 2009
Rashid Minhas; Abdul Adeel Mohammed; Q.M.J. Wu; Maher A. Sid-Ahmed
The technique utilized to retrieve spatial information from a sequence of images with a varying focus plane is termed shape from focus (SFF). Traditional SFF techniques perform inadequately due to their inability to deal with images that contain high contrast variations between different regions, shadows, defocused points, noise, and oriented edges. A novel technique to compute SFF and the depth map is proposed using steerable filters. Steerable filters, designed in quadrature pairs for better control over phase and orientation, have successfully been applied in many image analysis and pattern recognition schemes. Steerable filters provide an architecture for synthesizing filters of arbitrary orientation using a linear combination of basis filters; this synthesis determines the filter output analytically as a function of orientation. SFF is computed using steerable filters on a variety of image sequences. Quantitative and qualitative performance analyses validate the enhanced performance of our proposed scheme.
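The "linear combination of basis filters" idea is concrete for the simplest steerable family, the first derivative of a Gaussian, where a filter at any orientation θ is cos(θ)·G_x + sin(θ)·G_y. The sketch below shows that case; the kernel size and σ are arbitrary, and the paper's quadrature-pair filters are a richer construction than this minimal example:

```python
import numpy as np

def gaussian_deriv_basis(size=9, sigma=1.5):
    """Basis pair for the first-derivative-of-Gaussian steerable family."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    gx = -xx / sigma ** 2 * g      # d/dx of the Gaussian
    gy = -yy / sigma ** 2 * g      # d/dy of the Gaussian
    return gx, gy

def steer(gx, gy, theta):
    """Synthesize a filter at arbitrary orientation theta from the two
    basis filters: G_theta = cos(theta)*G_x + sin(theta)*G_y."""
    return np.cos(theta) * gx + np.sin(theta) * gy
```

Because the combination is linear, one can equivalently filter the image with the two basis kernels once and steer the *responses* to any orientation afterwards, which is what makes the focus measure cheap to evaluate over many orientations.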
International Journal of Computer Vision | 2010
Guanghui Wang; Q.M.J. Wu
This paper addresses the problem of factorization-based 3D reconstruction from uncalibrated image sequences. Previous studies on structure and motion factorization are based either on the simplified affine assumption or on general perspective projection. The affine approximation is widely adopted due to its simplicity, whereas the extension to the perspective model suffers from the difficulty of recovering projective depths. To fill the gap between the simplicity of the affine model and the accuracy of the perspective model, we propose a quasi-perspective projection model for structure and motion recovery of rigid and nonrigid objects based on the factorization framework. The novelty and contributions of this paper are as follows. First, under the assumption that the camera is far away from the object with small lateral rotations, we prove that the imaging process can be modeled by quasi-perspective projection, which is more accurate than the affine model from both geometrical error analysis and experimental studies. Second, we apply the model to establish a framework of rigid and nonrigid factorization under the quasi-perspective assumption. Finally, we propose an Extended Cholesky Decomposition to recover the rotation part of the Euclidean upgrading matrix. We also prove that the last column of the upgrading matrix corresponds to a global scale and translation of the camera and thus may be set freely. The proposed method is validated and evaluated extensively on synthetic and real image sequences, and improved results over existing schemes are observed.
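For readers unfamiliar with factorization, the affine baseline that the quasi-perspective model refines can be sketched in a few lines: stack tracked 2D points into a measurement matrix, centre it, and split it by rank-constrained SVD. This is the classical Tomasi-Kanade affine step, not the paper's quasi-perspective formulation, and it recovers motion and structure only up to an unknown upgrading matrix (the part the paper's Extended Cholesky Decomposition addresses):

```python
import numpy as np

def factorize_rigid(W, rank=3):
    """Rank-constrained factorization of a 2F x P tracking matrix into
    motion M (2F x rank) and structure S (rank x P), after registering
    image coordinates to their centroid. The factors are defined only up
    to an invertible upgrading matrix: M @ S is unique, M and S are not."""
    W = W - W.mean(axis=1, keepdims=True)          # remove per-row translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])            # motion (camera) part
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]     # structure (shape) part
    return M, S
```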
computer vision and pattern recognition | 2008
Guanghui Wang; Q.M.J. Wu; Guoqiang Sun
The paper addresses the problem of factorization-based 3D reconstruction from uncalibrated image sequences. We propose a quasi-perspective projection model and apply the model to structure and motion recovery of rigid and nonrigid objects based on factorization of the tracking matrix. The novelty and contributions of the paper lie in three aspects. First, under the assumption that the camera is far away from the object with small rotations, we propose and prove that the imaging process can be modeled by quasi-perspective projection. The model is more accurate than the affine model since the projective depths are implicitly embedded. Second, we apply the model to the factorization algorithm and establish the framework of rigid and nonrigid factorization under the quasi-perspective assumption. Third, we propose a new and robust method to recover the transformation matrix that upgrades the factorization to the Euclidean space. The proposed method is validated and evaluated on synthetic and real image sequences, and good improvements over existing solutions are observed.
canadian conference on computer and robot vision | 2008
B. Khaleghi; S. Ahuja; Q.M.J. Wu
We have developed a fully integrated, miniaturized embedded stereo vision system (MESVS-I) which fits into a tiny 5 × 5 cm package and consumes very low power (700 mA @ 3.3 V). The system consists of two small-profile CMOS cameras and a power-efficient, dual-core embedded media processor running at 600 MHz per core. The stereo-matching engine performs sub-sampling, rectification, pre-processing using the rank transform, correlation-based matching using three levels of recursion, L/R consistency check, and post-processing. We have proposed a novel and efficient post-processing algorithm that removes outliers due to low-texture regions and depth discontinuities by combining the contributions from the variance map of the rectified image, the disparity map, and the variance map of the disparity map. To further enhance the performance of the system, we have implemented a two-stage pipelined-processing scheme that takes advantage of the dual-core architecture of the embedded processor, thereby achieving a processing speed of around 10 fps for disparity maps.
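The variance-map post-processing idea can be sketched as follows. The window size, thresholds, and the way the two cues are combined are illustrative assumptions; the paper combines three maps in a way this simplified two-cue sketch does not reproduce exactly:

```python
import numpy as np

def box_sum(img, win):
    """Sum over a win x win window at every pixel (edge-replicated border)."""
    r = win // 2
    h, w = img.shape
    p = np.pad(img.astype(np.float64), r, mode='edge')
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out

def local_variance(img, win=5):
    """Windowed variance: E[x^2] - E[x]^2 over each win x win window."""
    n = win * win
    s = box_sum(img, win)
    s2 = box_sum(np.asarray(img, np.float64) ** 2, win)
    return s2 / n - (s / n) ** 2

def filter_disparity(disp, image, tex_thresh=5.0, disc_thresh=4.0, win=5):
    """Invalidate disparities in low-texture regions (low image variance)
    or near depth discontinuities (high disparity variance) - a simplified
    two-cue version of the variance-map combination in the abstract."""
    low_texture = local_variance(image, win) < tex_thresh
    discontinuity = local_variance(disp, win) > disc_thresh
    return np.where(~(low_texture | discontinuity), disp, -1)  # -1 = invalid
```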