
Publication


Featured research published by Chenglu Wen.


IEEE Transactions on Geoscience and Remote Sensing | 2016

Vehicle Detection in High-Resolution Aerial Images via Sparse Representation and Superpixels

Ziyi Chen; Cheng Wang; Chenglu Wen; Xiuhua Teng; Yiping Chen; Haiyan Guan; Huan Luo; Liujuan Cao; Jonathan Li

This paper presents a study of vehicle detection from high-resolution aerial images. In this paper, a superpixel segmentation method designed for aerial images is proposed to control the segmentation with a low breakage rate. To make training and detection more efficient, we extract meaningful patches based on the centers of the segmented superpixels. After segmentation, through a training sample selection iteration strategy based on sparse representation, we obtain a complete and small training subset from the original entire training set. With the selected training subset, we obtain a dictionary with high discrimination ability for vehicle detection. During training and detection, grids of the histogram of oriented gradients (HOG) descriptor are used for feature extraction. To further improve training and detection efficiency, a method is proposed to estimate a defined main direction for each patch. By rotating each patch to its main direction, we give the patches consistent directions. Comprehensive analyses and comparisons on two data sets illustrate the satisfactory performance of the proposed algorithm.
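The main-direction normalization described above can be sketched as follows — a minimal illustration that assumes patch content is represented as a 2-D point set and uses PCA as the direction estimator (the paper defines its own estimator):

```python
import numpy as np

def main_direction(points):
    """Estimate the dominant direction of a 2-D point set via PCA.

    A stand-in for the paper's per-patch main-direction estimation:
    the principal eigenvector of the covariance matrix gives the axis
    along which the patch content is most elongated."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    v = eigvecs[:, np.argmax(eigvals)]            # principal axis
    return np.arctan2(v[1], v[0])

def rotate_to_main_direction(points):
    """Rotate a patch so its main direction aligns with the x-axis,
    giving all patches a consistent orientation before HOG extraction."""
    theta = main_direction(points)
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])
    centered = points - points.mean(axis=0)
    return centered @ R.T

# A patch elongated along the 45-degree diagonal:
pts = np.array([[t, t] for t in np.linspace(0, 10, 21)])
pts = pts + np.random.default_rng(0).normal(0, 0.05, pts.shape)
aligned = rotate_to_main_direction(pts)
print(aligned[:, 0].std() > 10 * aligned[:, 1].std())  # elongation now lies along x
```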


IEEE Geoscience and Remote Sensing Letters | 2014

Object Detection in Terrestrial Laser Scanning Point Clouds Based on Hough Forest

Hanyun Wang; Cheng Wang; Huan Luo; Peng Li; Ming Cheng; Chenglu Wen; Jonathan Li

This letter presents a novel rotation-invariant method for object detection from terrestrial 3-D laser scanning point clouds acquired in complex urban environments. We utilize the Implicit Shape Model to describe object categories and extend the Hough Forest framework for object detection in 3-D point clouds. A 3-D local patch is described by structure and reflectance features and then mapped to a probabilistic vote about the possible location of the object center. Objects are detected at the peak points in the 3-D Hough voting space. To deal with the arbitrary azimuths of objects in the real world, a circular voting strategy is introduced by rotating the offset vector. To deal with interference from adjacent objects, distance-weighted voting is proposed. Large-scale real-world point cloud data collected by terrestrial mobile laser scanning systems are used to evaluate the performance. Experimental results demonstrate that the proposed method outperforms state-of-the-art 3-D object detection methods.
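The circular voting strategy lends itself to a small sketch. Assuming a 2-D ground-plane view (azimuth rotation is about the vertical axis) and an invented 36-bin angular discretization, rotating each patch's learned offset vector through all azimuths makes the votes trace circles that intersect at the object center:

```python
import numpy as np

def circular_votes(patch_xy, offset, n_angles=36):
    """Cast votes for an object center from one matched patch.

    Sketch of circular voting: because the object's azimuth is
    arbitrary, the learned offset vector (patch -> object center) is
    rotated through a discretized set of azimuths, casting one vote
    at each rotated position."""
    votes = []
    for k in range(n_angles):
        a = 2 * np.pi * k / n_angles
        c, s = np.cos(a), np.sin(a)
        dx = c * offset[0] - s * offset[1]
        dy = s * offset[0] + c * offset[1]
        votes.append((patch_xy[0] + dx, patch_xy[1] + dy))
    return np.array(votes)

def hough_peak(votes, grid=1.0):
    """Accumulate votes on a coarse grid and return the busiest cell;
    the peak of the voting space marks a detected object center."""
    cells = {}
    for x, y in votes:
        key = (int(round(float(x) / grid)), int(round(float(y) / grid)))
        cells[key] = cells.get(key, 0) + 1
    best = max(cells, key=cells.get)
    return best[0] * grid, best[1] * grid

# Three patches at distance 2 from the true center (5, 5); each casts a
# circle of votes, and the circles intersect only at the center.
patches = [(7.0, 5.0), (5.0, 7.0), (3.0, 5.0)]
votes = np.vstack([circular_votes(p, offset=(2.0, 0.0)) for p in patches])
print(hough_peak(votes))  # votes intersect near (5.0, 5.0)
```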


IEEE Transactions on Intelligent Transportation Systems | 2016

Spatial-Related Traffic Sign Inspection for Inventory Purposes Using Mobile Laser Scanning Data

Chenglu Wen; Jonathan Li; Huan Luo; Yongtao Yu; Zhipeng Cai; Hanyun Wang; Cheng Wang

This paper presents a spatial-related traffic sign inspection process for sign type, position, and placement using mobile laser scanning (MLS) data acquired by a RIEGL VMX-450 system and presents its potential for traffic sign inventory applications. First, the paper describes an algorithm for traffic sign detection in complicated road scenes based on the retroreflectivity properties of traffic signs in MLS point clouds. Then, a point cloud-to-image registration process is proposed to project the traffic sign point clouds onto a 2-D image plane. Third, based on the extracted traffic sign points, we propose a traffic sign position and placement inspection process by creating geospatial relations between the traffic signs and road environment. For further inventory applications, we acquire several spatial-related inventory measurements. Finally, a traffic sign recognition process is conducted to assign sign type. With the acquired sign type, position, and placement data, a spatial-associated sign network is built. Experimental results indicate satisfactory performance of the proposed detection, recognition, position, and placement inspection algorithms. The experimental results also prove the potential of MLS data for automatic traffic sign inventory applications.
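The retroreflectivity-based detection step can be illustrated with a deliberately simplified filter; the intensity threshold and the height-above-ground check below are assumptions for the sketch, not the paper's algorithm:

```python
import numpy as np

def detect_sign_candidates(points, intensity, i_thresh=0.8, min_height=1.0):
    """Candidate traffic-sign points from MLS data (simplified sketch).

    Traffic signs are retroreflective, so their laser return intensity
    is much higher than that of the surroundings. A normalized-intensity
    threshold plus a height check (signs sit above the road surface)
    stands in here for the paper's full detection algorithm."""
    mask = (intensity > i_thresh) & (points[:, 2] > min_height)
    return points[mask]

# Road surface (low intensity, near ground) and a sign panel at ~2.5 m:
road = np.array([[x, 0.0, 0.0] for x in range(20)], dtype=float)
sign = np.array([[5.0, 1.0, 2.5], [5.1, 1.0, 2.5], [5.2, 1.0, 2.6]])
points = np.vstack([road, sign])
intensity = np.concatenate([np.full(20, 0.2), np.full(3, 0.95)])
print(len(detect_sign_candidates(points, intensity)))  # 3
```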


IEEE Transactions on Intelligent Transportation Systems | 2016

Patch-Based Semantic Labeling of Road Scene Using Colorized Mobile LiDAR Point Clouds

Huan Luo; Cheng Wang; Chenglu Wen; Zhipeng Cai; Ziyi Chen; Hanyun Wang; Yongtao Yu; Jonathan Li

Semantic labeling of road scenes using colorized mobile LiDAR point clouds is of great significance in a variety of applications, particularly intelligent transportation systems. However, many challenges, such as incompleteness of objects caused by occlusion, overlapping between neighboring objects, interclass local similarities, and computational burden brought by a huge number of points, make it an ongoing open research area. In this paper, we propose a novel patch-based framework for labeling road scenes of colorized mobile LiDAR point clouds. In the proposed framework, first, three-dimensional (3-D) patches extracted from point clouds are used to construct a 3-D patch-based match graph structure (3D-PMG), which transfers category labels from labeled to unlabeled point cloud road scenes efficiently. Then, to rectify the transferring errors caused by local patch similarities in different categories, contextual information among 3-D patches is exploited by combining 3D-PMG with Markov random fields. In the experiments, the proposed framework is validated on colorized mobile LiDAR point clouds acquired by the RIEGL VMX-450 mobile LiDAR system. Comparative experiments show the superior performance of the proposed framework for accurate semantic labeling of road scenes.
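The label-transfer idea behind the 3D-PMG can be caricatured as nearest-patch matching in feature space; the patch features and labels below are invented, and the MRF-based contextual correction the paper adds on top is omitted:

```python
import numpy as np

def transfer_patch_labels(F_unlab, F_lab, y_lab):
    """Label transfer between road scenes via patch matching.

    Bare-bones stand-in for the paper's match graph: each 3-D patch of
    the unlabeled scene is matched to its nearest patch (in feature
    space) from the labeled scene and inherits that patch's category
    label."""
    d = np.linalg.norm(F_unlab[:, None] - F_lab[None], axis=2)
    return y_lab[d.argmin(axis=1)]

# Toy patch features: "road" patches near (0, 0), "car" patches near (3, 3).
F_lab = np.array([[0.0, 0.1], [0.1, 0.0], [3.0, 3.1], [3.1, 3.0]])
y_lab = np.array(["road", "road", "car", "car"])
F_unlab = np.array([[0.05, 0.05], [2.9, 3.0]])
print(transfer_patch_labels(F_unlab, F_lab, y_lab))  # ['road' 'car']
```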


IEEE Geoscience and Remote Sensing Letters | 2015

Road Boundaries Detection Based on Local Normal Saliency From Mobile Laser Scanning Data

Hanyun Wang; Huan Luo; Chenglu Wen; Jun Cheng; Peng Li; Yiping Chen; Cheng Wang; Jonathan Li

The accurate extraction of roads is a prerequisite for the automatic extraction of other road features. This letter describes a method for detecting road boundaries from mobile laser scanning (MLS) point clouds in an urban environment. The key idea of our method is to construct a saliency map directly on 3-D unorganized point clouds to extract road boundaries. The method consists of four major steps: road partition with the assistance of the vehicle trajectory, saliency map construction and salient point extraction, curb detection and curb lowest-point extraction, and road boundary fitting. The performance of the proposed method is evaluated on the point clouds of an urban scene collected by a RIEGL VMX-450 MLS system. The completeness, correctness, and quality of the extracted road boundaries are 95.41%, 99.35%, and 94.81%, respectively. Experimental results demonstrate that our method is feasible for detecting road boundaries in MLS point clouds.
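The saliency construction can be sketched on a single road cross-section. Here the saliency score is simply the angle between each point's local normal and the vertical axis — an assumed proxy for the letter's local normal saliency, not its exact definition:

```python
import numpy as np

def normal_saliency(profile):
    """Saliency of each point in a 2-D road cross-section (x, z).

    On the road surface the local normal is near-vertical, while at a
    curb it tilts sharply, so the angle between each local normal and
    the vertical serves as a saliency score; peaks mark curb candidates."""
    sal = np.zeros(len(profile))
    for i in range(1, len(profile) - 1):
        tangent = profile[i + 1] - profile[i - 1]
        normal = np.array([-tangent[1], tangent[0]])
        normal = normal / np.linalg.norm(normal)
        # angle from vertical: 0 on flat road, large at the curb
        sal[i] = np.arccos(np.clip(abs(normal[1]), -1.0, 1.0))
    return sal

# Flat road with a 0.15 m curb step at x = 5:
xs = np.linspace(0, 10, 101)
zs = np.where(xs < 5, 0.0, 0.15)
sal = normal_saliency(np.column_stack([xs, zs]))
print(xs[sal.argmax()])  # the most salient point sits at the curb step
```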


IEEE Geoscience and Remote Sensing Letters | 2014

Three-Dimensional Indoor Mobile Mapping With Fusion of Two-Dimensional Laser Scanner and RGB-D Camera Data

Chenglu Wen; Ling Qin; Qingyuan Zhu; Cheng Wang; Jonathan Li

Three-dimensional mobile mapping in indoor environments, which are mostly global navigation satellite system-denied spaces, consecutively aligns frames to build a global 3-D map of an indoor environment. A major difficulty for current solutions is failure when overlap between frames is insufficient, i.e., when correspondences between frames are lacking. To overcome this problem, a 3-D indoor mobile mapping system that integrates a 2-D laser scanner and an RGB-Depth camera is presented in this letter. In this system, a fusion-iterative closest point (ICP) method, which combines the 2-D mobile platform pose from a Rao-Blackwellized particle filter estimation, an ICP, and a generalized-ICP method, is proposed for consecutive frame alignment. Fusion-ICP achieves effective frame alignment, particularly in solving the insufficient-overlap frame alignment problem. Comparative experiments were conducted to evaluate the mapping system. The experimental results demonstrate the effectiveness and efficiency of our system for 3-D indoor mobile mapping.
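The benefit of seeding ICP with the particle-filter pose can be sketched with a minimal 2-D point-to-point ICP; the pose prior (x, y, yaw) initializes the alignment, and the generalized-ICP component of fusion-ICP is omitted here:

```python
import numpy as np

def icp_2d(src, dst, init_pose=(0.0, 0.0, 0.0), iters=20):
    """Minimal 2-D point-to-point ICP seeded with a pose prior.

    The platform pose from the particle filter (init_pose = x, y, yaw)
    initializes the alignment, so ICP can converge even when frame
    overlap is small."""
    x, y, th = init_pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    cur = src @ R.T + np.array([x, y])
    for _ in range(iters):
        # nearest-neighbour correspondences
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d.argmin(axis=1)]
        # best rigid transform via SVD (Kabsch algorithm)
        mc, md = cur.mean(0), nn.mean(0)
        H = (cur - mc).T @ (nn - md)
        U, _, Vt = np.linalg.svd(H)
        Rk = Vt.T @ U.T
        if np.linalg.det(Rk) < 0:           # guard against reflections
            Vt[-1] *= -1
            Rk = Vt.T @ U.T
        cur = (cur - mc) @ Rk.T + md
    return cur

# Frame alignment demo: dst is src rotated by 30 degrees and shifted.
rng = np.random.default_rng(1)
src = rng.uniform(0, 5, (40, 2))
th = np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = src @ R.T + [2.0, 1.0]
aligned = icp_2d(src, dst, init_pose=(2.0, 1.0, np.pi / 6))  # prior ~ true pose
print(np.abs(aligned - dst).max())  # residual is near zero
```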


IEEE Transactions on Intelligent Transportation Systems | 2016

Vehicle Detection in High-Resolution Aerial Images Based on Fast Sparse Representation Classification and Multiorder Feature

Ziyi Chen; Cheng Wang; Huan Luo; Hanyun Wang; Yiping Chen; Chenglu Wen; Yongtao Yu; Liujuan Cao; Jonathan Li

This paper presents an algorithm for vehicle detection in high-resolution aerial images through a fast sparse representation classification method and a multiorder feature descriptor that contains texture, color, and high-order context information. To speed up the computation of sparse representation, a set of small dictionaries, instead of a large dictionary containing all training items, is used for classification. To extract the context information of a patch, we propose a high-order context information extraction method based on the proposed fast sparse representation classification method. To effectively extract color information, the RGB color space is transformed into color name space. Then, the color name information is embedded into the grids of the histogram of oriented gradients feature to represent the low-order feature of vehicles. By combining low- and high-order features, a multiorder feature is used to describe vehicles. We also propose a sample selection strategy based on our fast sparse representation classification method to construct a complete training subset. Finally, a set of dictionaries, trained on the multiorder features of the selected training subset, is used to detect vehicles based on superpixel segmentation results of aerial images. Experimental results illustrate the satisfactory performance of our algorithm.
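The small-dictionary classification step can be sketched as residual-based classification with one dictionary per class; least squares stands in for a true sparse solver, and the toy dictionaries below are invented for illustration:

```python
import numpy as np

def src_classify(x, class_dicts):
    """Classify a feature vector by per-class reconstruction residual.

    Simplified sketch of the fast SRC idea: rather than coding over one
    large dictionary, the sample is reconstructed with each class's
    small dictionary separately (here via least squares, not a true
    sparse solver) and assigned to the class with the smallest
    reconstruction residual."""
    best, best_r = None, np.inf
    for label, D in class_dicts.items():      # D: (dim, n_atoms)
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        r = np.linalg.norm(x - D @ coef)
        if r < best_r:
            best, best_r = label, r
    return best

# Toy dictionaries: "vehicle" atoms span dims 0-2, "background" dims 3-5.
D_vehicle = np.vstack([np.eye(3), np.zeros((3, 3))])
D_background = np.vstack([np.zeros((3, 3)), np.eye(3)])
dicts = {"vehicle": D_vehicle, "background": D_background}
x = np.array([1.0, 2.0, 0.5, 0.01, 0.0, 0.02])   # energy mostly in dims 0-2
print(src_classify(x, dicts))  # vehicle
```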


Neurocomputing | 2016

Local quality assessment of point clouds for indoor mobile mapping

Fangfang Huang; Chenglu Wen; Huan Luo; Ming Cheng; Cheng Wang; Jonathan Li

The quality of point clouds obtained by RGB-D camera-based indoor mobile mapping can be limited by local degradation caused by sensor characteristics, partial occlusion, cluttered backgrounds, and complex illumination conditions. This paper presents a machine learning framework to assess the local quality of indoor mobile mapping point cloud data. In our proposed framework, a point cloud dataset with multiple kinds of quality problems is first created by manual annotation and degradation simulation. Then, feature extraction methods based on 3-D patches are treated as operating units to conduct quality assessment in local regions. Also, a feature selection algorithm is deployed to obtain the essential components of feature sets that effectively represent local degradation. Finally, a semi-supervised method is introduced to classify quality types of point clouds. Comparative experiments demonstrate that the proposed framework obtains promising quality assessment results with limited labeled data and a large amount of unlabeled data. Highlights: a point cloud dataset with multiple kinds of quality problems is created; the main causes of point cloud data degradation in indoor mobile mapping are analyzed; and a novel semi-supervised framework is proposed for quality assessment of indoor mobile mapping point cloud data.
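The semi-supervised stage can be sketched with a generic self-training loop (not the paper's exact method); the two quality classes and the 2-D patch features below are invented for illustration:

```python
import numpy as np

def self_train_labels(X_lab, y_lab, X_unlab, rounds=3):
    """Semi-supervised labeling of patch quality types via self-training.

    A nearest-centroid classifier is fit on the few labeled 3-D patch
    features, the unlabeled patches receive pseudo-labels, and the
    class centroids are refit on everything; a few rounds let the
    abundant unlabeled data sharpen the class boundaries."""
    classes = np.unique(y_lab)
    cents = np.array([X_lab[y_lab == c].mean(0) for c in classes])
    for _ in range(rounds):
        d = np.linalg.norm(X_unlab[:, None] - cents[None], axis=2)
        pseudo = classes[d.argmin(1)]
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
        cents = np.array([X[y == c].mean(0) for c in classes])
    d = np.linalg.norm(X_unlab[:, None] - cents[None], axis=2)
    return classes[d.argmin(1)]

# Two quality classes ("clean" = 0, "occluded" = 1), one label each,
# plus 40 unlabeled patch features:
rng = np.random.default_rng(2)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unlab = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(4, 0.3, (20, 2))])
pred = self_train_labels(X_lab, y_lab, X_unlab)
print((pred[:20] == 0).all() and (pred[20:] == 1).all())  # True
```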


IEEE Transactions on Intelligent Transportation Systems | 2017

Rapid Localization and Extraction of Street Light Poles in Mobile LiDAR Point Clouds: A Supervoxel-Based Approach

Fan Wu; Chenglu Wen; Yulan Guo; Jingjing Wang; Yongtao Yu; Cheng Wang; Jonathan Li

This paper presents a supervoxel-based approach for automated localization and extraction of street light poles in point clouds acquired by a mobile LiDAR system. The method consists of five steps: preprocessing, localization, segmentation, feature extraction, and classification. First, the raw point clouds are divided into segments along the trajectory, the ground points are removed, and the remaining points are segmented into supervoxels. Then, a robust localization method is proposed to accurately identify pole-like objects. Next, a localization-guided segmentation method is proposed to extract the pole-like objects. Subsequently, features are extracted for each candidate and classified using support vector machine and random forest classifiers. The proposed approach was evaluated on three datasets containing 1,055 street light poles and 701 million points. Experimental results show that our localization method achieved an average recall of 98.8%. A comparative study proved that our method is more robust and efficient than existing methods for the localization and extraction of street light poles.
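The ground-removal and pole-localization steps can be sketched as follows, with a coarse xy grid standing in for supervoxels and illustrative thresholds (not the paper's) encoding the tall-and-thin geometric signature of a pole:

```python
import numpy as np

def pole_candidates(points, ground_z=0.2, cell=0.5, min_height=3.0, max_radius=0.4):
    """Locate pole-like objects in a point cloud (simplified pipeline).

    Coarse mirror of the paper's steps: drop ground points, group the
    rest on an xy grid (instead of supervoxels), then keep clusters
    that are tall and thin."""
    pts = points[points[:, 2] > ground_z]            # 1. ground removal
    keys = np.round(pts[:, :2] / cell).astype(int)   # 2. coarse xy grouping
    clusters = {}
    for key, p in zip(map(tuple, keys), pts):
        clusters.setdefault(key, []).append(p)
    centers = []
    for members in clusters.values():                # 3. tall-and-thin test
        arr = np.array(members)
        height = arr[:, 2].max() - arr[:, 2].min()
        radius = np.linalg.norm(arr[:, :2] - arr[:, :2].mean(0), axis=1).max()
        if height >= min_height and radius <= max_radius:
            centers.append(arr[:, :2].mean(0))
    return centers

# A 5 m vertical pole at (3, 3) plus a flat ground patch:
pole = np.array([[3.0, 3.0, z] for z in np.linspace(0.0, 5.0, 50)])
ground = np.array([[x, y, 0.05] for x in range(10) for y in range(10)], dtype=float)
cloud = np.vstack([ground, pole])
print(pole_candidates(cloud))  # one candidate, near (3, 3)
```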


IEEE Transactions on Intelligent Transportation Systems | 2016

Bag of Contextual-Visual Words for Road Scene Object Detection From Mobile Laser Scanning Data

Yongtao Yu; Jonathan Li; Haiyan Guan; Chunxiang Wang; Chenglu Wen

This paper proposes a novel algorithm for detecting road scene objects (e.g., light poles, traffic signposts, and cars) from 3-D mobile-laser-scanning point cloud data for transportation-related applications. To describe local abstract features of point cloud objects, a contextual visual vocabulary is generated by integrating spatial contextual information of feature regions. Objects of interest are detected based on the similarity measures of the bag of contextual-visual words between the query object and the segmented semantic objects. Quantitative evaluations on two selected data sets show that the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.949, 0.970, 0.922, and 0.959, respectively, in detecting light poles, traffic signposts, and cars. Comparative studies demonstrate the superior performance of the proposed algorithm over other existing methods.
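The similarity measure over bags of words can be sketched with plain (non-contextual) visual-word histograms and cosine similarity; the vocabulary size and word assignments below are invented for illustration:

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Bag-of-words descriptor: L2-normalized histogram of the
    visual-word ids assigned to an object's feature regions."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n else h

def cosine_sim(a, b):
    """Cosine similarity between two normalized word histograms."""
    return float(a @ b)

# Detection-by-similarity sketch: a segmented object is matched against
# a query object by comparing their word histograms.
query = bow_histogram(np.array([0, 0, 3, 5, 5, 5]), 8)       # query light pole
candidate_pole = bow_histogram(np.array([0, 3, 5, 5]), 8)
candidate_car = bow_histogram(np.array([1, 2, 2, 6, 7]), 8)
print(cosine_sim(query, candidate_pole) > cosine_sim(query, candidate_car))  # True
```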

Collaboration


Dive into Chenglu Wen's collaborations.

Top Co-Authors

Hanyun Wang

National University of Defense Technology

Haiyan Guan

Nanjing University of Information Science and Technology
