
Publication


Featured research published by Tongtong Chen.


Journal of Intelligent and Robotic Systems | 2014

Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles

Tongtong Chen; Bin Dai; Ruili Wang; Daxue Liu

Ground segmentation is a key component for Autonomous Land Vehicle (ALV) navigation in an outdoor environment. This paper presents a novel algorithm for real-time segmentation of three-dimensional scans of various terrains. An individual terrain scan is represented as a circular polar grid map that is divided into a number of segments. A one-dimensional Gaussian Process (GP) regression with a non-stationary covariance function is used to distinguish ground points from obstacles in each segment. The proposed approach splits a large-scale ground segmentation problem into many simple GP regression problems of lower complexity, and can thus achieve real-time performance while yielding acceptable ground segmentation results. To verify the effectiveness of our approach, experiments have been carried out both on a public dataset and on data collected by our own ALV in different outdoor scenes. Our approach has been compared with two previous ground segmentation techniques; the results show that it achieves a better trade-off between computational time and accuracy, enabling subsequent object classification and local path planning in real time. Our approach has been successfully applied to our ALV, which won the championship in the 2011 Chinese Future Challenge in the city of Ordos.
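
The real-time claim rests on the decomposition: each angular segment of the polar grid reduces the problem to a one-dimensional regression of height against radial distance. Below is a minimal sketch of that decomposition, using scikit-learn's GaussianProcessRegressor with a stationary RBF kernel as a stand-in for the paper's non-stationary covariance; the segment count, seed-selection rule, and 0.3 m height threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: segment-wise 1D GP ground model for one LIDAR scan.
# Assumptions (not from the paper): 180 angular segments, the lowest point
# in every 2 m radial bin as a ground seed, 0.3 m residual threshold,
# and a stationary RBF kernel instead of the non-stationary covariance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def segment_ground(points, n_segments=180, height_thresh=0.3):
    """points: (N, 3) x, y, z in the sensor frame; returns a ground mask."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                                   # radial distance
    seg = ((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_segments).astype(int) % n_segments
    ground = np.zeros(len(points), dtype=bool)
    for s in range(n_segments):
        idx = np.where(seg == s)[0]
        if len(idx) < 5:
            continue
        # crude ground seeds: the lowest point of each 2 m radial bin
        bins = (r[idx] // 2.0).astype(int)
        seeds = np.array([idx[bins == b][np.argmin(z[idx[bins == b]])]
                          for b in np.unique(bins)])
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1e-2))
        gp.fit(r[seeds].reshape(-1, 1), z[seeds])        # 1D regression: z = f(r)
        z_pred = gp.predict(r[idx].reshape(-1, 1))
        ground[idx] = np.abs(z[idx] - z_pred) < height_thresh
    return ground
```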


International Conference on Image and Graphics | 2011

LIDAR-based Long Range Road Intersection Detection

Tongtong Chen; Bin Dai; Daxue Liu; Zhao Liu

Long-range road intersection detection is crucial for localization and local path planning of an autonomous vehicle in urban environments. In this paper, a new long-range road intersection detection approach for an autonomous vehicle equipped with a 3D LIDAR is presented. The approach first analyzes the admissible space in front of the autonomous vehicle, and a virtual 3D LIDAR is then placed in the admissible space 20 meters ahead of the vehicle. Finally, the beam model of range finders and an improved toe-finding algorithm for the virtual 3D LIDAR are used to find the road intersection. Experiments are carried out with the autonomous vehicle on campus, and the results show the promising performance of the presented method.
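
Reading the abstract, the toe-finding step amounts to locating the bearings at which the virtual scan's range profile jumps from nearby clutter to long open corridors, which mark the branches of an intersection. The sketch below illustrates that reading only; the open-corridor threshold and minimum branch width are assumed values, and the beam-model weighting used in the paper is omitted.

```python
# Sketch: find candidate intersection branches from a virtual 360-degree
# range profile centred on a point ~20 m ahead of the vehicle.
# Assumptions (illustrative only): ranges already extracted from the point
# cloud, 15 m open-corridor threshold, branches at least 10 degrees wide.
import numpy as np

def find_branches(ranges_m, open_thresh=15.0, min_width_deg=10.0):
    """ranges_m: (360,) virtual scan, one range per degree of bearing.
    Returns (start_deg, end_deg) intervals that look like open road branches."""
    open_mask = ranges_m > open_thresh
    branches, start = [], None
    for deg, is_open in enumerate(open_mask):
        if is_open and start is None:
            start = deg
        elif not is_open and start is not None:
            if deg - start >= min_width_deg:
                branches.append((start, deg))
            start = None
    if start is not None and 360 - start >= min_width_deg:
        branches.append((start, 360))
    return branches

# Example: a synthetic scan with two open corridors
scan = np.full(360, 8.0)
scan[80:110] = 40.0   # branch to the left
scan[250:290] = 35.0  # branch to the right
print(find_branches(scan))   # [(80, 110), (250, 290)]
```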


IEEE Intelligent Vehicles Symposium | 2014

Performance of global descriptors for Velodyne-based urban object recognition

Tongtong Chen; Bin Dai; Daxue Liu; Jinze Song

Object recognition is an essential component for Autonomous Land Vehicle (ALV) navigation in urban environments. This paper presents a thorough evaluation of the performance of several state-of-the-art global descriptors on the public Sydney Urban Objects Dataset, which was collected in the Central Business District of Sydney. These descriptors are the Bounding Box descriptor, the Histogram of Local Point Level descriptor, the Hierarchy descriptor, and the Spin Image (SI). We also propose a novel Global Fourier Histogram (GFH) descriptor. Experimental results on the public dataset show that the GFH descriptor is one of the best global descriptors for object recognition in urban environments, and results on data collected by our own ALV in urban environments also demonstrate its usefulness.
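
The GFH construction is not detailed in the abstract, but a common way to make a global descriptor insensitive to an object's yaw is to histogram its points over azimuth and keep only the Fourier magnitudes of that histogram, since a rotation about the vertical axis only shifts the histogram circularly. The sketch below illustrates that general idea; it is an assumption about the spirit of the descriptor, not the published GFH definition, and all bin counts are arbitrary.

```python
# Sketch: a yaw-insensitive global descriptor from an azimuth/height
# histogram of an object's points, keeping FFT magnitudes over azimuth.
# Illustration of the general idea only, not the paper's GFH definition.
import numpy as np

def fourier_histogram_descriptor(points, n_azimuth=32, n_height=8):
    """points: (N, 3) points of a single segmented object."""
    centred = points - points.mean(axis=0)
    az = np.arctan2(centred[:, 1], centred[:, 0])            # [-pi, pi]
    az_bin = ((az + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    z = centred[:, 2]
    z_bin = np.clip(((z - z.min()) / (np.ptp(z) + 1e-9) * n_height).astype(int),
                    0, n_height - 1)
    hist = np.zeros((n_height, n_azimuth))
    np.add.at(hist, (z_bin, az_bin), 1.0)
    # A yaw rotation circularly shifts each row, so the FFT magnitude
    # along the azimuth axis is unchanged by it.
    spectrum = np.abs(np.fft.rfft(hist, axis=1))
    return (spectrum / (np.linalg.norm(spectrum) + 1e-9)).ravel()
```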


IEEE Intelligent Vehicles Symposium | 2015

Velodyne-based curb detection up to 50 meters away

Tongtong Chen; Bin Dai; Daxue Liu; Jinze Song; Zhao Liu

Long-range curb detection is crucial for Autonomous Land Vehicle (ALV) navigation in urban environments. This paper presents a novel curb detection algorithm that can detect curbs up to 50 meters away with a Velodyne LIDAR. Instead of building a Digital Elevation Map (DEM) and using geometric features (such as the normal direction) to extract candidate curb points, we take each scan line of the Velodyne LIDAR directly as a processing unit. Feature points extracted from individual scan lines are selected as initial curb points by a distance criterion and the Hough Transform (HT). Finally, iterative Gaussian Process Regression (GPR), which uses these initial curb points as seeds, is exploited to represent both curved and straight curb models. To verify the effectiveness of our algorithm quantitatively, 2934 Velodyne scans were collected in various urban scenes with our ALV, and 566 of them were labelled manually. Our algorithm is also compared with two other curb detection techniques. The experimental results on this dataset show promising performance.
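
The iterative GPR stage can be pictured as growing the curb model outwards from the near-range seeds: points that fall within a confidence band of the current fit are added as training data and the fit is repeated. The sketch below shows that loop for a curb modelled as lateral offset y = f(x), assuming the distance-criterion/Hough seeding has already produced the seed points; the band width and iteration count are illustrative.

```python
# Sketch: iterative GP regression of a curb modelled as lateral offset
# y = f(x). The paper's distance-criterion / Hough seeding is assumed to
# have produced `seeds_xy`; band width and iteration count are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def grow_curb(candidates_xy, seeds_xy, band=0.3, n_iter=5):
    """candidates_xy: (N, 2) candidate curb points; seeds_xy: (M, 2) seeds."""
    train = seeds_xy
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(1e-2))
    for _ in range(n_iter):
        gp.fit(train[:, :1], train[:, 1])                  # fit y = f(x)
        mean, std = gp.predict(candidates_xy[:, :1], return_std=True)
        inliers = np.abs(candidates_xy[:, 1] - mean) < band + 2 * std
        if not inliers.any():
            break
        train = candidates_xy[inliers]                     # grow the training set
    return gp, train                                       # curb model and its points
```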


International Congress on Image and Signal Processing | 2013

Vehicle detection and tracking with 2D laser range finders

Zhao Liu; Daxue Liu; Tongtong Chen

Dynamic environment perception is an important component of unmanned ground vehicle navigation. In this paper, we focus on vehicle detection and tracking with two low-cost 2D laser range finders. First, a foreground segmentation method is proposed based on a combination of a time cue and a motion cue, and a modified laser beam model is introduced for the tilted 2D laser range finder in the foreground segmentation. Then, vehicle detection and preliminary tracking are realized with a modified measurement model. Finally, a new precise tracking method is proposed to refine the preliminary tracking results. Qualitative and quantitative experiments are carried out in a real dynamic environment, and the results show that the proposed method can detect and track vehicles accurately.
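
As a rough illustration of what a time cue can mean for a 2D range scanner, the sketch below flags beams whose current range is clearly shorter than the farthest range recently seen on that beam; the background model and margin are assumptions, and the motion cue, beam model, and tracking stages of the paper are not represented.

```python
# Sketch of one possible "time cue" for a 2D range scanner: beams whose
# current range is clearly shorter than the recent per-beam background
# range are flagged as foreground. Background model and margin are assumed.
import numpy as np

def foreground_mask(scan, history, margin=0.5):
    """scan: (B,) current ranges; history: (T, B) recent scans of the same beams."""
    background = history.max(axis=0)          # farthest range seen per beam
    return scan < background - margin         # True where something moved in
```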


International Conference on Intelligent Transportation Systems | 2015

Likelihood-Field-Model-Based Vehicle Pose Estimation with Velodyne

Tongtong Chen; Bin Dai; Daxue Liu; Hao Fu; Jinze Song; Chongyang Wei

Dynamic vehicle tracking is an important module for Autonomous Land Vehicle (ALV) navigation in outdoor environments. The key step for a successful tracker is to accurately estimate the pose of the vehicle. In this paper, we present a novel real-time vehicle pose estimation algorithm based on a likelihood field model built from Velodyne LIDAR data. The likelihood field model is adopted to weight the particles, which represent potential poses drawn around the location of the target vehicle. Importance sampling, sped up with the Scaling Series algorithm, is then exploited to choose the best particle as the final vehicle pose. The performance of the algorithm is validated on data collected by our own ALV in various urban environments.
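
A likelihood-field weight for a pose hypothesis can be computed by scoring every observed point against its distance to the outline of a vehicle rectangle placed at that pose. The sketch below shows that weighting step only; the rectangle dimensions, outline sampling, and sigma are illustrative assumptions, and the Scaling-Series speed-up of the importance sampling is not reproduced.

```python
# Sketch: likelihood-field weighting of a vehicle pose hypothesis.
# Rectangle size, outline sampling, and sigma are illustrative assumptions.
import numpy as np

def rectangle_outline(length=4.5, width=1.8, step=0.1):
    xs = np.arange(-length / 2, length / 2, step)
    ys = np.arange(-width / 2, width / 2, step)
    top    = np.stack([xs, np.full_like(xs,  width / 2)], axis=1)
    bottom = np.stack([xs, np.full_like(xs, -width / 2)], axis=1)
    left   = np.stack([np.full_like(ys, -length / 2), ys], axis=1)
    right  = np.stack([np.full_like(ys,  length / 2), ys], axis=1)
    return np.concatenate([top, bottom, left, right])      # vehicle-frame outline

def pose_weight(points_xy, pose, outline, sigma=0.2):
    """points_xy: (N, 2) points on the target; pose: (x, y, yaw) hypothesis."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    world = outline @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    # distance from every observed point to the nearest outline sample
    d = np.linalg.norm(points_xy[:, None, :] - world[None, :, :], axis=2).min(axis=1)
    return np.exp(np.sum(-0.5 * (d / sigma) ** 2))          # product of Gaussians
```

Importance sampling would then keep the highest-weight hypotheses as the pose estimate.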


International Journal of Advanced Robotic Systems | 2017

Gaussian process regression-based robust free space detection for autonomous vehicle by 3-D point cloud and 2-D appearance information fusion

Zhipeng Xiao; Bin Dai; Hongdong Li; Tao Wu; Xin Xu; Yujun Zeng; Tongtong Chen

Free space detection is crucial to autonomous vehicles, yet existing approaches are not entirely satisfactory. Since cameras have many advantages for environment perception, a stereo-vision-based robust free space detection method is proposed which mainly depends on geometry information and Gaussian process regression. To improve performance by exploiting multiple sources of information, we apply a Bayesian framework and conditional random field inference to fuse multimodal information, including 2-D image and 3-D point geometric information. In particular, the Bayesian framework is used for multiple-feature fusion to provide a normalized and flexible output; Gaussian process regression is used to automatically and incrementally regress the data, resulting in enhanced performance; and a conditional random field with color and geometry constraints is finally applied to make the result more robust. To evaluate the proposed method, quantitative experiments on the popular KITTI-road dataset and qualitative experiments on our own campus dataset are carried out. The results show satisfactory performance compared to leading methods and are even competitive with some relevant LIDAR-based methods.
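
For the fusion stage, a per-pixel Bayesian combination of independent free-space cues is straightforward to write down: multiply the likelihoods of each cue under "free" and "not free" and normalize. The sketch below shows such a naive-Bayes fusion as one plausible reading of the normalized output; the cue set and prior are illustrative, and the GPR and CRF stages are not shown.

```python
# Sketch: per-pixel naive-Bayes fusion of several free-space cues, each
# given as P(cue | free) and P(cue | not free). Cue set and prior are
# illustrative; the GPR and CRF stages of the paper are not shown.
import numpy as np

def fuse_free_space(likelihoods_free, likelihoods_occ, prior_free=0.5):
    """Each argument is a list of (H, W) per-pixel likelihood maps.
    Returns the (H, W) posterior probability that a pixel is free space."""
    log_free = np.log(prior_free) + sum(np.log(l + 1e-9) for l in likelihoods_free)
    log_occ = np.log(1.0 - prior_free) + sum(np.log(l + 1e-9) for l in likelihoods_occ)
    return 1.0 / (1.0 + np.exp(log_occ - log_free))         # normalized posterior
```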


International Conference on Intelligent Human-Machine Systems and Cybernetics | 2015

Likelihood-Field-Model-Based Dynamic Vehicle Detection with Velodyne

Tongtong Chen; Bin Dai; Daxue Liu; Hao Fu; Jinze Song

Dynamic vehicle detection is an important module for Autonomous Land Vehicle (ALV) navigation in outdoor environments. In this paper, we present a novel dynamic vehicle detection algorithm based on a likelihood field model for an ALV equipped with a Velodyne LIDAR. An improved 2D virtual scan is used to detect dynamic objects with a scan differencing operation. For every dynamic object, a vehicle is fitted with the likelihood field model, and the motion evidence and motion consistency of the fitted vehicle are exploited to classify the dynamic object as a vehicle or not. The performance of the algorithm is validated on data collected by our ALV in various environments.
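
The scan differencing operation can be illustrated with a 2D virtual scan that stores the nearest obstacle range per bearing; bearings whose range changes noticeably between two (ego-motion-aligned) frames indicate dynamic objects. In the sketch below the bearing resolution and change threshold are assumptions, and the likelihood-field fitting and motion-consistency checks are not included.

```python
# Sketch: 2D virtual scan (nearest obstacle range per bearing) and a simple
# differencing step. Bearing resolution and change threshold are assumed;
# both scans are taken to be expressed in a common, ego-motion-aligned frame.
import numpy as np

def virtual_scan(obstacle_xy, n_bearings=360, max_range=60.0):
    """obstacle_xy: (N, 2) obstacle points around the vehicle."""
    bearing = ((np.arctan2(obstacle_xy[:, 1], obstacle_xy[:, 0]) + np.pi)
               / (2 * np.pi) * n_bearings).astype(int) % n_bearings
    rng = np.hypot(obstacle_xy[:, 0], obstacle_xy[:, 1])
    scan = np.full(n_bearings, max_range)
    np.minimum.at(scan, bearing, rng)          # keep the nearest return per bearing
    return scan

def dynamic_bearings(scan_prev, scan_now, thresh=0.8):
    """Bearings where the nearest range changed by more than `thresh` meters."""
    return np.where(np.abs(scan_now - scan_prev) > thresh)[0]
```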


International Conference on Image and Graphics | 2013

Curb Detection Using 2D Range Data in a Campus Environment

Zhao Liu; Daxue Liu; Tongtong Chen; Chongyang Wei

Curb detection is an important research topic for unmanned ground vehicle (UGV) navigation. In this paper, a new curb detection method using a 2D laser range finder is proposed for a campus environment. First, a local Digital Elevation Map (DEM) is built from sequential 2D laser range finder data and vehicle state information. Then, curb candidate points are extracted in the local DEM, taking into account the moving direction of the vehicle. Finally, 1D Gaussian process regression is used for curb detection, with the initial training curb data obtained online from the extracted straight curbs. The proposed method has been verified in different scenes on a real vehicle platform, and it can robustly detect both straight and curved curbs in a campus environment.
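
Building the local DEM amounts to transforming each 2D scan into the map frame with the vehicle pose and accumulating a height statistic per grid cell. The sketch below keeps the maximum height per cell as one simple choice; the cell size, map extent, and dictionary grid are illustrative, and the candidate extraction and 1D GPR stages are not shown.

```python
# Sketch: accumulate a local DEM from one 2D scan transformed by the vehicle
# pose, keeping the maximum height per cell. Cell size, map extent, and the
# dictionary grid are illustrative choices.
import numpy as np

def update_dem(dem, scan_xyz_sensor, pose, cell=0.2, extent=40.0):
    """dem: dict mapping (i, j) cell index -> maximum height seen so far.
    scan_xyz_sensor: (N, 3) points of one scan, in the sensor frame.
    pose: (x, y, yaw) of the sensor in the map frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    world = scan_xyz_sensor[:, :2] @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    for (wx, wy), wz in zip(world, scan_xyz_sensor[:, 2]):
        if abs(wx - x) < extent and abs(wy - y) < extent:   # keep a local window
            key = (int(wx // cell), int(wy // cell))
            dem[key] = max(dem.get(key, wz), wz)
    return dem
```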


IEEE Transactions on Intelligent Transportation Systems | 2016

Likelihood-Field-Model-Based Dynamic Vehicle Detection and Tracking for Self-Driving

Tongtong Chen; Ruili Wang; Bin Dai; Daxue Liu; Jinze Song

Dynamic vehicle detection and tracking is crucial for self-driving in urban environments. The main problem with previous beam-model-based algorithms is that they cannot detect and track dynamic vehicles that are occluded by other objects. In this paper, we develop a novel dynamic vehicle detection and tracking algorithm to solve this problem for our autonomous land vehicle (ALV), which is equipped with a Velodyne LIDAR and a GPS-aided inertial navigation system. For detection, an improved two-dimensional virtual scan is presented to detect potential dynamic vehicles with a scan differencing operation. Then, for each potential dynamic vehicle, a novel likelihood-field-based vehicle measurement model is proposed to weight its possible poses. Finally, our newly modified scaling series algorithm and the importance sampling technique are adopted to estimate the initial pose and the corresponding velocity of each vehicle, respectively. The scaling series algorithm coupled with a Bayesian filter (SSBF) was previously used to handle the tactile localization problem in static background scenes; for tracking dynamic vehicles, we improve the SSBF by adding ego-motion compensation so that the improved algorithm can update the pose and velocity of each vehicle in dynamic background scenes. Both quantitative and qualitative experimental results validate the performance of our dynamic vehicle detection and tracking algorithm on the KITTI datasets and on the Velodyne data collected by our ALV in dynamic urban environments.
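
The ego-motion compensation added to the SSBF can be summarized as re-expressing each tracked pose in the new ego frame before the filter update, using the ego displacement between consecutive LIDAR frames. The sketch below shows only that coordinate change; the variable names are illustrative and the rest of the filter is omitted.

```python
# Sketch: re-express a tracked vehicle pose in the new ego frame given the
# ego displacement (dx, dy, dyaw) between two LIDAR frames. Names are
# illustrative; the SSBF filter update itself is not shown.
import numpy as np

def compensate_ego_motion(track_pose, ego_delta):
    """track_pose: (x, y, yaw) of the target in the previous ego frame.
    ego_delta: (dx, dy, dyaw) ego motion, expressed in the previous ego frame."""
    dx, dy, dyaw = ego_delta
    c, s = np.cos(-dyaw), np.sin(-dyaw)
    px, py = track_pose[0] - dx, track_pose[1] - dy          # shift to new origin
    return np.array([c * px - s * py, s * px + c * py, track_pose[2] - dyaw])
```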

Collaboration


Dive into Tongtong Chen's collaborations.

Top Co-Authors

Bin Dai, National University of Defense Technology

Daxue Liu, National University of Defense Technology

Jinze Song, National University of Defense Technology

Zhao Liu, National University of Defense Technology

Hao Fu, National University of Defense Technology

Tao Wu, National University of Defense Technology

Chongyang Wei, National University of Defense Technology

Zhipeng Xiao, National University of Defense Technology

Li Zhou, National University of Defense Technology