Publication


Featured research published by Daxue Liu.


Journal of Intelligent and Robotic Systems | 2014

Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles

Tongtong Chen; Bin Dai; Ruili Wang; Daxue Liu

Ground segmentation is a key component of Autonomous Land Vehicle (ALV) navigation in outdoor environments. This paper presents a novel algorithm for segmenting three-dimensional scans of various terrains in real time. An individual terrain scan is represented as a circular polar grid map that is divided into a number of segments. A one-dimensional Gaussian Process (GP) regression with a non-stationary covariance function is used to distinguish ground points from obstacles in each segment. The proposed approach splits a large-scale ground segmentation problem into many simple, low-complexity GP regression problems and can therefore achieve real-time performance while yielding acceptable ground segmentation results. To verify the effectiveness of our approach, experiments were carried out both on a public dataset and on data collected by our own ALV in different outdoor scenes. Our approach was compared with two previous ground segmentation techniques, and the results show that it achieves a better trade-off between computational time and accuracy. It can therefore support subsequent object classification and local path planning in real time. Our approach has been successfully applied to our ALV, which won the championship in the 2011 Chinese Future Challenge in the city of Ordos.
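
The core idea of per-segment 1-D GP ground modelling can be illustrated with a short, hedged sketch (not the authors' code): scikit-learn's GaussianProcessRegressor with a stationary RBF kernel stands in for the paper's non-stationary covariance, and the sector count, seed heuristic and tolerance are illustrative assumptions.

```python
# Minimal sketch of per-sector 1-D GP ground modelling (not the authors' code).
# A stationary RBF kernel is used as a simplification of the paper's
# non-stationary covariance function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def segment_ground(points, n_sectors=180, seed_height=0.3, gp_tol=0.2):
    """points: (N, 3) array of x, y, z in the vehicle frame.
    Returns a boolean mask, True for ground points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                        # radial distance
    theta = np.arctan2(y, x)                  # polar angle
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    ground = np.zeros(len(points), dtype=bool)
    kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx) < 5:
            continue
        # crude seeds: the lowest points in the sector are assumed to be ground
        seeds = idx[z[idx] < z[idx].min() + seed_height]
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(r[seeds, None], z[seeds])      # 1-D regression: height vs. range
        mu = gp.predict(r[idx, None])
        ground[idx] = np.abs(z[idx] - mu) < gp_tol
    return ground
```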


IEEE Intelligent Vehicles Symposium | 2015

CRF based road detection with multi-sensor fusion

Liang Xiao; Bin Dai; Daxue Liu; Tingbo Hu; Tao Wu

In this paper, we propose to fuse LIDAR and monocular image data within a conditional random field framework to detect the road robustly in challenging scenarios. LIDAR points are aligned with image pixels by cross-calibration. Boosted decision tree classifiers are then trained for the image and the point cloud, respectively. The scores of the two classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments on the KITTI-Road benchmark show that our method achieves state-of-the-art performance.
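
A rough sketch of the fusion step follows, under stated assumptions: the classifiers are any pre-trained models exposing `predict_proba` (e.g. boosted decision trees), the smoothness term is a plain Potts model, and the graph cut is delegated to the PyMaxflow package. The weights and the projection convention are illustrative, not taken from the paper.

```python
import numpy as np
import maxflow   # PyMaxflow

def fuse_and_segment(img_feats, lidar_feats, lidar_pix, img_clf, lidar_clf,
                     pairwise_w=2.0, lidar_w=1.5):
    """img_feats: (H, W, D) per-pixel features; lidar_feats: (M, F) point
    features; lidar_pix: (M, 2) row/col each LIDAR point projects to after
    cross-calibration; img_clf/lidar_clf: fitted classifiers with predict_proba."""
    h, w, d = img_feats.shape
    # unary potentials = negative log-probability of each label
    p_img = img_clf.predict_proba(img_feats.reshape(-1, d))[:, 1].reshape(h, w)
    cost_road = -np.log(p_img + 1e-6)
    cost_bg = -np.log(1.0 - p_img + 1e-6)

    # fold the LIDAR evidence into the pixels the points project onto
    p_lid = lidar_clf.predict_proba(lidar_feats)[:, 1]
    rows, cols = lidar_pix[:, 0], lidar_pix[:, 1]
    cost_road[rows, cols] -= lidar_w * np.log(p_lid + 1e-6)
    cost_bg[rows, cols] -= lidar_w * np.log(1.0 - p_lid + 1e-6)

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))
    g.add_grid_edges(nodes, pairwise_w)           # Potts smoothness term
    g.add_grid_tedges(nodes, cost_road, cost_bg)  # unary terms
    g.maxflow()
    return g.get_grid_segments(nodes)             # True where labelled road
```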


IEEE Transactions on Control Systems and Technology | 2014

Self-Learning Cruise Control Using Kernel-Based Least Squares Policy Iteration

Jian Wang; Xin Xu; Daxue Liu; Zhenping Sun; Qingyang Chen

This paper presents a novel learning-based cruise controller for autonomous land vehicles (ALVs) with unknown dynamics and external disturbances. The learning controller consists of a time-varying proportional-integral (PI) module and an actor-critic learning control module with kernel machines. The learning objective for cruise control is to make the vehicle's longitudinal velocity follow a smoothed spline-based speed profile with the smallest possible errors. The parameters in the PI module are adaptively tuned based on the vehicle's state and the action policy of the learning control module. Based on the state-transition data of the vehicle controlled by various initial policies, the action policy of the learning control module is optimized offline by kernel-based least squares policy iteration (KLSPI). The effectiveness of the proposed controller was tested on an ALV platform during long-distance driving in urban traffic and autonomous driving on off-road terrain. The experimental results of the cruise control show that the learning control method can realize data-driven controller design and optimization based on KLSPI and that the controller's performance adapts to different road conditions.
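
To make the offline policy-optimization step concrete, here is a toy sketch of kernel-based least-squares policy iteration over a fixed RBF dictionary and a discrete action set. It is a simplification under assumed names and hyperparameters, not the authors' implementation, and it omits the time-varying PI module entirely.

```python
# Toy KLSPI sketch: LSTD-Q policy evaluation + greedy improvement.
import numpy as np

def rbf_features(s, a, centers, sigma=1.0):
    """Kernel features of a state-action pair against a fixed dictionary."""
    sa = np.append(s, a)
    return np.exp(-np.sum((centers - sa) ** 2, axis=1) / (2 * sigma ** 2))

def klspi(transitions, centers, actions, gamma=0.95, iters=20):
    """transitions: list of (s, a, r, s_next) collected under initial policies."""
    k = len(centers)
    w = np.zeros(k)
    for _ in range(iters):
        A = np.eye(k) * 1e-3          # small ridge term for numerical stability
        b = np.zeros(k)
        for s, a, r, s_next in transitions:
            phi = rbf_features(s, a, centers)
            # greedy action of the current policy at the successor state
            a_next = max(actions, key=lambda u: rbf_features(s_next, u, centers) @ w)
            phi_next = rbf_features(s_next, a_next, centers)
            A += np.outer(phi, phi - gamma * phi_next)
            b += phi * r
        w = np.linalg.solve(A, b)
    # resulting policy: pick the action with the largest estimated Q-value
    return lambda s: max(actions, key=lambda u: rbf_features(s, u, centers) @ w)
```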


International Conference on Image and Graphics | 2011

LIDAR-based Long Range Road Intersection Detection

Tongtong Chen; Bin Dai; Daxue Liu; Zhao Liu

Long-range road intersection detection is crucial for the localization and local path planning of an autonomous vehicle in urban environments. In this paper, a new long-range road intersection detection approach for an autonomous vehicle equipped with a 3D LIDAR is presented. The approach first analyzes the admissible space in front of the autonomous vehicle, and a virtual 3D LIDAR is then placed in the admissible space 20 meters ahead of the vehicle. Finally, the beam model of range finders and an improved toe-finding algorithm are applied to the virtual 3D LIDAR to find the road intersection. Experiments were carried out with the autonomous vehicle on campus, and the results show the promising performance of the presented method.
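
The beam-model idea can be sketched on a 2-D occupancy grid: cast virtual beams from the forward sensor position, measure how far each travels, and treat contiguous runs of long beams as open road branches. The grid resolution, thresholds, and the simple run-finding stand-in for the paper's toe-finding algorithm are all illustrative assumptions.

```python
import numpy as np

def beam_lengths(occ, origin, n_beams=360, max_range=60.0, res=0.2):
    """occ: 2-D boolean occupancy grid (True = obstacle); origin: (row, col)
    of the virtual sensor placed ~20 m ahead in the admissible space."""
    lengths = np.zeros(n_beams)
    angles = np.linspace(0, 2 * np.pi, n_beams, endpoint=False)
    for i, ang in enumerate(angles):
        r = 0.0
        while r < max_range:
            row = int(origin[0] + (r / res) * np.sin(ang))
            col = int(origin[1] + (r / res) * np.cos(ang))
            if not (0 <= row < occ.shape[0] and 0 <= col < occ.shape[1]) or occ[row, col]:
                break
            r += res
        lengths[i] = r
    return angles, lengths

def find_branches(angles, lengths, open_thresh=15.0):
    """Simple stand-in for toe-finding: contiguous angular runs of long beams
    are treated as open road branches radiating from the intersection."""
    open_mask = lengths > open_thresh
    branches, start = [], None
    for i, is_open in enumerate(np.append(open_mask, False)):
        if is_open and start is None:
            start = i
        elif not is_open and start is not None:
            branches.append(angles[(start + i - 1) // 2])  # central direction
            start = None
    return branches
```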


International Journal of Advanced Robotic Systems | 2016

Monocular Road Detection Using Structured Random Forest

Liang Xiao; Bin Dai; Daxue Liu; Dawei Zhao; Tao Wu

Road detection is a key task for autonomous land vehicles. Monocular vision-based road-detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, pixel-wise classifiers are faced with ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce this ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction models in road detection are the Markov random field and the conditional random field. However, random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to obtain more consistent results. In addition, by predicting a batch of pixels in a single classification, structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results on the KITTI-ROAD dataset and on data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest in both accuracy and efficiency.
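
One common way to realise the "structured labels to discrete labels" mapping is to cluster the label patches, train an ordinary random forest on the cluster ids, and paste each cluster's representative label patch back at test time. The sketch below follows that pattern with scikit-learn; the patch size, k-means clustering and averaging scheme are illustrative assumptions and the patches are assumed to lie fully inside the image.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def train_structured_rf(patch_feats, label_patches, n_clusters=32):
    """patch_feats: (N, D) features of image patches;
    label_patches: (N, P, P) binary road masks of the same patches."""
    flat = label_patches.reshape(len(label_patches), -1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
    # representative (mean) label patch of each cluster
    protos = np.array([flat[km.labels_ == c].mean(axis=0)
                       for c in range(n_clusters)])
    rf = RandomForestClassifier(n_estimators=100).fit(patch_feats, km.labels_)
    return rf, protos.reshape(n_clusters, *label_patches.shape[1:])

def predict_road(rf, protos, patch_feats, patch_tl, img_shape):
    """patch_tl: (N, 2) top-left pixel of each test patch; predictions of
    overlapping patches are averaged, then thresholded."""
    votes = np.zeros(img_shape)
    counts = np.zeros(img_shape) + 1e-6
    p = protos.shape[1]
    for feats, (r, c) in zip(patch_feats, patch_tl):
        patch = protos[rf.predict(feats[None])[0]]
        votes[r:r + p, c:c + p] += patch
        counts[r:r + p, c:c + p] += 1
    return (votes / counts) > 0.5
```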


IEEE Intelligent Vehicles Symposium | 2014

Performance of global descriptors for Velodyne-based urban object recognition

Tongtong Chen; Bin Dai; Daxue Liu; Jinze Song

Object recognition is an essential component of Autonomous Land Vehicle (ALV) navigation in urban environments. This paper presents a thorough evaluation of the performance of several state-of-the-art global descriptors on the public Sydney Urban Objects Dataset, which was collected in the Central Business District of Sydney. These descriptors are the Bounding Box descriptor, the Histogram of Local Point Level descriptor, the Hierarchy descriptor, and the Spin Image (SI). We also propose a novel Global Fourier Histogram (GFH) descriptor. Experimental results on the public dataset show that the GFH descriptor is one of the best global descriptors for object recognition in urban environments, and results on data collected by our own ALV in urban environments also demonstrate its usefulness.
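
For readers unfamiliar with global descriptors, the simplest of the evaluated baselines (a bounding-box style descriptor) can be sketched as follows; the exact feature set and the SVM settings are illustrative assumptions, and the GFH descriptor itself is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def bounding_box_descriptor(points):
    """points: (N, 3) points of one segmented object. Returns a small global
    feature vector: extents along the principal axes, height, and point count."""
    centered = points - points.mean(axis=0)
    # principal axes give a rotation-invariant box orientation in the x-y plane
    cov = np.cov(centered[:, :2].T)
    _, vecs = np.linalg.eigh(cov)
    xy = centered[:, :2] @ vecs
    ext = np.concatenate([xy.max(axis=0) - xy.min(axis=0),
                          [centered[:, 2].max() - centered[:, 2].min()]])
    return np.append(ext, len(points))

def train_object_classifier(segments, labels):
    """segments: list of (N_i, 3) object point clouds; labels: object classes."""
    feats = np.array([bounding_box_descriptor(s) for s in segments])
    return SVC(kernel="rbf", C=10.0).fit(feats, labels)
```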


Information Sciences | 2017

Hybrid conditional random field based camera-LIDAR fusion for road detection

Liang Xiao; Ruili Wang; Bin Dai; Yuqiang Fang; Daxue Liu; Tao Wu

Road detection is one of the key challenges for autonomous vehicles. Two kinds of sensors are commonly used for road detection: cameras and LIDARs. However, each of them suffers from inherent drawbacks, so sensor fusion is commonly used to combine the merits of the two. Nevertheless, current sensor fusion methods are dominated by either cameras or LIDARs rather than making the best of both. In this paper, we extend the conditional random field (CRF) model and propose a novel hybrid CRF model to fuse the information from camera and LIDAR. After aligning the LIDAR points and pixels, we take the labels (either road or background) of the pixels and LIDAR points as random variables and infer the labels via minimization of a hybrid energy function. Boosted decision tree classifiers are learned to predict the unary potentials of both the pixels and the LIDAR points. The pairwise potentials in the hybrid model encode (i) the contextual consistency in the image, (ii) the contextual consistency in the point cloud, and (iii) the cross-modal consistency between the aligned pixels and LIDAR points. This model integrates the information from the two sensors in a probabilistic way and makes good use of both. The hybrid CRF model can be optimized efficiently with graph cuts to obtain the road areas. Extensive experiments have been conducted on the KITTI-ROAD benchmark dataset, and the results show that the proposed method outperforms current methods.
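
The structure of the hybrid energy (unaries for pixels and LIDAR points plus the three pairwise consistency terms) can be written down directly; the sketch below only evaluates the energy of a candidate labelling, whereas in practice it would be minimised with graph cuts, and the weights are illustrative assumptions.

```python
import numpy as np

def hybrid_energy(pix_labels, pt_labels, pix_unary, pt_unary,
                  pix_edges, pt_edges, cross_edges,
                  w_img=1.0, w_cloud=1.0, w_cross=2.0):
    """pix_labels: (P,) 0/1 labels of pixels; pt_labels: (M,) 0/1 labels of
    LIDAR points. *_unary: (P, 2) / (M, 2) costs per label. *_edges: (E, 2)
    index pairs of neighbouring pixels, neighbouring points, and aligned
    pixel/point pairs, respectively."""
    e = pix_unary[np.arange(len(pix_labels)), pix_labels].sum()
    e += pt_unary[np.arange(len(pt_labels)), pt_labels].sum()
    # Potts penalties whenever two connected variables disagree
    e += w_img * (pix_labels[pix_edges[:, 0]] != pix_labels[pix_edges[:, 1]]).sum()
    e += w_cloud * (pt_labels[pt_edges[:, 0]] != pt_labels[pt_edges[:, 1]]).sum()
    e += w_cross * (pix_labels[cross_edges[:, 0]] != pt_labels[cross_edges[:, 1]]).sum()
    return e
```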


Journal of Field Robotics | 2013

Adaptive speed tracking control for autonomous land vehicles in all-terrain navigation: An experimental study

Jian Wang; Zhenping Sun; Xin Xu; Daxue Liu; Jinze Song; Yuqiang Fang

This paper develops a nonparametric controller with an internal model control (IMC) structure for the longitudinal speed tracking control of autonomous land vehicles by designing a proportional-internal model control cascade (P-IMC) controller. The IMC architecture is employed in the inner control loop by establishing a nonparametric longitudinal dynamical model, whereas a P controller is designed for the outer control loop. An approach for estimating the terrain effects and compensating for the model errors is also introduced. The differences from other nonparametric controllers are discussed, and the stability of the P-IMC controller is analyzed and validated experimentally. The P-IMC controller is compared with the SpAM+PI controller to illustrate its advantages. The experimental results of autonomous all-terrain driving show the effectiveness of the P-IMC controller.
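
A highly simplified discrete-time sketch of the P-IMC cascade structure is given below: an outer P loop turns the speed error into a desired acceleration, and an inner IMC loop corrects it with a disturbance estimate and inverts an internal model. Here the internal model is a first-order throttle-to-acceleration model with made-up gains, whereas the paper uses a nonparametric model with terrain compensation.

```python
class PIMCSpeedController:
    def __init__(self, kp=0.8, model_gain=2.0, model_tau=0.5,
                 filter_tau=0.3, dt=0.05):
        self.kp, self.dt = kp, dt
        self.kg, self.tau = model_gain, model_tau      # internal model parameters
        self.alpha = dt / (filter_tau + dt)            # IMC low-pass filter
        self.a_model = 0.0                             # model-predicted acceleration
        self.u_filt = 0.0

    def step(self, v_ref, v_meas, a_meas):
        # outer P loop: speed error -> desired acceleration
        a_des = self.kp * (v_ref - v_meas)
        # disturbance estimate: mismatch between plant and internal model
        d_hat = a_meas - self.a_model
        # IMC inner loop: filter the corrected target, then invert the model
        target = a_des - d_hat
        self.u_filt += self.alpha * (target - self.u_filt)
        u = self.u_filt / self.kg                      # static model inversion
        # propagate the first-order internal model: tau*da/dt + a = kg*u
        self.a_model += self.dt / self.tau * (self.kg * u - self.a_model)
        return u                                       # throttle/brake command
```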


IEEE Intelligent Vehicles Symposium | 2015

A practical trajectory planning framework for autonomous ground vehicles driving in urban environments

Xiaohui Li; Zhenping Sun; Zhen He; Qi Zhu; Daxue Liu

This paper presents a practical trajectory planning framework for fully autonomous driving in urban environments. First, based on the behavioral decision commands, a reference path is extracted from the digital map using LIDAR-based localization information. The reference path is refined and interpolated via a nonlinear optimization algorithm and a parametric algorithm, respectively. Second, the trajectory planning task is decomposed into spatial path planning and velocity profile planning. A closed-form algorithm is employed to generate a rich set of kinematically feasible spatial path candidates within the curvilinear coordinate framework, while the velocity planning algorithm takes safety and smoothness constraints into account. The trajectory candidates are evaluated by a carefully developed objective function, and the best collision-free, dynamically feasible trajectory is selected and executed by the trajectory tracking controller. We implemented the proposed trajectory planning strategy on our test autonomous vehicle in realistic urban traffic scenarios. Experimental results demonstrate its capability and efficiency in handling a variety of driving situations, such as lane keeping, lane changing, vehicle following, and static and dynamic obstacle avoidance, while respecting traffic regulations.
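
A bare-bones sketch of the candidate-generation and selection step in a curvilinear (arc length, lateral offset) frame is shown below. The cubic offset profile, cost weights and circular-obstacle collision test are illustrative assumptions, not the authors' formulas, and the velocity-profile planning is omitted.

```python
import numpy as np

def plan_path(q0, horizon=40.0, offsets=np.linspace(-3.0, 3.0, 13),
              obstacles=(), ds=0.5, w_off=1.0, w_smooth=5.0):
    """q0: current lateral offset from the reference path; obstacles: list of
    (s, q, radius) in the curvilinear frame. Returns the best (s, q) samples."""
    s = np.arange(0.0, horizon, ds)
    best, best_cost = None, np.inf
    for qf in offsets:
        # cubic lateral-offset profile with zero slope at both ends
        t = s / horizon
        q = q0 + (qf - q0) * (3 * t ** 2 - 2 * t ** 3)
        # collision check against inflated obstacles in the curvilinear frame
        collides = any(np.any(np.hypot(s - so, q - qo) < r)
                       for so, qo, r in obstacles)
        if collides:
            continue
        curvature_proxy = np.sum(np.diff(q, 2) ** 2)       # smoothness term
        cost = w_off * qf ** 2 + w_smooth * curvature_proxy
        if cost < best_cost:
            best, best_cost = np.column_stack([s, q]), cost
    return best   # None if every candidate collides
```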


IEEE Intelligent Vehicles Symposium | 2015

Velodyne-based curb detection up to 50 meters away

Tongtong Chen; Bin Dai; Daxue Liu; Jinze Song; Zhao Liu

Long-range curb detection is crucial for Autonomous Land Vehicle (ALV) navigation in urban environments. This paper presents a novel curb detection algorithm which can detect curbs up to 50 meters away with a Velodyne LIDAR. Instead of building a Digital Elevation Map (DEM) and using geometric features (such as normal direction) to extract candidate curb points, we take each scan line of the Velodyne LIDAR directly as a processing unit. Feature points extracted from the individual scan lines are selected as initial curb points using a distance criterion and the Hough Transform (HT). Finally, iterative Gaussian Process Regression (GPR), seeded with these initial curb points, is exploited to represent both curved and straight-line curb models. To verify the effectiveness of our algorithm quantitatively, 2934 Velodyne scans were collected in various urban scenes with our ALV, and 566 of them were labelled manually. Our algorithm is also compared with two other curb detection techniques, and the experimental results on this dataset show promising performance.
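
The iterative GPR refinement step alone can be sketched as follows: starting from the seed curb points, repeatedly fit a GP curb model and absorb candidate points that lie close to it. The kernel choice, tolerance and convergence test are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def iterative_gpr_curb(seeds, candidates, tol=0.3, max_iter=5):
    """seeds, candidates: (N, 2) arrays of (x_along_road, y_lateral) points.
    Returns the fitted GP and the mask of candidates accepted as curb points."""
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)
    inliers = np.zeros(len(candidates), dtype=bool)
    for _ in range(max_iter):
        pts = np.vstack([seeds, candidates[inliers]])
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(pts[:, :1], pts[:, 1])              # model lateral position vs. x
        mu, std = gp.predict(candidates[:, :1], return_std=True)
        new_inliers = np.abs(candidates[:, 1] - mu) < tol + 2 * std
        if np.array_equal(new_inliers, inliers):   # converged
            break
        inliers = new_inliers
    return gp, inliers
```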

Collaboration


Dive into Daxue Liu's collaborations.

Top Co-Authors

Bin Dai, National University of Defense Technology
Tongtong Chen, National University of Defense Technology
Zhenping Sun, National University of Defense Technology
Jinze Song, National University of Defense Technology
Qi Zhu, National University of Defense Technology
Xiaohui Li, National University of Defense Technology
Tao Wu, National University of Defense Technology
Hangen He, National University of Defense Technology
Liang Xiao, National University of Defense Technology
Zhao Liu, National University of Defense Technology