Zhaoqin Liu
Chinese Academy of Sciences
Publications
Featured research published by Zhaoqin Liu.
Journal of Earth Science | 2013
Kaichang Di; Z. Yue; Zhaoqin Liu; Shuliang Wang
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
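The large-rock extraction step described above hinges on fitting a ground plane to the 3D point cloud and flagging points that rise above it. The sketch below shows one common way to do this with a RANSAC plane fit in Python (NumPy only); the thresholds, the z-up convention, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_ground_plane_ransac(points, n_iters=200, inlier_thresh=0.02, seed=None):
    """Fit a dominant plane to an Nx3 point cloud with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_model, best_count = None, -1
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (nearly collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        count = int(np.count_nonzero(np.abs(points @ normal + d) < inlier_thresh))
        if count > best_count:
            best_model, best_count = (normal, d), count
    return best_model

def large_rock_candidate_mask(points, min_height=0.05):
    """Mark points standing higher than min_height above the fitted ground plane."""
    normal, d = fit_ground_plane_ransac(points)
    if normal[2] < 0:              # orient the normal upwards (assumes z is up)
        normal, d = -normal, -d
    height = points @ normal + d   # signed height above the fitted plane
    return height > min_height     # boolean mask of candidate rock points
```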
Photogrammetric Engineering and Remote Sensing | 2011
Kaichang Di; Zhaoqin Liu; Z. Yue
Mars rover localization is usually realized with data from odometers, inertial measurement units, and stereo cameras. Location errors inevitably accumulate during any long-range rover traverse when only these ground-based sensors are employed. This paper presents a new approach to rover localization based on feature extraction and matching between ground (rover) and orbital imagery. The new approach localizes the rover directly in orbital imagery, eliminating the accumulated localization error and thereby improving localization accuracy for long-range rover traverses. The proposed approach is tested using Navcam images acquired by the Spirit and Opportunity rovers at multiple positions, along with HiRISE orbital imagery covering the two landing sites. Results show that the new approach is effective in areas with prominent rocks or outcrops. The accuracy of the new localization approach is better than one pixel of the HiRISE image (0.25 m).
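As a rough illustration of matching ground-derived imagery against an orbital image, the sketch below uses OpenCV ORB features, brute-force matching, and a RANSAC homography; the inputs (an orthoprojected rover-camera map and a cropped orbital image) and all parameters are assumptions, not the paper's actual pipeline.

```python
import cv2
import numpy as np

def locate_rover_patch(ground_ortho, orbital_crop):
    """Estimate where a ground-derived ortho map falls in orbital pixel coordinates."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(ground_ortho, None)
    kp2, des2 = orb.detectAndCompute(orbital_crop, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC homography rejects outlier matches; for near-nadir orthoimages a
    # similarity or affine model would also be a reasonable choice.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    centre = np.float32([[[ground_ortho.shape[1] / 2.0, ground_ortho.shape[0] / 2.0]]])
    return cv2.perspectiveTransform(centre, H)[0, 0]  # map centre in orbital pixels
```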
Journal of Navigation | 2012
Chun Liu; Fagen Zhou; Yiwei Sun; Kaichang Di; Zhaoqin Liu
Visual navigation is a comparatively advanced means of positioning without a Global Positioning System (GPS). It obtains environmental information through real-time processing of data acquired by visual sensors. Compared with other methods, visual navigation is passive: it emits no light or other radiation, which makes the platform harder to detect. The novel navigation system described in this paper combines stereo matching with an Inertial Measurement Unit (IMU). The system applies photogrammetric theory and a matching algorithm to identify corresponding points in two images of the same scene taken from different views and to compute their 3D coordinates. Integrated with the orientation information output by the IMU, the system reduces accumulated model errors and improves point accuracy.
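The core computation, turning matched image points into 3D coordinates and rotating them with the IMU attitude, can be summarized as below for a rectified stereo pair under a pinhole camera model; the parameter names, frame conventions, and the simple disparity formula are illustrative, not the paper's exact formulation.

```python
import numpy as np

def stereo_points_to_3d(pts_left, pts_right, f, cx, cy, baseline):
    """Back-project matched pixel pairs from a rectified stereo pair into
    camera-frame 3D points (pinhole model, pixels and metres assumed)."""
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]  # rectified pair: same row, shifted column
    Z = f * baseline / disparity                  # depth from disparity
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])

def camera_to_navigation_frame(points_cam, R_imu, t):
    """Rotate camera-frame points into the navigation frame with the IMU attitude
    (R_imu assumed to map the camera/body frame to the navigation frame) and the
    current position estimate t."""
    return points_cam @ np.asarray(R_imu).T + np.asarray(t)
```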
Sensors | 2014
Kai Wu; Kaichang Di; Xun Sun; W. Wan; Zhaoqin Liu
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
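The scale-ambiguity fix boils down to comparing the metric laser range with the up-to-scale depth that monocular visual odometry assigns to the laser spot, then rescaling the trajectory. A minimal sketch with hypothetical function and parameter names, not the paper's actual fusion scheme:

```python
import numpy as np

def monocular_scale_from_laser(laser_range_m, vo_depth_at_spot):
    """Scale factor: ratio of the metric laser range to the up-to-scale depth
    that monocular VO assigns to the 3D point hit by the laser spot."""
    return laser_range_m / vo_depth_at_spot

def apply_scale(poses, scale):
    """Rescale the translation part of each 4x4 camera-to-world pose so the
    whole trajectory becomes metric."""
    scaled = []
    for T in poses:
        T = np.array(T, dtype=float)
        T[:3, 3] *= scale
        scaled.append(T)
    return scaled
```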
IOP Conference Series: Earth and Environmental Science | 2014
Kaichang Di; Yiliang Liu; Wenmin Hu; Z. Yue; Zhaoqin Liu
A vast number of Mars images have been acquired by orbital missions in recent years. With the increase of spatial resolution to metre and decimetre levels, fine-scale geological features can be identified, and surface change detection becomes possible using multi-temporal images. This study briefly reviews detectable changes on the Martian surface, including new impact craters, gullies, dark slope streaks, dust devil tracks and ice caps. To facilitate fast and efficient change detection for subsequent scientific investigations, a feature-based change detection method is developed based on automatic image registration, surface feature extraction and difference statistics. Experimental results using multi-temporal images demonstrate the promising potential of the proposed method.
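A minimal sketch of the differencing stage, assuming the two images have already been registered (for example via a feature-based homography) and using OpenCV; the threshold and the change-ratio statistic are illustrative choices, not the paper's.

```python
import cv2
import numpy as np

def detect_surface_changes(img_before, img_after, H_after_to_before, diff_thresh=30):
    """Warp the later image into the earlier image's frame, difference the two,
    and keep pixels whose absolute difference exceeds a threshold."""
    h, w = img_before.shape[:2]
    aligned = cv2.warpPerspective(img_after, H_after_to_before, (w, h))
    diff = cv2.absdiff(img_before, aligned)
    _, change_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Simple difference statistic: fraction of pixels flagged as changed.
    change_ratio = float(np.count_nonzero(change_mask)) / change_mask.size
    return change_mask, change_ratio
```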
Computers & Geosciences | 2014
Zhaoqin Liu; Man Peng; Kaichang Di
A new digital elevation model (DEM) is presented for accurate surface representation in photogrammetric processing of stereo ground-based imagery. This model is named the continuative variable resolution DEM (cvrDEM). In contrast to traditional grid-based DEMs that have a single fixed resolution, the new model provides resolutions that vary with the range represented in the ground-based imagery. Functions for computing the radial and angular resolutions of the cvrDEM have been derived, and a corresponding storage structure in polar coordinates has been developed. Experimental results using publicly available NASA Mars Exploration Rover 2003 imagery demonstrate the effectiveness of the cvrDEM model: it significantly reduces storage space while fully maintaining the most useful level of mapping accuracy relative to the range from the imaging station. A terrestrial laser scanning data set was also used to validate the effectiveness of the cvrDEM.
Highlights: a continuative variable resolution digital elevation model (cvrDEM) is proposed; the cvrDEM maintains different mapping accuracies at different ranges; it reduces storage space compared with a grid DEM; it is particularly valuable for DEMs derived from ground-based imagery; experimental results using Mars imagery and terrestrial laser scanning data demonstrate its effectiveness.
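To illustrate the idea of range-dependent resolution in a polar storage structure, here is a toy Python model in which the radial cell size grows with distance from the imaging station; the specific growth law and the class interface are assumptions for illustration, not the cvrDEM formulation from the paper.

```python
import numpy as np

class PolarVariableResolutionDEM:
    """Toy polar-grid DEM: near-field terrain is stored at fine radial resolution,
    far-field terrain at coarse resolution (cell size grows with range)."""

    def __init__(self, r_min, r_max, cell0, n_az=360):
        self.n_az = n_az
        # Radial ring edges: each ring is wider than the previous one, in
        # proportion to its distance from the imaging station.
        edges = [r_min]
        while edges[-1] < r_max:
            edges.append(edges[-1] + cell0 * edges[-1] / r_min)
        self.r_edges = np.array(edges)
        self.z = np.full((len(edges) - 1, n_az), np.nan)  # elevation per (ring, azimuth) cell

    def insert(self, x, y, z):
        """Bin a sensor-centred point (x, y, elevation z) into its polar cell."""
        r = np.hypot(x, y)
        theta = np.arctan2(y, x) % (2 * np.pi)
        i = int(np.searchsorted(self.r_edges, r)) - 1
        j = int(theta / (2 * np.pi) * self.n_az)
        if 0 <= i < self.z.shape[0]:
            self.z[i, j] = z  # last-in wins; a real model would aggregate points
```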
Remote Sensing of the Environment: 18th National Symposium on Remote Sensing of China | 2014
Yiwei Sun; Bin Liu; Xiliang Sun; W. Wan; Kaichang Di; Zhaoqin Liu
Image rectification is a common task in remote sensing applications and is usually time-consuming for large images. Based on the characteristics of the Rational Function Model (RFM)-based rectification process, this paper proposes a novel CPU/GPU collaborative approach to high-speed rectification of remote sensing images. Three performance optimization strategies are presented in detail: maximizing device occupancy, improving memory access efficiency, and increasing instruction throughput. Experimental results using SPOT-5 and ZiYuan-3 (ZY3) remote sensing images show that the proposed method achieves a processing speed of up to 8 GB/min, which significantly exceeds that of common commercial software. Real-time rectification of remote sensing images can be expected with further algorithm optimization and more efficient I/O operations.
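The rectification kernel evaluates the Rational Function Model independently at every output pixel. A CPU-side NumPy sketch of the RFM ground-to-image mapping is shown below; the 20-term coefficient ordering follows the common RPC00B convention and should be verified against the actual sensor metadata (an assumption, not taken from the paper).

```python
import numpy as np

def rpc_poly(c, P, L, H):
    """Evaluate one 20-term RPC cubic polynomial at normalized latitude P,
    longitude L, and height H (RPC00B term order assumed)."""
    return (c[0] + c[1]*L + c[2]*P + c[3]*H
            + c[4]*L*P + c[5]*L*H + c[6]*P*H
            + c[7]*L*L + c[8]*P*P + c[9]*H*H
            + c[10]*P*L*H + c[11]*L**3 + c[12]*L*P*P + c[13]*L*H*H
            + c[14]*L*L*P + c[15]*P**3 + c[16]*P*H*H
            + c[17]*L*L*H + c[18]*P*P*H + c[19]*H**3)

def rfm_ground_to_image(lat, lon, h, rpc):
    """Rational Function Model: each normalized image coordinate is the ratio of
    two cubic polynomials in the normalized ground coordinates."""
    line = rpc_poly(rpc["line_num"], lat, lon, h) / rpc_poly(rpc["line_den"], lat, lon, h)
    samp = rpc_poly(rpc["samp_num"], lat, lon, h) / rpc_poly(rpc["samp_den"], lat, lon, h)
    return line, samp
```

Because the same arithmetic runs per output pixel with no dependence between pixels, the mapping parallelizes naturally across GPU threads, the kind of structure a CPU/GPU collaborative rectifier can exploit.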
Remote Sensing of the Environment: The 17th China Conference on Remote Sensing | 2010
W. Wan; Zhaoqin Liu; Kaichang Di
This paper proposes a rover localization method for long stereo image sequences using visual odometry based on bundle adjustment. First, a progressive stereo image network is built by tracking features across multiple frames of the stereo image sequence. Then the exterior orientation parameters of all images in the network are solved by bundle adjustment to obtain the rover position. Field experimental results demonstrate that the developed method can localize the rover in real time (2 fps) with an accuracy of better than 1.5%.
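The progressive image network is built by tracking the same features through consecutive frames so they become tie points for bundle adjustment. Below is a small Python/OpenCV sketch of such multi-frame tracking using pyramidal Lucas-Kanade optical flow, a stand-in for the paper's own matching; the parameters are assumptions.

```python
import cv2
import numpy as np

def track_features(frames, max_corners=500):
    """Detect corners in the first grayscale frame and track them through the
    sequence; each surviving track links one terrain point across frames."""
    tracks = {}
    prev = frames[0]
    pts = cv2.goodFeaturesToTrack(prev, max_corners, qualityLevel=0.01, minDistance=7)
    ids = np.arange(len(pts))
    for i, p in zip(ids, pts):
        tracks[int(i)] = [(0, tuple(p.ravel()))]
    for k, frame in enumerate(frames[1:], start=1):
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        keep = status.ravel() == 1
        pts, ids = nxt[keep], ids[keep]
        for i, p in zip(ids, pts):
            tracks[int(i)].append((k, tuple(p.ravel())))
        prev = frame
    # These multi-frame tie points feed the bundle adjustment that solves the
    # exterior orientation parameters of all images in the network.
    return tracks
```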
Chinese Conference on Image and Graphics Technologies | 2013
Chongyang Zhang; Kaichang Di; Zhaoqin Liu; W. Wan
Visual target tracking is one of the key technologies for implementing fully automatic exploration by a planetary rover and improving exploration efficiency. A novel visual tracking system is developed based on the Tracking-Learning-Detection (TLD) algorithm combined with stereo image matching to achieve 3D tracking of a science target. Experimental results using stereo image sequences demonstrate the excellent performance of TLD tracking and the overall effectiveness of the 3D tracking.
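One simple way to lift a 2D tracking result into 3D, shown here purely as an illustrative sketch rather than the system's actual matching step, is to locate the tracked bounding box in the rectified right image by template matching along the same rows and triangulate its centre from the disparity:

```python
import cv2

def bbox_to_3d(left, right, bbox, f, cx, cy, baseline, search=100):
    """Triangulate the centre of a tracked bounding box from a rectified stereo
    pair; assumes the target yields a positive, non-zero disparity."""
    x, y, w, h = bbox
    template = left[y:y + h, x:x + w]
    x0 = max(0, x - search)
    strip = right[y:y + h, x0:x + w]                 # search band along the same rows
    res = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    disparity = x - (x0 + max_loc[0])                # column offset of the best match
    Z = f * baseline / disparity                     # depth of the target
    u, v = x + w / 2.0, y + h / 2.0                  # box centre in the left image
    return ((u - cx) * Z / f, (v - cy) * Z / f, Z)
```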
Chinese Conference on Image and Graphics Technologies | 2013
W. Wan; Zhaoqin Liu; Kaichang Di
Localization of the rover and mapping of the surrounding terrain with high precision are critical to surface operations in planetary rover missions, such as rover traverse planning, hazard avoidance, and target approaching. It is also desirable for a future planetary rover to have real-time self-localization and mapping capabilities so that it can traverse longer distances and acquire more science data. In this research, we have developed a real-time, high-precision method for planetary rover localization and topographic mapping. High-precision localization is achieved through a new visual odometry (VO) algorithm based on bundle adjustment of an image network with adaptive selection of geometric key frames (GKFs). Local topographic mapping products are generated simultaneously in real time based on the localization results, and continuous topographic products of the entire traverse area are generated offline. Field experimental results demonstrate the effectiveness and high precision of the proposed method.
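The adaptive key-frame idea can be caricatured as a simple rule: promote the current frame to a geometric key frame when parallax relative to the last key frame is large enough, or when too few features survive tracking. The thresholds and interface below are illustrative assumptions, not the GKF criterion from the paper.

```python
import numpy as np

def is_geometric_key_frame(kf_pts, cur_pts, n_tracked, n_detected,
                           min_parallax_px=15.0, min_track_ratio=0.6):
    """Toy key-frame test: kf_pts / cur_pts are pixel positions of the same
    features in the last key frame and the current frame."""
    parallax = np.median(np.linalg.norm(np.asarray(cur_pts) - np.asarray(kf_pts), axis=1))
    survival = n_tracked / max(n_detected, 1)
    return parallax > min_parallax_px or survival < min_track_ratio
```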