Junhua Sun
Beihang University
Publication
Featured research published by Junhua Sun.
Optical Engineering | 2010
Guangjun Zhang; Zhen Liu; Junhua Sun; Zhenzhong Wei
We focus on two key problems in the calibration of multi-sensor visual measurement systems based on structured light: the calibration of the structured light vision sensor, and the global calibration of multiple vision sensors. In the calibration of the vision sensor, the light-plane equation is computed by combining the Plücker matrices of light stripes obtained at different target positions. Since the light-plane equation is optimized using all the light-stripe center points, the robustness and accuracy of the calibration are considerably improved. For the global calibration of multiple vision sensors, the relative positions of two vision sensors with non-overlapping fields of view are calibrated by means of two planar targets fixed together, using the constraint that the relative positions of the two targets are invariant. The mutual transformations between the two targets need not be known. Using one of the vision sensors as the base vision sensor, the global calibration of multiple vision sensors is achieved by calibrating each of the other vision sensors against the base vision sensor. The proposed method has already been successfully applied in practice.
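The light-plane refinement over all stripe center points can be illustrated with a minimal least-squares plane fit. This is a generic SVD fit, not the paper's Plücker-matrix formulation, and all names are illustrative:

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares plane fit to light-stripe center points.

    Returns (n, d) with n a unit normal such that n . p + d = 0
    for every point p on the plane.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d
```

Fitting to all center points jointly, rather than stripe by stripe, is what the abstract credits for the improved robustness.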
Measurement Science and Technology | 2011
Zhen Liu; Guangjun Zhang; Zhenzhong Wei; Junhua Sun
The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using a multiple target (MT). The MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF), and the MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out, and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.
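The role of the rigidly fixed sub-targets can be illustrated by how poses chain together once the fixed sub-target-to-sub-target transform has been solved. This is a hedged sketch with 4×4 homogeneous matrices; the variable names and the chaining step are illustrative, not the paper's solver:

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about z (helper for the sketch)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def chain_to_gcf(T_c1_t1, T_t1_t2, T_c2_t2):
    """Pose of sensor 2 in sensor 1's (global) frame.

    T_a_b maps coordinates from frame b to frame a. Once the fixed
    transform T_t1_t2 between the sub-targets is known, the
    sensor-to-sensor transform follows by chaining: c1 <- t1 <- t2 <- c2.
    """
    return T_c1_t1 @ T_t1_t2 @ np.linalg.inv(T_c2_t2)
```

The invariance of T_t1_t2 across the (at least four) placements is what makes it solvable without measuring it directly.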
Applied Optics | 2016
Zheng Gong; Junhua Sun; Guangjun Zhang
Wheel diameter is a significant parameter related to the safety and comfort of trains. The traditional circle fitting method can be sensitive to noise because of the small distribution of contact points. This paper proposes a structured-light measurement method based on the cycloid constraint that obtains a wide distribution of contact points. The wheel is measured twice while running, and the cycloid is used to integrate all the contact points onto one circle. Simulations analyze the possible errors and compare the results with the traditional method, showing that the proposed method is more precise and remains stable against 3D reconstruction noise and out-of-roundness. A physical experiment also shows that the result is precise and robust.
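The traditional circle fit that the abstract compares against can be sketched as an algebraic (Kåsa-style) least-squares fit. When the contact points lie on a small arc, this system becomes ill-conditioned, which is the noise sensitivity the cycloid constraint is designed to overcome. A generic sketch, not the paper's code:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) circle fit.

    Solves x^2 + y^2 + D x + E y + F = 0 in least squares and
    returns (center, radius).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r
```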
Measurement Science and Technology | 2014
Jie Zhang; Junhua Sun; Zhen Liu; Guangjun Zhang
Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact operation, high measurement speed, and other advantages. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within a measurement range of about 150 mm. Compared to traditional calibration methods, the proposed method places no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, so the measurement range is not limited to the linear range. The calibration of the LDS is easy and quick to implement, and the method can be applied in a wide range of fields.
Chinese Journal of Mechanical Engineering | 2012
Qianzhe Liu; Junhua Sun; Zhen Liu; Guangjun Zhang
Multi-sensor vision systems play an important role in the 3D measurement of large objects. However, due to the wide distribution of sensors, the problem of lacking common fields of view (FOV) arises frequently, which makes the global calibration of the vision system quite difficult. The primary existing solution relies on large-scale surveying equipment, which is cumbersome and inconvenient for field calibration. In this paper, a global calibration method for multi-sensor vision systems is proposed and investigated. The proposed method utilizes pairs of skew laser lines, generated by a group of laser pointers, as the calibration objects. Each pair of skew laser lines provides a unique coordinate system in space that can be reconstructed in a given vision sensor's coordinate frame using a planar pattern. The geometries of the sensors are then computed under rigid-transformation constraints by taking the coordinates of each pair of skew lines as the intermediary. The method is applied to both synthetic data and a real two-camera vision system; the results show its validity and good performance. The prime contribution of this paper is taking skew laser lines as the global calibration objects, which makes the method simple and flexible. The method requires no expensive equipment and can be used for large-scale calibration.
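A unique frame from a pair of skew lines can be sketched via their common perpendicular: its midpoint serves as the origin and its direction as one axis. This is a generic closest-points construction; the paper's exact frame convention is an assumption here:

```python
import numpy as np

def frame_from_skew_lines(p1, d1, p2, d2):
    """Build a right-handed frame from two skew lines (point, direction).

    Origin: midpoint of the common perpendicular's foot points.
    x: along line 1; z: along the common perpendicular; y = z x x.
    Returns (origin, 3x3 rotation with the axes as columns).
    Assumes the lines are genuinely skew (non-parallel, non-intersecting).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel lines
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    foot1, foot2 = p1 + s * d1, p2 + t * d2
    origin = 0.5 * (foot1 + foot2)
    z = foot2 - foot1
    z = z / np.linalg.norm(z)      # common perpendicular direction
    x = d1                          # perpendicular to z by construction
    y = np.cross(z, x)
    return origin, np.column_stack([x, y, z])
```

Because the frame is fixed by the lines alone, two sensors that each reconstruct the same line pair recover the same frame, which is what ties their coordinate systems together.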
Image and Vision Computing | 2016
Junhua Sun; Jie Zhang; Guangjun Zhang
3D point cloud registration is a fundamental and critical issue in 3D reconstruction and object recognition. Most existing methods are based on local shape descriptors. In this paper, we propose a discriminative and robust local shape descriptor, the Regional Curvature Map (RCM). The keypoint and its neighboring points are first projected onto a 2D plane according to a mapping that is robust against normal errors. Then, the projection points are quantized into corresponding bins of the 2D support region, and their weighted curvatures are encoded into a curvature distribution image. Based on the RCM, an efficient and accurate 3D point cloud registration method is presented. We first find 3D point correspondences by an RCM searching and matching strategy based on the sub-regions of the RCM. A coarse registration is then achieved with geometrically consistent point correspondences, followed by a fine registration based on a modified iterative closest point (ICP) algorithm. The experimental results demonstrate that the RCM is distinctive and robust against normal errors and varying point cloud density, and that the corresponding registration method achieves higher registration precision and efficiency than two existing registration methods.
Highlights: a novel 3D shape descriptor, the Regional Curvature Map (RCM), is proposed; an automatic 3D point cloud registration method based on RCMs is introduced; the RCM is discriminative and robust against normal errors and varying point cloud density; the RCM searching and matching strategy reduces the influence of areas with missing data; the RCM-based registration method achieves a higher registration rate and efficiency.
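The fine-registration stage builds on ICP. A minimal point-to-point iteration, brute-force nearest neighbours plus a Kabsch/SVD pose update, can be sketched as follows (this is the textbook algorithm, not the paper's modified variant):

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve the best rigid transform (Kabsch)."""
    # Brute-force nearest neighbours (fine for small clouds).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Iterate and return the accumulated (R, t) mapping src onto dst."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

ICP only converges from a good initial guess, which is why the RCM correspondences supply the coarse registration first.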
Optical Engineering | 2013
Qianzhe Liu; Junhua Sun; Yuntao Zhao; Zhen Liu
In some computer vision applications, it is necessary to calibrate the geometric relationships of nonoverlapping cameras. However, due to the lack of a common field of view, the calibration of this camera topology is quite difficult. A calibration method for nonoverlapping cameras is proposed and investigated. The proposed method utilizes several light planes, which can be generated by a line laser projector or a rotary laser level, as the calibration objects. The fact that the local light planes observed by different cameras are identical in global coordinates is used to recover the geometric relationships. Results on both synthetic and real data show the validity and performance of the proposed method. The method is simple and flexible and can be used to calibrate the geometric relationships of cameras distributed in large-scale spaces without expensive equipment such as theodolites and laser trackers.
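The identity constraint on the light planes rests on how plane coefficients transform between camera frames: if homogeneous points map as p' = T p, then plane vectors map as pi' = T^(-T) pi. This is a standard projective-geometry fact, sketched here with illustrative names:

```python
import numpy as np

def transform_plane(pi, T):
    """Map plane coefficients pi = (a, b, c, d), for a x + b y + c z + d = 0,
    from one frame to another, given the 4x4 point transform T.

    If points transform as p' = T @ p, planes transform as
    pi' = inv(T).T @ pi, so the incidence pi . p = 0 is preserved.
    """
    return np.linalg.inv(T).T @ pi
```

Writing each camera's locally observed plane in global coordinates this way and equating the results yields constraints on the unknown camera poses.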
Applied Optics | 2015
Xiaoming Chen; Junhua Sun; Zhen Liu; Guangjun Zhang
Dynamic tread wear measurement is difficult but significant for railway transportation safety and efficiency. The accuracy of existing methods is inclined to be affected by environmental vibrations, since they are highly dependent on the accurate calibration of the relative pose between vision sensors. In this paper, we present a method to obtain full wheel profiles based on automatic registration of vision sensor data instead of traditional global calibration. We adopt two structured light vision sensors to recover the inner and outer profiles of each wheel, and register them by the iterative closest point algorithm. Computer simulations show that the proposed method is insensitive to noise and relative pose vibrations. Static experiments demonstrate that our method has high accuracy and good repeatability. Dynamic experiments show that the measurement accuracy of our method is about 0.18 mm, a twofold improvement over traditional methods.
Seventh International Symposium on Instrumentation and Control Technology: Optoelectronic Technology and Instruments, Control Theory and Automation, and Space Exploration | 2008
Ziyan Wu; Junhua Sun; Qianzhe Liu; Guangjun Zhang
Large-scale stereo vision sensors are of great importance in the measurement of large free-form surfaces. The intrinsic parameters of the cameras and the structure parameters of the stereo vision sensor must be calibrated beforehand. Traditional methods are mainly based on planar and 3D targets, which are expensive and difficult to manufacture, especially in large dimensions. A calibration method for stereo vision sensors based on one-dimensional (1D) targets is proposed. First, two 1D targets are placed randomly, and multiple images of the targets are acquired with the camera from different viewing angles. The intrinsic parameters of the camera are solved using the constraint that the spatial angle between the two 1D targets is constant. The stereo vision sensor is then set up with the two calibrated cameras, and multiple images of a 1D target undergoing unknown motion are acquired. Based on the constraint of the known distance between two feature points on the target, initial values of the structure parameters are estimated with a linear method, and the precise structure parameters of the stereo vision sensor are obtained with nonlinear optimization by minimizing a cost function involving the scale factors. Experimental results show that the measurement precision of the stereo vision sensor is 0.052 mm at a working distance of 3500 mm over a measurement area of 4000 mm × 3000 mm. The proposed method is shown to be suitable for field calibration of stereo vision sensors in large-scale measurement applications, thanks to its easy operation and high efficiency.
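The known-distance constraint on the 1D target can be illustrated by its simplest consequence: a reconstruction known only up to scale is fixed to metric units by the measured distance between two feature points. A generic sketch, not the paper's full optimization:

```python
import numpy as np

def recover_scale(p_a, p_b, true_distance):
    """Scale factor that makes the reconstructed segment |p_a p_b|
    match the known physical distance between the two feature points."""
    return true_distance / np.linalg.norm(np.asarray(p_b, dtype=float)
                                          - np.asarray(p_a, dtype=float))

def apply_scale(points, s):
    """Rescale an up-to-scale reconstruction to metric units."""
    return s * np.asarray(points, dtype=float)
```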
Applied Optics | 2013
Junhua Sun; Jie Zhang; Zhen Liu; Guangjun Zhang
It is currently difficult to achieve good real-time dynamic angle measurements with high accuracy over large ranges. In this paper, a photoelectric measurement method for dynamic angles based on three laser displacement sensors (LDSs) is proposed. Offline, a dynamic angle vision measurement model is established, and the system is calibrated using a planar target moved by a 2D moving platform. During measurement, three laser beams emitted from the three LDSs are projected onto a rotating plane, three noncollinear points are acquired synchronously, and the rotation angle is calculated in real time. Simulations verify the feasibility of the method theoretically. Experimental results demonstrate that the method achieves measurement accuracies of 0.008° under a quasi-static condition of 80°/s and 0.046° under a highly dynamic condition of 1000°/s, within a measurement range of about ±40°.
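The per-frame angle computation can be sketched as: a plane normal from the three noncollinear laser spots, then the angle between the current and reference normals. This is an illustrative reconstruction; the axis and sign conventions are assumptions, not the paper's model:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three noncollinear points."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def rotation_angle(ref_pts, cur_pts):
    """Angle (degrees) between the reference plane and the current plane,
    from the two triples of synchronously measured points."""
    n0 = plane_normal(*ref_pts)
    n1 = plane_normal(*cur_pts)
    # abs() makes the result orientation-agnostic; clip guards arccos.
    cosang = np.clip(abs(n0 @ n1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```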