Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bingwei Hui is active.

Publication


Featured research published by Bingwei Hui.


Optical Engineering | 2012

Line-scan camera calibration in close-range photogrammetry

Bingwei Hui; Gongjian Wen; Zhuxin Zhao; Deren Li

A novel line-scan camera calibration method for close-range photogrammetry is proposed. Since a line-scan camera senses in only one dimension, it is hard to identify the corresponding space points from linear data captured in a static state. To address this problem, the camera is fixed to a programmable linear stage, and a scan image of the calibration pattern is grabbed while the camera undergoes uniform rectilinear motion. The image points can then be matched unambiguously with the space points on the pattern. A pair of projective equations is established to describe this dynamic imaging model, which is determined by six extrinsic camera parameters, five intrinsic camera parameters, and three motion parameters. All fourteen parameters are first estimated approximately using a direct linear transformation of a reasonably simplified camera model, and the results are then refined by the nonlinear least squares method (LSM). Both computer-simulated and real data are used to test the calibration method. Its robustness and accuracy are verified by extensive simulated experiments, and for the real data the root mean square error of the reprojected points is less than 0.3 pixels.
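For illustration, a minimal numpy/scipy sketch of the nonlinear refinement stage is given below. It minimizes reprojection error over a plain pinhole parameter set rather than the paper's fourteen-parameter dynamic line-scan model, so the parameterization and the project() helper are assumptions made for this example.

```python
# Minimal sketch of the nonlinear refinement stage: starting from a linear
# (DLT-style) initial estimate, refine camera parameters by minimizing the
# reprojection error with a least-squares solver.  A plain pinhole model is
# used here instead of the paper's fourteen-parameter dynamic line-scan model.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(params, pts3d):
    """Project 3-D points with a simplified pinhole model.

    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz]  (illustrative layout)
    """
    fx, fy, cx, cy = params[:4]
    R = Rotation.from_rotvec(params[4:7]).as_matrix()
    t = params[7:10]
    cam = pts3d @ R.T + t                    # world -> camera coordinates
    u = fx * cam[:, 0] / cam[:, 2] + cx      # perspective division
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.column_stack([u, v])


def refine(params0, pts3d, pts2d):
    """Refine an initial parameter estimate by minimizing reprojection error."""
    def residuals(p):
        return (project(p, pts3d) - pts2d).ravel()
    return least_squares(residuals, params0, method="lm").x
```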


IEEE Transactions on Instrumentation and Measurement | 2013

A Novel Line Scan Camera Calibration Technique With an Auxiliary Frame Camera

Bingwei Hui; Gongjian Wen; Peng Zhang; Deren Li

A practical line scan camera calibration technique for close-range photogrammetric applications is proposed. It is implemented by rigidly coupling the line scan camera to an auxiliary frame camera whose intrinsic parameters have been obtained in advance. The calibration is then divided into two independent stages. First, images of a 2-D dynamic pattern are acquired by the two cameras from several different views. Based on these images and the line scan camera model, the intrinsic parameters of the line scan camera and the rigid transform parameters between the two coupled cameras are calibrated. This stage can be completed beforehand in the laboratory. Second, during photogrammetric measurement, the extrinsic parameters of the line scan camera are determined indirectly via space resection of the auxiliary frame camera and the previously obtained rigid transform between the two cameras. Experiments show that the proposed calibration provides robust and accurate results.
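The second stage amounts to chaining two rigid transforms. The sketch below shows that composition under an assumed world-to-camera convention; the function and variable names are illustrative and do not come from the paper.

```python
# Sketch of the second calibration stage: the line scan camera's extrinsic
# parameters are obtained by chaining the frame camera's pose (from space
# resection) with the pre-calibrated frame-to-line-scan rigid transform.
# World-to-camera conventions are assumed; names are illustrative only.
import numpy as np


def compose_pose(R_wf, t_wf, R_fl, t_fl):
    """Compose world->frame and frame->line-scan transforms.

    x_frame = R_wf @ x_world + t_wf
    x_line  = R_fl @ x_frame + t_fl
    Returns (R_wl, t_wl) such that x_line = R_wl @ x_world + t_wl.
    """
    R_wl = R_fl @ R_wf
    t_wl = R_fl @ t_wf + t_fl
    return R_wl, t_wl
```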


IEEE Geoscience and Remote Sensing Letters | 2016

CFAR Detection of Moving Range-Spread Target in White Gaussian Noise Using Waveform Contrast

Xiaoliang Yang; Gongjian Wen; Conghui Ma; Bingwei Hui; Baiyuan Ding; YunHua Zhang

In wideband stepped-frequency radar systems, relative motion between the radar and the target can induce severe distortions of high-resolution range profiles (HRRPs), such as range migration, shape deformation, and signal-to-noise-ratio (SNR) loss. If ignored, these distortions can lead to unacceptable deterioration of detection performance. To solve this problem, a new algorithm for detecting moving range-spread targets (RSTs) is proposed in this letter. The proposed detector uses the waveform contrast of the HRRP to perform both motion compensation and constant false-alarm rate target detection, and it remains simple and robust even in low-SNR scenarios. Simulated experiments verify the effectiveness and advantages of the proposed detector.
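For context, the sketch below implements a generic cell-averaging CFAR over a one-dimensional power profile. It is not the waveform-contrast detector proposed in the letter; it only illustrates the constant false-alarm-rate thresholding that such detectors build on.

```python
# Generic cell-averaging CFAR over a high-resolution range profile.  This is
# not the paper's waveform-contrast detector; it only illustrates the basic
# constant false-alarm-rate thresholding principle.
import numpy as np


def ca_cfar(power, num_train=16, num_guard=4, pfa=1e-4):
    """Return a boolean detection mask for a 1-D power profile."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    # Threshold factor for CA-CFAR with exponentially distributed noise power.
    total_train = 2 * num_train
    alpha = total_train * (pfa ** (-1.0 / total_train) - 1.0)
    half = num_train + num_guard
    for i in range(half, n - half):
        leading = power[i - half:i - num_guard]
        trailing = power[i + num_guard + 1:i + half + 1]
        noise = np.mean(np.concatenate([leading, trailing]))
        detections[i] = power[i] > alpha * noise
    return detections
```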


Optical Engineering | 2012

Determination of line scan camera parameters via the direct linear transformation

Bingwei Hui; Jinrong Zhong; Gongjian Wen; Deren Li

Abstract. A direct linear transformation (DLT) model is derived to describe the scan imagery of a line scan camera undergoing uniform rectilinear motion. When more than five points on the scan image and their corresponding three-dimensional space points are substituted into the DLT model, 11 coefficients are determined directly and linearly without any approximation. The 11 physically meaningful line scan camera parameters are then recovered from the 11 DLT coefficients through a group of analytical operations. The performance is tested and verified by both simulated experiments and a demonstration with a real line scan camera.
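As a rough illustration, the following sketch solves the classical frame-camera DLT with an SVD; the paper applies the same linear machinery to its uniform-motion scan-image model, so the code should be read as the generic formulation rather than the paper's exact one.

```python
# Classical DLT solve via SVD: with at least six point correspondences the
# eleven independent projection coefficients are recovered linearly.  This is
# the generic frame-camera formulation, not the paper's scan-image model.
import numpy as np


def dlt(pts3d, pts2d):
    """Estimate a 3x4 projection matrix from N >= 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    P = vt[-1].reshape(3, 4)
    return P / P[2, 3]          # normalize so the 12th coefficient is 1
```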


international conference on digital image processing | 2018

3D object recognition based on improved point cloud descriptors

Weiwei Wen; Gongjian Wen; Bingwei Hui; Shaohua Qiu

For 3D object recognition, a discriminative point cloud descriptor is required to represent the object. Existing global descriptors encode the whole object into a vector but are sensitive to occlusion. Local descriptors, by contrast, encode only a small neighborhood around a key point and are more robust to occlusion, but many objects share similar local surfaces. This paper presents a novel hybrid method that segments a point cloud into multiple subparts to overcome these shortcomings. In the offline training stage, a model library is built that integrates both global and local surface descriptions of partial point clouds. In the online recognition stage, scene objects are represented by their subparts, and a voting scheme is performed to recognize them. Experimental results on public datasets show that the proposed method improves recognition performance significantly compared with conventional global and local descriptors.
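A minimal sketch of a nearest-neighbour voting stage is shown below. The descriptor layout and library structure are assumptions made for this example and do not reproduce the paper's exact pipeline.

```python
# Sketch of the online voting stage: each scene subpart descriptor is matched
# to its nearest neighbour in the model library and the matched model gains a
# vote; the model with the most votes is reported.  Library layout and
# descriptor extraction are assumptions, not the paper's exact pipeline.
import numpy as np
from collections import Counter


def recognize(scene_descriptors, library_descriptors, library_labels):
    """scene_descriptors: (N, D); library_descriptors: (M, D); library_labels: (M,)."""
    votes = Counter()
    for d in scene_descriptors:
        dists = np.linalg.norm(library_descriptors - d, axis=1)
        votes[library_labels[int(np.argmin(dists))]] += 1
    # Return the model label with the most votes.
    return votes.most_common(1)[0][0]
```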


Image and Vision Computing | 2018

LCO: Lightweight Convolution Operators for Fast Tracking

Dongdong Li; Gongjian Wen; Yangliu Kuai; Bingwei Hui

Abstract. In recent years, trackers based on Discriminative Correlation Filters (DCFs) have achieved continuous performance improvements through sophisticated learning models (e.g., HCF [1]) or the integration of multiple features (e.g., CCOT [2]). However, the increasingly complex models introduce a massive number of trainable parameters into the correlation filter, which significantly slows down tracking and increases the risk of over-fitting. In this work, we tackle model complexity and over-fitting by introducing Lightweight Convolution Operators (LCO). The LCO tracker applies dimensionality reduction and spatial constraints to the correlation filters to reduce model complexity and accelerate tracking. Compared with the baseline method, LCO removes over 90% of the redundant trainable parameters in the tracking model. We perform experiments on three benchmarks: OTB2013, OTB100, and VOT2016. On OTB100, LCO runs at 24 fps with hand-crafted features on a CPU and at 30 fps with shallow convolutional features on a GPU. With shallow convolutional features, LCO obtains an AUC of 65.8% in the success plots on OTB100. On VOT2016, the tracker ranks second in Expected Average Overlap (EAO) and first in Equivalent Filter Operations (EFO) among the top five trackers.
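For background, the sketch below implements a minimal single-channel correlation filter learned and applied in the Fourier domain (MOSSE-style). It illustrates the DCF framework that LCO builds on; the paper's dimensionality reduction and spatial constraints are not reproduced here.

```python
# Minimal single-channel correlation filter in the Fourier domain (a
# MOSSE-style discriminative correlation filter).  It illustrates the DCF
# framework that LCO builds on; the paper's dimensionality reduction and
# spatial constraints on the filter are not included.
import numpy as np


def train_filter(patch, target_response, lam=1e-2):
    """Learn a correlation filter (conjugate, in the Fourier domain) from one patch."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)      # desired Gaussian-shaped response
    H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)
    return H_conj


def detect(H_conj, patch):
    """Correlate a new patch with the filter and return the response map."""
    F = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(H_conj * F))
```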


Journal of Applied Remote Sensing | 2017

Three-dimensional electromagnetic-model-based absolute attitude measurement using monostatic wideband radar

Xiaoliang Yang; Gongjian Wen; Conghui Ma; Bingwei Hui

Abstract. This paper proposes an absolute attitude measurement approach using a monostatic wideband radar. In this approach, the three-dimensional electromagnetic model (3-D em-model) and a parametric motion model of the target are combined to estimate absolute attitude. The 3-D em-model is established offline from the target's geometric structure; scattering characteristics such as the radar cross section and radar images from one dimension to three dimensions can be conveniently predicted from it. By matching measured high-resolution range profiles (HRRPs) with the HRRPs predicted by the 3-D em-model, the directions of the lines of sight relative to the target at the different measuring times are first obtained. Then, based on these directions and the parametric motion model of the target, the absolute attitude at each measuring time is acquired. Experiments using both data predicted by a high-frequency em-code and data measured in an anechoic chamber verify the validity of the proposed method.
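A simplified sketch of the HRRP matching step follows. The normalized-correlation similarity and the dictionary layout are assumptions made for illustration; the paper's matching criterion may differ.

```python
# Sketch of the HRRP matching step: the measured profile is compared against
# profiles predicted by the electromagnetic model over a grid of candidate
# line-of-sight directions, and the best-matching candidate is returned.  The
# normalized-correlation similarity and the dictionary layout are assumptions.
import numpy as np


def match_hrrp(measured, predicted_profiles, candidate_directions):
    """predicted_profiles: (K, R) HRRPs; candidate_directions: K aspect hypotheses."""
    m = (measured - measured.mean()) / (measured.std() + 1e-12)
    best_score, best_idx = -np.inf, 0
    for k, p in enumerate(predicted_profiles):
        q = (p - p.mean()) / (p.std() + 1e-12)
        score = float(np.dot(m, q)) / len(m)   # normalized correlation
        if score > best_score:
            best_score, best_idx = score, k
    return candidate_directions[best_idx], best_score
```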


7th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optical Test and Measurement Technology and Equipment | 2014

3D reconstruction with two webcams and a laser line projector

Dongdong Li; Bingwei Hui; Shaohua Qiu; Gongjian Wen

Three-dimensional (3D) reconstruction is one of the most attractive research topics in photogrammetry and computer vision, and reconstruction with simple, consumer-grade equipment plays an increasingly important role. In this paper, a 3D reconstruction desktop system is built on binocular stereo vision with a laser scanner. The hardware consists of a simple commercial hand-held laser line projector and two common webcams for image acquisition. In general, 3D reconstruction based on passive triangulation requires point correspondences among various viewpoints, and developing matching algorithms remains a challenging task in computer vision. In our proposal, with the help of the laser line projector, stereo correspondences are established robustly from epipolar geometry and the laser shadow on the scanned object. To establish correspondences more conveniently, epipolar rectification is performed using Bouguet's method after stereo calibration with a printed chessboard. The 3D coordinates of the observed points are computed by ray-ray triangulation, and reconstruction outliers are removed using the planarity constraint of the laser plane. Dense 3D point clouds are obtained from multiple scans at different orientations, each derived by sweeping the laser plane across the object to be reconstructed. The Iterative Closest Point algorithm is employed to register the derived point clouds, and the rigid body transformation between neighboring scans yields the complete 3D point cloud. Finally, polygon meshes are reconstructed from the point cloud, and color images are used for texture mapping to obtain a lifelike 3D model. Experiments show that the reconstruction method is simple and efficient.
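As an illustration of the triangulation step, the sketch below recovers a 3D point from two calibrated projection matrices using standard linear (DLT-style) triangulation rather than the ray-ray method described in the paper.

```python
# Sketch of the triangulation step: given the two calibrated 3x4 projection
# matrices and a matched pixel pair on the laser stripe, the 3-D point is
# recovered here by standard linear (DLT-style) triangulation rather than the
# ray-ray method described in the paper.
import numpy as np


def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: matched pixel coordinates."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates
```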


IEEE Geoscience and Remote Sensing Letters | 2015

A 3-D Electromagnetic-Model-Based Algorithm for Absolute Attitude Measurement Using Wideband Radar

Xiaoliang Yang; Gongjian Wen; Jinrong Zhong; Bingwei Hui; Conghui Ma


symposium on photonics and optoelectronics | 2011

A Method for the Motion Parameters Estimation in Incomplete Synchro-Ballistic Photography

Zhuxin Zhao; Bingwei Hui; Gongjian Wen; Deren Li

Collaboration


Dive into Bingwei Hui's collaborations.

Top Co-Authors

Gongjian Wen (National University of Defense Technology)
Deren Li (National University of Defense Technology)
Conghui Ma (National University of Defense Technology)
Jinrong Zhong (National University of Defense Technology)
Shaohua Qiu (National University of Defense Technology)
Xiaoliang Yang (National University of Defense Technology)
Zhuxin Zhao (National University of Defense Technology)
Baiyuan Ding (National University of Defense Technology)
Dongdong Li (National University of Defense Technology)
Peng Zhang (National University of Defense Technology)