Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Fuqiang Zhou is active.

Publication


Featured research published by Fuqiang Zhou.


Optics Express | 2014

Precise calibration of binocular vision system used for vision measurement

Yi Cui; Fuqiang Zhou; Yexin Wang; Liu Liu; He Gao

Binocular vision calibration is of great importance in 3D machine vision measurement, and nonlinear optimization is a crucial step for improving its accuracy. Existing optimization methods mostly minimize the sum of reprojection errors of the two cameras, computed in their respective 2D image pixel coordinates. However, the subsequent measurement is carried out in the 3D coordinate system, which is not the coordinate system used for optimization, and the error criteria for optimization and measurement therefore differ: an equal pixel-distance error in the 2D image plane corresponds to different 3D metric distance errors at different positions in front of the camera. To address these issues, we propose a precise calibration method for a binocular vision system that minimizes the metric distance error between points reconstructed by optimal triangulation and the ground truth in the 3D measurement coordinate system. In addition, the inherent epipolar constraint and a constant-distance constraint are combined to enhance the optimization. To evaluate the performance of the proposed method, both simulated and real experiments were carried out; the results show that the proposed method reliably and efficiently improves measurement accuracy compared with the conventional method.
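
As a rough illustration of the idea, the sketch below (Python with NumPy/SciPy; all names are hypothetical) refines stereo extrinsics by minimizing the 3D metric distance between triangulated points and known ground-truth points. It uses plain DLT triangulation as a stand-in for the paper's optimal triangulation and omits the epipolar and constant-distance constraints.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation of one correspondence; a stand-in for the
        # optimal triangulation used in the paper.
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]

    def metric_residuals(params, K1, K2, pts1, pts2, pts3d_gt):
        # 3D distance between reconstructed and ground-truth points in the
        # measurement (left-camera) frame -- a metric error, not a pixel error.
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:6].reshape(3, 1)
        P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K2 @ np.hstack([R, t])
        recon = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
        return (recon - pts3d_gt).ravel()

    # result = least_squares(metric_residuals, x0,
    #                        args=(K1, K2, pts1, pts2, pts3d_gt))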


Optical Engineering | 2003

Distortion correction for a wide-angle lens based on real-time digital image processing

Jie Jiang; Guangjun Zhang; Fuqiang Zhou; Daoyin Yu; Hongbo Xie; Hang Liu

Images captured with wide-angle lenses suffer from spatial distortion, which prevents accurate measurement and feature extraction. In this work, a mathematical model based on polynomial mapping is used to map images from the distorted image space onto the corrected image space. The model parameters include the polynomial coefficients, the distortion center, and the dot centers. A new technique that estimates the distortion center from an evaluation function is presented. For dot-center estimation, a seed-fill algorithm is used to traverse all pixels. The expansion polynomial is obtained using cubic spline interpolation. The real-time distortion correction is implemented on a field-programmable gate array (FPGA) and therefore runs independently of a computer. The major benefit of using an FPGA is that the same circuit can be reused, without modification, for other circularly symmetric wide-angle lenses.
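
The core of such a polynomial mapping can be sketched as follows (Python/NumPy; function and parameter names are hypothetical). The coefficients and distortion center would come from the calibration step described above; the per-pixel computation is what the FPGA implementation realizes in dedicated hardware.

    import numpy as np

    def correct_point(u, v, center, coeffs):
        # Radial polynomial mapping about the distortion center:
        # r_corrected = c1*r + c2*r**2 + ... (coefficients fitted beforehand).
        du, dv = u - center[0], v - center[1]
        r = np.hypot(du, dv)
        if r == 0.0:
            return u, v
        r_corr = sum(c * r ** (i + 1) for i, c in enumerate(coeffs))
        scale = r_corr / r
        return center[0] + du * scale, center[1] + dv * scale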


Journal of Real-time Image Processing | 2016

Fast star centroid extraction algorithm with sub-pixel accuracy based on FPGA

Fuqiang Zhou; Jingxin Zhao; Tao Ye; Lipeng Chen

A spacecraft's attitude information plays an important role in celestial navigation. The attitude is determined mainly by matching each star's centroid in the captured image with its corresponding entry in a star catalog. In general, a star image can be regarded as a spot with a diameter of less than 5 pixels, so extracting the star centroid with sub-pixel accuracy is very difficult, especially in hardware systems such as FPGAs. Existing high-accuracy spot-centroid extraction methods require many pixels to perform their complex computations; limited by the star's diameter and the hardware constraints, such methods are not suitable for star centroid extraction in hardware. To solve this problem, a two-step method for extracting the star centroid with sub-pixel accuracy is proposed. The pixel-level center is first located through the zero crossing of the first derivative in a small region. Taking the pixel-level center as the middle of a fixed-size window, the sub-pixel offsets are then calculated with a fixed-window weighted-centroid method, and the sub-pixel center of the star is obtained by adding the offsets to the pixel-level center. The method can be implemented in hardware, using the Verilog hardware description language, to increase processing speed. Simulations were performed on a computer and an FPGA. Experimental results show that the two-step method performs excellently in both accuracy and processing speed, and that it offers strong noise resistance and good robustness compared with other methods.
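
A software stand-in for the two-step idea might look like the sketch below (Python/NumPy; the window size and names are assumptions). The peak search plays the role of the first-derivative zero-crossing step, and the fixed-window weighted centroid provides the sub-pixel refinement.

    import numpy as np

    def star_centroid(img, half_win=3):
        # Step 1: pixel-level peak of the spot (plays the role of the
        # first-derivative zero crossing); assumes the star is not at the border.
        py, px = np.unravel_index(np.argmax(img), img.shape)
        # Step 2: intensity-weighted centroid inside a fixed window.
        ys, ye = py - half_win, py + half_win + 1
        xs, xe = px - half_win, px + half_win + 1
        win = img[ys:ye, xs:xe].astype(float)
        yy, xx = np.mgrid[ys:ye, xs:xe]
        total = win.sum()
        return (xx * win).sum() / total, (yy * win).sum() / total  # sub-pixel (x, y)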


IEEE Sensors Journal | 2014

Three-Dimensional Measurement Approach in Small FOV and Confined Space Using an Electronic Endoscope

Fuqiang Zhou; Yexin Wang; Liu Liu; Yi Cui; He Gao

It is difficult to perform three-dimensional (3D) measurement in a small field of view (FOV) or in a confined space with traditional sensors, because they cannot be inserted into or operated flexibly in such environments. To solve this problem, a sensor composed of an electronic endoscope and a pair of mirrors is designed, combining the flexibility of the endoscope's transmission wire with the advantages of stereo vision. The calibration of the sensor and two corresponding-point matching methods are described. For applications such as measuring the diameter of a 3D circle, an optimization method is used that obtains the diameter directly from the recovered 3D points. Experiments show that both the calibration and the diameter measurement achieve high accuracy, which offers the potential to expand computer vision applications, particularly to small-FOV and confined environments.
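
The "diameter directly from the recovered 3D points" step could be sketched as below (Python/NumPy; names are hypothetical): fit the supporting plane of the reconstructed points, project them into it, and fit a circle algebraically. The paper's own optimization may differ in detail.

    import numpy as np

    def circle_diameter_3d(pts):
        # pts: (N, 3) reconstructed points lying on (or near) a 3D circle.
        centroid = pts.mean(axis=0)
        # The two leading right singular vectors span the best-fit plane.
        Vt = np.linalg.svd(pts - centroid)[2]
        u, v = Vt[0], Vt[1]
        p2d = np.column_stack([(pts - centroid) @ u, (pts - centroid) @ v])
        # Algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c.
        A = np.column_stack([2.0 * p2d, np.ones(len(p2d))])
        rhs = (p2d ** 2).sum(axis=1)
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return 2.0 * np.sqrt(c + a ** 2 + b ** 2)  # diameter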


Sensors | 2014

Detection of Foreign Matter in Transfusion Solution Based on Gaussian Background Modeling and an Optimized BP Neural Network

Fuqiang Zhou; Zhen Su; Xinghua Chai; Lipeng Chen

This paper proposes a new method to detect and identify foreign matter mixed into a plastic bottle filled with transfusion solution. A spin-stop mechanism and a mixed illumination scheme are used to obtain high-contrast images between the moving foreign matter and the static transfusion background. A Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of the moving objects is extracted and selected by the ReliefF algorithm, and the optimal feature vectors are fed into a back-propagation (BP) neural network to distinguish foreign matter from bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network, yielding higher classification accuracy and a faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles; the misdetection and false-alarm rates are low, and the detection accuracy and speed are satisfactory.
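
For the background-modelling stage only, a minimal sketch using OpenCV's MOG2 Gaussian-mixture subtractor as a stand-in for the paper's model is shown below (the file name, history length, and threshold values are assumptions). The extracted contours would then be described by ReliefF-selected features and classified by the MEA-optimized BP network.

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16,
                                                    detectShadows=False)
    cap = cv2.VideoCapture("bottle_sequence.avi")  # hypothetical spin-stop sequence
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)             # moving objects vs. static background
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Each contour's features would feed the MEA-optimized BP classifier.
    cap.release()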


Advances in Mechanical Engineering | 2014

3D Wide FOV Scanning Measurement System Based on Multiline Structured-Light Sensors

He Gao; Fuqiang Zhou; Bin Peng; Yexin Wang; Haishu Tan

Structured-light three-dimensional (3D) vision measurement is currently one of the most common approaches to obtaining 3D surface data. However, existing structured-light scanning measurement systems are mostly built around a single sensor, which leads to three obvious problems: limited measurement range, blind measurement areas, and low scanning efficiency. To solve these problems, we developed a novel 3D wide-FOV scanning measurement system that adopts two multiline structured-light sensors, each composed of a digital CCD camera and three line-structured-light projectors. During measurement, the object is scanned by the two sensors from two different angles at a fixed speed, which expands the measurement range and reduces the blind measurement area. More importantly, since six light stripes are projected on the object surface simultaneously, the scanning efficiency is greatly improved. The multiline structured-light sensor scanning measurement system (MSSS) is calibrated on site with a 2D pattern. Experimental results show that the RMS errors of the system for calibration and measurement are less than 0.092 mm and 0.168 mm, respectively, proving that the MSSS can capture 3D object surfaces with high efficiency and accuracy.
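
The per-stripe reconstruction behind such a sensor reduces to intersecting a camera ray with a calibrated light plane; a minimal sketch is given below (Python/NumPy; the intrinsic matrix K and the plane coefficients are assumed known from calibration).

    import numpy as np

    def stripe_point_3d(pixel, K, plane):
        # pixel: (u, v) on a detected light stripe; plane: (a, b, c, d) with
        # a*X + b*Y + c*Z + d = 0 in the camera frame (from calibration).
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        a, b, c, d = plane
        depth = -d / (a * ray[0] + b * ray[1] + c * ray[2])  # scale along the ray
        return depth * ray                                   # 3D point on the stripe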


Optics and Lasers in Engineering | 2005

Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientations

Fuqiang Zhou; Guangjun Zhang; Jie Jiang


Optics and Lasers in Engineering | 2013

Line-based camera calibration with lens distortion correction from a single image

Fuqiang Zhou; Yi Cui; He Gao; Yexin Wang


Optics and Lasers in Engineering | 2013

Accurate and robust estimation of camera parameters using RANSAC

Fuqiang Zhou; Yi Cui; Yexin Wang; Liu Liu; He Gao


Measurement | 2013

A novel way of understanding for calibrating stereo vision sensor constructed by a single camera and mirrors

Fuqiang Zhou; Yexin Wang; Bin Peng; Yi Cui

Collaboration


Dive into Fuqiang Zhou's collaboration.
