
Publication


Featured research published by Shibin Yin.


Sensors | 2013

A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

Shibin Yin; Yongjie Ren; Jigui Zhu; Shourui Yang; S.H. Ye

A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system.
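
As a rough illustration of the fixed-point identification idea, the sketch below sets up a toy planar two-link arm in Python (not the paper's IRB2400 kinematic model): a simulated sensor reports the offset between the TCP and a fixed reference point, and the link-length and TCP-offset errors are identified together with the reference-point position by iterative least squares. The simplified kinematics and all parameter values are assumptions made for illustration only.

import numpy as np
from scipy.optimize import least_squares

def tool_pose(q, params):
    """Planar 2R toy arm: returns the TCP position and the tool-frame angle.
    params = [l1, l2, ty]: link lengths and a lateral TCP offset on link 2."""
    l1, l2, ty = params
    theta = q[0] + q[1]
    c, s = np.cos(theta), np.sin(theta)
    tcp = np.array([l1 * np.cos(q[0]) + l2 * c - ty * s,
                    l1 * np.sin(q[0]) + l2 * s + ty * c])
    return tcp, theta

# "True" parameters (unknown in practice) versus the nominal model.
true_params = np.array([0.705, 0.502, 0.101])
nominal_params = np.array([0.700, 0.500, 0.100])
ref_point = np.array([0.9, 0.4])   # fixed reference point (unknown to the algorithm)

# Simulated sensor data: at each joint configuration the sensor reports the
# offset of the reference point from the TCP, expressed in the tool frame.
rng = np.random.default_rng(0)
configs = [rng.uniform(0.1, 1.5, 2) for _ in range(8)]
offsets = []
for q in configs:
    tcp, theta = tool_pose(q, true_params)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    offsets.append(R.T @ (ref_point - tcp))

def residuals(x):
    """x = [3 kinematic parameter errors, 2 coordinates of the reference point]."""
    params, ref = nominal_params + x[:3], x[3:]
    res = []
    for q, d in zip(configs, offsets):
        tcp, theta = tool_pose(q, params)
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        res.append(tcp + R @ d - ref)   # fixed-point (misalignment) constraint
    return np.concatenate(res)

est = least_squares(residuals, x0=np.zeros(5))
print("identified parameter errors:", np.round(est.x[:3], 4))
print("true parameter errors:      ", np.round(true_params - nominal_params, 4))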


Optical Engineering | 2012

Calibration technology in application of robot-laser scanning system

Yongjie Ren; Shibin Yin; Jigui Zhu

A system composed of a laser sensor and a 6-DOF industrial robot is proposed to obtain complete three-dimensional (3D) information of an object's surface. A new method, suitable for the different ways of combining the laser sensor and the robot, is presented to calibrate the position and orientation between the sensor and the robot. By using a standard sphere with known radius as a reference tool, the rotation and translation matrices between the laser sensor and the robot are computed in two separate steps, so that many unstable factors introduced in conventional optimization methods can be avoided. The experimental results show that the proposed calibration method achieves an accuracy of 0.062 mm. The calibration method is also implemented in an automated robot scanning system to reconstruct a car door panel.
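
The sketch below illustrates a generic two-step sensor-to-robot calibration with a fixed reference sphere on synthetic data; it is not necessarily the paper's exact formulation. It assumes pure-translation robot moves are used first to recover the rotation by an orthogonal Procrustes fit, after which the translation (and the sphere centre) follow from a linear least-squares system.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

# Ground-truth sensor-to-flange transform (the quantity to be recovered).
R_x = Rotation.from_euler("xyz", [10, -20, 35], degrees=True).as_matrix()
t_x = np.array([0.05, -0.03, 0.12])
sphere = np.array([1.2, 0.4, 0.8])   # sphere centre in the robot base frame

def measure(R_i, t_i):
    """Sphere centre as seen in the sensor frame for a given flange pose."""
    return R_x.T @ (R_i.T @ (sphere - t_i) - t_x)

# Step 1: pure-translation moves -> rotation by an orthogonal Procrustes fit.
R0 = Rotation.from_euler("xyz", [5, 15, -10], degrees=True).as_matrix()
t_list = [np.array([0.9, 0.1, 0.5]) + rng.uniform(-0.2, 0.2, 3) for _ in range(6)]
c = [measure(R0, t) for t in t_list]
A = np.stack([c[i] - c[0] for i in range(1, 6)], axis=1)
B = np.stack([R0.T @ (t_list[0] - t_list[i]) for i in range(1, 6)], axis=1)
U, _, Vt = np.linalg.svd(A @ B.T)
R_est = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T

# Step 2: general moves -> translation (and sphere centre) by linear least squares.
rows, rhs = [], []
for _ in range(6):
    R_i = Rotation.from_euler("xyz", rng.uniform(-0.5, 0.5, 3)).as_matrix()
    t_i = np.array([0.9, 0.1, 0.5]) + rng.uniform(-0.2, 0.2, 3)
    rows.append(np.hstack([R_i, -np.eye(3)]))   # unknowns: [t_x, sphere centre]
    rhs.append(-(R_i @ R_est @ measure(R_i, t_i) + t_i))
sol, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)

print("rotation error:   ", np.linalg.norm(R_est - R_x))
print("translation error:", np.linalg.norm(sol[:3] - t_x))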


Sensors | 2017

Rapid Global Calibration Technology for Hybrid Visual Inspection System

Tao Liu; Shibin Yin; Yin Guo; Jigui Zhu

Vision-based methods for product quality inspection are playing an increasingly important role in modern industries thanks to their good performance and high efficiency. A hybrid visual inspection system, which consists of an industrial robot with a flexible sensor and several stationary sensors, has been widely applied in mass production, especially in automobile manufacturing. In this paper, a rapid global calibration method for the hybrid visual inspection system is proposed. Global calibration of the flexible sensor is performed first based on the robot kinematics. Then, with the aid of the calibrated flexible sensor, the stationary sensors are calibrated globally one by one based on homography. Only a standard sphere and an auxiliary target with a 2D planar pattern are used during the system's global calibration, and the calibration process can easily be re-performed during the system's periodic maintenance. An error compensation method is proposed for the hybrid inspection system, and the final accuracy of the hybrid system is evaluated with the deviation and correlation coefficient between the measured results of the hybrid system and a Coordinate Measuring Machine (CMM). An accuracy verification experiment shows that the deviations of over 95% of the featured points are less than ±0.3 mm, and the correlation coefficients of over 85% of the points are larger than 0.7.
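
As a minimal illustration of the homography step only (not the paper's full global-calibration pipeline), the sketch below estimates a 3x3 homography from point correspondences on a planar target with the standard DLT algorithm and checks it on synthetic data.

import numpy as np

def estimate_homography(src, dst):
    """src, dst: (N, 2) arrays of corresponding points on a plane, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# Synthetic check: a known homography is recovered from 6 correspondences.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.03, 0.95, -2.0],
                   [1e-4, -2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40], [25, 60]], float)
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true / H_true[2, 2], atol=1e-6))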


Sensors | 2018

A Novel Semi-Supervised Feature Extraction Method and Its Application in Automotive Assembly Fault Diagnosis Based on Vision Sensor Data

Xuan Zeng; Shibin Yin; Yin Guo; Jiarui Lin; Jigui Zhu

The fault diagnosis of dimensional variation plays an essential role in the production of an automotive body. However, it is difficult to identify faults from small labeled sample sets using traditional supervised learning methods. The present study proposes a novel feature extraction method, the semi-supervised complete kernel Fisher discriminant (SS-CKFDA), and introduces a new fault diagnosis flow for automotive assembly based on this method. SS-CKFDA is a combination of the traditional complete kernel Fisher discriminant (CKFDA) and semi-supervised learning. It adjusts the Fisher criterion with the global data structure extracted from a large set of unlabeled samples. When the number of labeled samples is small, the global structure present in the measured data can effectively improve the extraction of the projection vectors. The experimental results on Tennessee Eastman Process (TEP) data demonstrate that the proposed method improves diagnostic performance compared to other Fisher discriminant algorithms. Finally, the experimental results on optical coordinate data prove that the method can be applied in the automotive assembly process and achieves better performance.
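
The sketch below shows a deliberately simplified, linear (non-kernel) analogue of the semi-supervised Fisher idea: the within-class scatter from a few labeled samples is blended with the total scatter of a large unlabeled pool before the Fisher generalized eigenproblem is solved. The blending weight and this exact formulation are illustrative assumptions, not the SS-CKFDA definition from the paper.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
d = 5
mean0, mean1 = np.zeros(d), np.r_[2.0, 1.0, np.zeros(d - 2)]

# A few labeled samples per class, plus a large pool of unlabeled samples.
X0 = rng.normal(mean0, 1.0, (6, d))
X1 = rng.normal(mean1, 1.0, (6, d))
X_unlab = np.vstack([rng.normal(mean0, 1.0, (200, d)),
                     rng.normal(mean1, 1.0, (200, d))])

def scatter(X, mu):
    D = X - mu
    return D.T @ D

mu = np.vstack([X0, X1]).mean(axis=0)
S_b = len(X0) * np.outer(X0.mean(0) - mu, X0.mean(0) - mu) \
    + len(X1) * np.outer(X1.mean(0) - mu, X1.mean(0) - mu)     # between-class scatter
S_w = scatter(X0, X0.mean(0)) + scatter(X1, X1.mean(0))        # within-class scatter
S_t_unlab = scatter(X_unlab, X_unlab.mean(0)) / len(X_unlab)   # global structure

beta, eps = 0.5, 1e-6
S_reg = (1 - beta) * S_w / (len(X0) + len(X1)) + beta * S_t_unlab + eps * np.eye(d)

# Generalized eigenproblem S_b w = lambda S_reg w; keep the leading direction.
eigvals, eigvecs = eigh(S_b, S_reg)
w = eigvecs[:, -1]
print("leading discriminant direction:", np.round(w / np.linalg.norm(w), 3))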


AOPC 2015: Optical Test, Measurement, and Equipment | 2015

An accurate projector gamma correction method for phase-measuring profilometry based on direct optical power detection

Miao Liu; Shibin Yin; Shourui Yang; Zonghua Zhang

Digital projectors are frequently used to generate fringe patterns in phase-calculation-based three-dimensional (3D) imaging systems. The projector usually works together with a camera in such systems, so the intensity response of the projector should be linear in order to ensure the measurement precision, especially in Phase-Measuring Profilometry (PMP). Correction methods are often applied to cope with the non-linear intensity response of the digital projector. These methods usually rely on the camera, and a gamma function is often applied to compensate the non-linear response, so the correction performance is restricted by the dynamic range of the camera. In addition, a gamma function cannot compensate a non-monotonic intensity response. This paper proposes a gamma correction method based on precisely detecting the optical power, instead of using a plate and a camera. A photodiode with high dynamic range and a linear response is used to capture the optical power of the digital projector directly. After the real gamma curve is obtained precisely with the photodiode, a gray-level look-up table (LUT) is generated to correct the image to be projected. Finally, the proposed method is verified experimentally.
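
The sketch below illustrates the look-up-table step: given the optical power measured at every input gray level, an inverse LUT is built so that the projected power becomes linear in the requested gray level. The gamma-like synthetic response is a stand-in for the photodiode data (the paper's point is precisely that the real curve is measured rather than modelled).

import numpy as np

levels = np.arange(256)
measured_power = (levels / 255.0) ** 2.2   # stand-in for the photodiode measurements

# Target: power proportional to gray level. For each desired power, look up the
# input level whose measured power matches (inverse of the measured curve).
desired_power = np.linspace(measured_power.min(), measured_power.max(), 256)
lut = np.interp(desired_power, measured_power, levels).round().astype(np.uint8)

# Projecting lut[g] instead of g then yields an (approximately) linear response.
corrected = measured_power[lut]
print("max deviation from linear response:",
      np.abs(corrected - desired_power).max())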


Robot | 2013

Fast Recovery Technology of Tool Center Point in Robotic Visual Measurement System

Shibin Yin; Yongjie Ren; Jigui Zhu; Shenghua Ye

To recover measurement quickly after a robot collision in a robotic online visual measurement system, two simple and feasible methods are proposed to rapidly recalibrate the tool center point (TCP) of the robot. When a collision occurs in the industrial field, only a few measurements need to be performed by moving the robot; the tool coordinate frame is then recalibrated and the robot TCP is recovered accurately, which effectively avoids the complex work of re-teaching the robot's measurement trajectory. The tool coordinate frame and TCP of the robot are first defined based on the structural parameters of the light plane in the visual sensor. Then a reference-sphere-based tool coordinate calibration method and a common-point-deviation-based calibration method are presented. Simulation experiments verify that the two methods can recover the robot tool coordinate frame and TCP quickly and meet the demand of recovering measurement after a robot collision in a robotic visual measurement system.
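
As one plausible sub-step of a reference-sphere-based recovery (illustrative only, not the paper's full procedure), the sketch below fits the centre of a sphere of known radius to noisy surface points, such as those sampled by a line-laser sensor, by linear least squares.

import numpy as np

rng = np.random.default_rng(3)
radius = 25.4                                  # known reference-sphere radius (mm)
centre_true = np.array([812.0, -143.0, 655.0])

# Synthetic surface samples with measurement noise.
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = centre_true + radius * dirs + rng.normal(0, 0.02, (200, 3))

# |p - c|^2 = r^2  =>  2 p.c - (|c|^2 - r^2) = |p|^2, linear in c and k = |c|^2 - r^2.
A = np.hstack([2 * pts, -np.ones((len(pts), 1))])
b = np.sum(pts ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
centre_est, k = sol[:3], sol[3]

print("centre error (mm):  ", np.linalg.norm(centre_est - centre_true))
print("implied radius (mm):", np.sqrt(centre_est @ centre_est - k))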


Sensors | 2017

Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

Tao Liu; Yin Guo; Shourui Yang; Shibin Yin; Jigui Zhu

Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which a robot should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree-of-freedom (DOF) pose estimation technology is proposed to enable robots to grasp large-size parts at informal poses. A camera is mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moves to grasp it. To estimate the part pose, a nonlinear optimization model based on the object-space collinearity error of the camera in different poses is established, and the initial iteration value is estimated with the differential transformation. The measuring poses of the camera are optimized based on an uncertainty analysis. The principle of the robotic intelligent grasping system is also developed, with which the robot can adjust its pose to grasp the part. In experimental tests, the part poses estimated with the described method were compared with those produced by a laser tracker, and the results show that the RMS angle and position errors are about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also performed successfully in the experiments.
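
The sketch below is a reduced, single-view stand-in for the pose estimation step: it refines a part's 6-DOF pose by minimising the reprojection error of known feature points under an assumed pinhole camera, whereas the paper formulates an object-space collinearity error over several camera poses. The intrinsics, feature coordinates, and noise level are all synthetic assumptions.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 512.0],
              [0.0, 0.0, 1.0]])               # assumed camera intrinsics

# Feature points in the part's own coordinate frame (assumed known from CAD).
pts_part = np.array([[0, 0, 0], [400, 0, 0], [400, 300, 0],
                     [0, 300, 0], [200, 150, 80], [100, 250, 40]], float)

def project(rvec, tvec):
    """Pinhole projection of the part's feature points for a given pose."""
    cam = pts_part @ Rotation.from_rotvec(rvec).as_matrix().T + tvec
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:]

# Synthetic "observed" image points from a ground-truth pose, plus pixel noise.
rvec_true, tvec_true = np.array([0.2, -0.4, 0.1]), np.array([-150.0, 80.0, 2500.0])
observed = project(rvec_true, tvec_true) + np.random.default_rng(4).normal(0, 0.2, (6, 2))

def residuals(x):
    return (project(x[:3], x[3:]) - observed).ravel()

# Start from a rough initial guess (the paper derives it via a differential transformation).
x0 = np.concatenate([rvec_true + 0.05, tvec_true + 50.0])
est = least_squares(residuals, x0)
print("rotation error (rad):   ", np.linalg.norm(est.x[:3] - rvec_true))
print("translation error (mm): ", np.linalg.norm(est.x[3:] - tvec_true))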


Measurement | 2014

Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology

Shibin Yin; Yongjie Ren; Yin Guo; Jigui Zhu; Shourui Yang; S.H. Ye


Precision Engineering-journal of The International Societies for Precision Engineering and Nanotechnology | 2015

A multilevel calibration technique for an industrial robot with parallelogram mechanism

Yin Guo; Shibin Yin; Yongjie Ren; Jigui Zhu; Shourui Yang; S.H. Ye


The International Journal of Advanced Manufacturing Technology | 2014

Real-time thermal error compensation method for robotic visual inspection system

Shibin Yin; Yin Guo; Yongjie Ren; Jigui Zhu; Shourui Yang; S.H. Ye

Collaboration


Dive into Shibin Yin's collaborations.

Top Co-Authors

Shourui Yang
Tianjin University of Technology

Zonghua Zhang
Hebei University of Technology