Publication


Featured research published by Youfu Li.


IEEE Transactions on Image Processing | 2008

Vision Processing for Realtime 3-D Data Acquisition Based on Coded Structured Light

Shengyong Chen; Youfu Li; Jianwei Zhang

Structured light vision systems have been used successfully for accurate measurement of 3D surfaces in computer vision. However, their applications have so far been limited largely to stationary objects, since tens of images must be captured to recover a single 3D scene. This paper presents a method for real-time acquisition of 3D surface data by a specially coded vision system. To achieve 3D measurement of a dynamic scene, data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix that improves reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. Unique indexing of the light codes is crucial here: each light grid must be uniquely identifiable from its local neighborhood, so that 3D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe this vision processing method for fast 3D data acquisition, and experimental results are provided to analyze the efficiency of the proposed methods.
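The defining requirement above, that each light grid be identifiable from its local neighborhood alone, is the property of a De Bruijn sequence. Below is a minimal sketch of that idea, as a 1-D stand-in for the paper's 2-D color matrix (the paper's exact code sequence and state transitions are not reproduced; the six-color palette and window size are assumptions):

```python
# Build a color code in which every window of WINDOW consecutive stripe colors
# appears at most once, so three neighboring stripes identify their absolute
# position in the projected pattern from a single image.
COLORS = ["R", "G", "B", "C", "M", "Y"]  # assumed projector palette
WINDOW = 3                               # neighborhood size used for indexing

def debruijn(alphabet, n):
    """Standard De Bruijn sequence: every length-n word over the alphabet
    appears exactly once in the cyclic sequence."""
    k = len(alphabet)
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return [alphabet[i] for i in seq]

code = debruijn(COLORS, WINDOW)
# Lookup table: a window of three observed stripe colors -> stripe index.
index = {tuple(code[i:i + WINDOW]): i for i in range(len(code) - WINDOW + 1)}
print(len(code), "stripes,", len(index), "unique windows")  # 216 stripes
```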


The International Journal of Robotics Research | 2011

Active vision in robotic systems: A survey of recent developments

Shengyong Chen; Youfu Li; Ngai Ming Kwok

In this paper we provide a broad survey of developments in active vision in robotic applications over the last 15 years. With increasing demand for robotic automation, research in this area has received much attention. Among the many factors that contribute to a high-performance robotic system, the planned sensing or acquisition of perceptions of the operating environment is a crucial component. The aim of sensor planning is to determine the pose and settings of vision sensors for undertaking a vision-based task, which usually requires obtaining multiple views of the object to be manipulated. Planning for robot vision is a complex problem for an active system due to its sensing uncertainty and environmental uncertainty. This paper describes such problems arising in many applications, e.g. object recognition and modeling, site reconstruction and inspection, surveillance, tracking and search, as well as robotic manipulation and assembly, localization and mapping, and navigation and exploration. Many solutions and methods have been proposed to address these problems; this review summarizes them so that readers can easily find solution methods for practical applications. Representative contributions, their evaluations and analyses, and future research trends are also discussed at an abstract level.


IEEE Transactions on Systems, Man, and Cybernetics | 2004

Automatic sensor placement for model-based robot vision

Shengyong Chen; Youfu Li

This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest, allowing multiple three-dimensional (3-D) images to be taken from different viewpoints. The task involves determining the optimal sensor placements and a shortest path through these viewpoints. During sensor planning, object features are resampled as individual points with surface normals attached. The optimal sensor placement graph is computed by a genetic algorithm in which a min-max criterion is used for the evaluation, and a shortest path through the viewpoints is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan; it includes functions such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and distribution, viewpoint evolution, shortest-path computation, scene simulation from a specific viewpoint, and parameter adjustment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
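As a hedged sketch of the two ingredients named above, the snippet below scores a candidate viewpoint set with a min-max criterion (the plan is only as good as its worst-covered feature) and orders the chosen viewpoints with a nearest-neighbor tour, used here as a simple stand-in for the Christofides algorithm. The genetic algorithm that would search over viewpoint sets is omitted, and the visibility scores and positions are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_view, n_feat = 8, 50
quality = rng.random((n_view, n_feat))  # assumed: quality[v, f] = how well view v sees feature f
pos = rng.random((n_view, 3))           # assumed viewpoint positions

def minmax_fitness(selected):
    # Credit each feature with its best observing viewpoint, then take the
    # worst-covered feature: the min-max criterion a GA would maximize.
    return quality[selected].max(axis=0).min()

def nn_tour(selected):
    # Greedy nearest-neighbor tour through the selected viewpoints.
    tour, rest = [selected[0]], list(selected[1:])
    while rest:
        last = pos[tour[-1]]
        rest.sort(key=lambda v: np.linalg.norm(pos[v] - last))
        tour.append(rest.pop(0))
    return tour

plan = [0, 2, 5, 7]
print(round(float(minmax_fitness(plan)), 3), nn_tour(plan))
```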


International Conference on Robotics and Automation | 2003

Automatic recalibration of an active structured light vision system

Youfu Li; Shengyong Chen

A structured light vision system using pattern projection is useful for robust reconstruction of three-dimensional objects. One of the major tasks in using such a system is the calibration of the sensing system. This paper presents a new method by which a two-degree-of-freedom structured light system can be automatically recalibrated whenever the relative pose between the camera and the projector changes. A distinct advantage of this method is that it requires neither an accurately designed calibration device nor prior knowledge of the motion of the camera or the scene. Several important cues for self-recalibration are explored. A sensitivity analysis shows that high accuracy in depth can be achieved with this calibration method. Experimental results are presented to demonstrate the calibration technique.


IEEE Transactions on Industrial Electronics | 2009

Ceiling-Based Visual Positioning for an Indoor Mobile Robot With Monocular Vision

De Xu; Liwei Han; Min Tan; Youfu Li

A regular ceiling is common in many offices, and its plentiful parallel lines and corner points can serve as features for visual positioning of an indoor mobile robot. Based on these natural ceiling features, a new visual positioning method is proposed. A camera is mounted on top of the mobile robot and pointed at the ceiling. At the beginning of visual positioning, the initial orientation and position of the mobile robot in the world frame are estimated from a specified block on the ceiling via a perspective-n-point (PnP) based positioning method. As the mobile robot moves, its global orientation is calculated from the main and secondary line features when the ceiling has parallel lines; in other cases, its global orientation is estimated from point features on the ceiling. Its position is then recursively computed from the point features. Error analysis and experiments verify the effectiveness of the proposed method.
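A minimal sketch of the initialization step using OpenCV's PnP solver: the world coordinates of a known ceiling block's corners and their detected pixel positions yield the robot's pose. The block size, ceiling height, intrinsics, and pixel coordinates below are made-up placeholders, not values from the paper:

```python
import numpy as np
import cv2

# Corners of an assumed 0.6 m x 0.4 m ceiling block, world frame, ceiling at z = 2.5 m.
obj = np.array([[0.0, 0.0, 2.5], [0.6, 0.0, 2.5],
                [0.6, 0.4, 2.5], [0.0, 0.4, 2.5]], dtype=np.float32)
img = np.array([[310, 220], [420, 224], [416, 300], [306, 296]], dtype=np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)    # world -> camera transform
R, _ = cv2.Rodrigues(rvec)
cam_pos = (-R.T @ tvec).ravel()                     # camera position in the world frame
Rcw = R.T                                           # camera-to-world rotation
yaw = np.degrees(np.arctan2(Rcw[1, 0], Rcw[0, 0]))  # illustrative heading estimate
print("position:", cam_pos, "heading (deg):", yaw)
```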


Pattern Recognition | 2012

Robust visual tracking with structured sparse representation appearance model

Tianxiang Bai; Youfu Li

In this paper, we present a structured sparse representation appearance model for tracking an object in video. The idea behind our method is to model the appearance of an object as a sparse linear combination over a structured union of subspaces in a basis library, which consists of a learned Eigen template set and a partitioned occlusion template set. This structured sparse representation framework fits the practical visual tracking problem well because it takes the contiguous spatial distribution of occlusion into account. To achieve a sparse solution and reduce the computational cost, Block Orthogonal Matching Pursuit (BOMP) is adopted to solve the structured sparse representation problem. Furthermore, to update the Eigen templates over time, an incremental Principal Component Analysis (PCA) based learning scheme is applied to adapt to the varying appearance of the target online. We then build a probabilistic observation model based on the approximation error between the recovered image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the advantages of the proposed algorithm over other state-of-the-art approaches.
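As a minimal sketch of the BOMP step (synthetic data, not the paper's Eigen and occlusion template library), the snippet below greedily selects whole blocks of dictionary atoms by residual correlation energy and refits the coefficients by least squares:

```python
import numpy as np

def bomp(D, y, blocks, n_blocks):
    """D: dictionary (d x n); blocks: list of column-index arrays; pick n_blocks blocks."""
    residual, chosen = y.copy(), []
    for _ in range(n_blocks):
        # Block selection: correlation energy of the residual with each block.
        energies = [np.linalg.norm(D[:, b].T @ residual) for b in blocks]
        chosen.append(int(np.argmax(energies)))
        cols = np.concatenate([blocks[i] for i in chosen])
        x, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)  # refit on all chosen blocks
        residual = y - D[:, cols] @ x
    return chosen, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 40))
blocks = [np.arange(i, i + 8) for i in range(0, 40, 8)]  # five blocks of eight atoms
y = D[:, blocks[2]] @ rng.standard_normal(8)             # signal generated by block 2
print(bomp(D, y, blocks, 1)[0])                          # the generating block is selected
```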


IEEE Transactions on Instrumentation and Measurement | 2010

Measurement and Defect Detection of the Weld Bead Based on Online Vision Inspection

Yuan Li; Youfu Li; Qing Lin Wang; De Xu; Min Tan

Weld bead inspection is important for high-quality welding. This paper summarizes our work on weld bead profile measurement, monitoring, and defect detection using a structured light based vision inspection system. The configuration of the sensor is described and analyzed; with this configuration, the system can be calibrated easily. The image processing and extraction algorithms for laser profiles and feature points are presented. The dimensional parameters of the weld bead are measured, and weld defects are detected during multilayer welding processes. Experiments using the vision inspection system were conducted with satisfactory results for online inspection.
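One step such a system needs is extracting the laser profile from each image. The sketch below uses a common subpixel approach (assumed here, not taken from the paper): in every image column, the stripe row is the intensity-weighted centroid around the brightest pixel:

```python
import numpy as np

def extract_profile(gray, half=3, min_peak=50):
    """gray: 2-D uint8 image with a roughly horizontal laser stripe.
    Returns the subpixel stripe row for each image column (NaN if absent)."""
    rows, h = [], gray.shape[0]
    for col in gray.T.astype(float):
        r = int(np.argmax(col))
        if col[r] < min_peak:                 # no stripe in this column
            rows.append(np.nan)
            continue
        lo, hi = max(r - half, 0), min(r + half + 1, h)
        w = col[lo:hi]
        rows.append(float((w * np.arange(lo, hi)).sum() / w.sum()))
    return np.array(rows)

img = np.zeros((120, 160), dtype=np.uint8)
img[60:63, :] = 200                           # synthetic stripe across rows 60-62
print(extract_profile(img)[:5])               # ~61.0 for every column
```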


IEEE Transactions on Instrumentation and Measurement | 1998

On the dynamic behavior of a force/torque sensor for robots

Youfu Li; X.B. Chen

Over the years, many kinds of wrist force sensors have been designed and developed. However, the dynamic behavior of such sensors has rarely been investigated, owing to the complexity involved. To provide designers and users with insight into the dynamic performance of a designed sensor, this paper presents a study of the sensor's dynamic behavior. First, the dynamic behavior of the sensor's typical sensing elements is analyzed. Then a dynamic model of a wrist force sensor is developed, and the dynamic behavior of the sensor is studied taking into account the effects of the robot's end effector. Simulations and experiments are carried out to demonstrate the effectiveness of the dynamic model and to highlight the effects of the robot's end effector on the dynamic behavior of the sensor.
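As a rough illustration of why the end effector matters, consider a generic lumped second-order model of a sensing element (made-up parameters, not the paper's model): adding end-effector mass lowers both the natural frequency and the damping ratio, so the loaded sensor rings longer and at a lower frequency:

```python
import numpy as np

k, c = 2.0e6, 40.0           # assumed elastic-element stiffness (N/m) and damping (N*s/m)
m_sensor, m_ee = 0.05, 0.30  # assumed sensing-element and end-effector masses (kg)

for label, m in [("unloaded", m_sensor), ("with end effector", m_sensor + m_ee)]:
    wn = np.sqrt(k / m)                # natural frequency (rad/s)
    zeta = c / (2.0 * np.sqrt(k * m))  # damping ratio
    print(f"{label:18s} f_n = {wn / (2 * np.pi):7.1f} Hz, zeta = {zeta:.3f}")
```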


IEEE Transactions on Industrial Informatics | 2012

A Hierarchical Model Incorporating Segmented Regions and Pixel Descriptors for Video Background Subtraction

Shengyong Chen; Jianhua Zhang; Youfu Li; Jianwei Zhang

Background subtraction is important for detecting moving objects in videos. There are currently many approaches to background subtraction, but they usually neglect the fact that background images consist of different objects whose conditions may change frequently. In this paper, a novel hierarchical background model based on segmented background images is proposed. It first segments the background images into several regions with the mean-shift algorithm. Then a hierarchical model, consisting of region models and pixel models, is created. The region model is a kind of approximate Gaussian mixture model extracted from the histogram of a specific region. The pixel model is based on the co-occurrence of image variations, described by histograms of oriented gradients of the pixels in each region. Benefiting from the background segmentation, the region models and pixel models corresponding to different regions can be given different parameters, and the pixel descriptors are calculated only from neighboring pixels belonging to the same object. Experiments on a video database covering both static and dynamic scenes, with comparisons against several well-known background subtraction methods, demonstrate the effectiveness of the approach.
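A hedged sketch of the region level of such a hierarchy, on synthetic data: each segmented region keeps its own statistics, so a dynamic region (e.g. flickering or waving content) gets a wider foreground gate than a stable one. Mean-shift segmentation and the HOG-based pixel level from the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, T = 20, 20, 30
labels = (np.indices((H, W)).sum(0) // 10) % 2     # two fake regions: 0 stable, 1 dynamic
noise = np.where(labels == 0, 1.0, 8.0)            # the dynamic region fluctuates more
bg = 100 + rng.standard_normal((T, H, W)) * noise  # background training frames

mean = bg.mean(axis=0)
gate = np.zeros((H, W))
for r in np.unique(labels):                        # one threshold per region
    m = labels == r
    gate[m] = 3.0 * bg[:, m].std()

frame = mean.copy()
frame[5:8, 5:8] += 40                              # a small moving object
foreground = np.abs(frame - mean) > gate
print("foreground pixels:", int(foreground.sum())) # the 3x3 object (9 pixels)
```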


The International Journal of Robotics Research | 1999

On the Dynamic Stability of Grasping

Caihua Xiong; Youfu Li; Han Ding; Youlun Xiong

Stability is one of the important properties that a robot hand grasp must possess to perform tasks similar to those performed by human hands. This paper discusses the dynamic stability of a grasped object. To analyze the stability of grasps, we model the dynamics of the grasped object in response to small perturbations. We then determine the conditions associated with dynamic stability and discuss the effects of various factors on grasp stability. A quantitative measure for evaluating grasps is presented, and the effectiveness of the proposed theory is verified through examples.
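A minimal sketch of one standard way to test this kind of dynamic stability (illustrative matrices, not the paper's grasp model): linearize the grasped object's perturbation dynamics as M x'' + C x' + K x = 0 and check that every eigenvalue of the first-order state matrix has a negative real part:

```python
import numpy as np

M = np.diag([1.0, 1.0])                     # inertia of the grasped object
C = np.array([[0.8, 0.1], [0.1, 0.8]])      # damping from the contacts
K = np.array([[50.0, -5.0], [-5.0, 40.0]])  # effective grasp stiffness

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])      # state matrix of x' = A x
eig = np.linalg.eigvals(A)
print("dynamically stable:", bool(np.all(eig.real < 0)))
```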

Collaboration


Dive into Youfu Li's collaborations.

Top Co-Authors

Shengyong Chen

Zhejiang University of Technology


Tianxiang Bai

City University of Hong Kong


Wanliang Wang

Zhejiang University of Technology


Zhanpeng Shao

Zhejiang University of Technology


De Xu

Chinese Academy of Sciences


Min Tan

Chinese Academy of Sciences


Yao Guo

City University of Hong Kong
