Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haoxiang Lang is active.

Publication


Featured research published by Haoxiang Lang.


IEEE/ASME Transactions on Mechatronics | 2010

A Hybrid Visual Servo Controller for Robust Grasping by Wheeled Mobile Robots

Ying Wang; Haoxiang Lang; C.W. de Silva

This paper develops a robust vision-based mobile manipulation system for wheeled mobile robots (WMRs). In particular, it addresses the retention of visual features in the camera's field of view, an important robustness issue in visual servoing. First, the classical approach of image-based visual servoing (IBVS) for fixed-base manipulators is extended to WMRs, and a control law with Lyapunov stability is derived. Second, to guarantee the visibility of visual features, an innovative controller based on Q-learning is proposed, which can learn its behavior policy and autonomously improve its performance. Third, a hybrid controller for robust mobile manipulation is developed that integrates the IBVS controller and the Q-learning controller through a rule-based arbitrator. This is thought to be the first paper to integrate reinforcement learning (Q-learning) with visual servoing to achieve robust operation. Experiments are carried out to validate the developed approaches. The experimental results show that the new hybrid controller possesses self-learning capability and fast response, and provides a balanced performance with respect to robustness and accuracy.
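To make the hybrid idea concrete, here is a minimal Python sketch (not the authors' code; the state discretization, action set, and learning parameters are all assumed) of a one-step Q-learning update together with a rule-based arbitrator that hands control to the learned policy whenever the tracked feature drifts toward the image border:

```python
import random

Q = {}  # Q-table: (state, action) -> value
ACTIONS = ["turn_left", "turn_right", "move_forward"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters

def q_update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_action(state):
    """Epsilon-greedy behavior policy that the controller improves online."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def arbitrate(u, v, width, height, margin=40):
    """Rule-based arbitrator: switch to the learned policy when the
    tracked feature (u, v) nears the image border, else run IBVS."""
    near_border = (u < margin or u > width - margin or
                   v < margin or v > height - margin)
    return "q_learning" if near_border else "ibvs"
```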


International Conference on Automation and Logistics | 2008

Mobile robot localization and object pose estimation using optical encoder, vision and laser sensors

Haoxiang Lang; Ying Wang; C.W. de Silva

A key problem for a mobile robot is how to localize itself and detect objects in its workspace. In this paper, a multi-sensor method for robot localization and object pose estimation is presented. First, optical encoders and an odometry model are used to determine the pose of the mobile robot in the workspace with respect to the global coordinate system. Next, a CCD camera is used as a passive sensor to find an object (a box) in the environment, including the specific vertical surfaces of the box. By identifying and tracking color blobs attached to the center of each vertical surface of the box, the robot rotates and adjusts its base pose to bring the color blob to the center of the camera view, ensuring that the box is within range of the laser scanner. Finally, a laser range finder mounted on top of the mobile robot is activated to measure the distances and angles between the laser source and the laser contact surface on the box. Based on the information acquired in this manner, the global poses of the robot and the box can be represented using homogeneous transformation matrices. The approach is validated in the Microsoft Robotics Studio simulation environment.
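As a minimal sketch of the final step (with illustrative numbers, not the paper's data), the following snippet composes the robot's odometry pose with the laser-derived relative pose of the box using 2D homogeneous transformation matrices:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Robot pose in the global frame, from the encoder/odometry model.
T_world_robot = se2(1.2, 0.5, np.deg2rad(30))

# Box pose relative to the robot: range r and bearing b from the laser
# scanner, plus an orientation inferred from the scanned surface.
r, b = 0.8, np.deg2rad(-10)
box_yaw = np.deg2rad(5)
T_robot_box = se2(r * np.cos(b), r * np.sin(b), box_yaw)

# Composing the transforms yields the box pose in the global frame.
T_world_box = T_world_robot @ T_robot_box
print(T_world_box[:2, 2])  # global (x, y) position of the box
```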


Robotics and Autonomous Systems | 2014

A modified image-based visual servo controller with hybrid camera configuration for robust robotic grasping

Ying Wang; Guan-Lu Zhang; Haoxiang Lang; Bashan Zuo; Clarence W. de Silva

To develop an autonomous mobile manipulation system that works in an unstructured environment, a modified image-based visual servo (IBVS) controller using a hybrid camera configuration is proposed in this paper. In particular, an eye-in-hand web camera is employed to visually track the target object, while a stereo camera measures the depth information online. A modified image-based controller is developed to utilize the information from the two cameras. In addition, a rule base is integrated into the visual servo controller to adaptively tune its gain based on the image deviation data, thereby improving the response speed of the controller. A physical mobile manipulation system is developed and the IBVS controller is implemented on it. The experimental results obtained with this system validate the developed approach.
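A minimal sketch of the rule-based gain adaptation idea, with entirely assumed thresholds and gain values: use a large servo gain when the image error is large, and a small one near convergence to avoid overshoot.

```python
def adaptive_gain(pixel_error_norm):
    """Map image-error magnitude to an IBVS gain (illustrative rule base)."""
    if pixel_error_norm > 100.0:   # far from the goal: respond quickly
        return 1.5
    elif pixel_error_norm > 30.0:  # mid-range: moderate gain
        return 0.8
    else:                          # near convergence: small, stable gain
        return 0.3
```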


International Conference on Control and Automation | 2010

Vision based object identification and tracking for mobile robot visual servo control

Haoxiang Lang; Ying Wang; Clarence W. de Silva

A key problem of an image-based visual servo (IBVS) system is how to identify and track objects in a series of images. In this paper, a scale-invariant image feature detector and descriptor, the Scale-Invariant Feature Transform (SIFT), is utilized to achieve object tracking that is robust to rotation, scaling, and changes of illumination. To the best of our knowledge, this paper represents the first work to apply the SIFT algorithm to visual servoing for robust mobile robot tracking. First, the SIFT method is used to generate the feature points of an object template, and a series of images is acquired while the robot is moving. Second, a feature matching method is applied to match the features between the template and the images. Finally, based on the locations of the matched feature points, the location of the object in the camera image is approximated. This object identification and tracking algorithm is applied in an IBVS system to provide the location of the object in the feedback loop. In particular, the IBVS controller determines the desired wheel speeds ω_1 and ω_2 of a wheeled mobile robot and commands the robot's low-level controller accordingly. The IBVS controller then drives the robot toward a target object until the object reaches the desired location in the image. The IBVS system is implemented and tested on a mobile robot with an on-board camera in our laboratory. The results demonstrate satisfactory performance of the object identification and tracking algorithm. Furthermore, a MATLAB simulation confirms the stability and convergence of the IBVS controller.
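For illustration, the following sketch shows SIFT-based object localization in the spirit of the paper, using OpenCV (cv2.SIFT_create requires OpenCV 4.4 or later); the file names and the ratio-test threshold are assumptions:

```python
import cv2
import numpy as np

# Placeholder file names for the object template and a camera frame.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Keep matches that pass Lowe's ratio test (0.75 is a common choice).
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des_t, des_f, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Approximate the object location as the centroid of matched keypoints;
# this image coordinate feeds the IBVS feedback loop.
if good:
    pts = np.float32([kp_f[m.trainIdx].pt for m in good])
    u, v = pts.mean(axis=0)
    print(f"object at image coordinates ({u:.1f}, {v:.1f})")
```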


International Journal of Information Acquisition | 2008

Fault Diagnosis of an Industrial Machine through Sensor Fusion

Haoxiang Lang; Clarence W. de Silva

In this paper, a four-layer neuro-fuzzy architecture for multi-sensor fusion is developed for a fault diagnosis system applied to an industrial fish cutting machine. An important characteristic of the fault diagnosis approach developed in this paper is that it makes an accurate decision about the machine condition by fusing information acquired from three types of sensors: accelerometer, microphone, and charge-coupled device (CCD) camera. Feature vectors for the vibration and sound signals are defined and extracted from their fast Fourier transform (FFT) frequency spectra. A feature-based vision method is applied for object tracking in the machine, to detect and track the fish moving on the conveyor. A four-layer neural network including a fuzzy hidden layer is developed to analyze and diagnose existing faults. Feature vectors of vibration, sound, and vision are provided as inputs to the neuro-fuzzy network for fault detection and diagnosis. By properly training the neural network using data samples for typical faults, six crucial faults in the fish cutting machine are detected with high reliability and robustness. On this basis, not only can the condition of the machine be determined for possible retuning and maintenance, but alarms warning of impending faults can also be generated during machine operation.
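As a minimal sketch of the spectral feature extraction step (the band layout and signal here are assumed, not the paper's), a vibration or sound signal can be summarized by the energy in a few bands of its FFT spectrum:

```python
import numpy as np

def fft_band_features(signal, n_bands=8):
    """Return per-band spectral energies as a feature vector."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.sum() for band in bands])

# Example: a synthetic 1 kHz accelerometer trace with a 50 Hz component.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
vibration = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
features = fft_band_features(vibration)  # input to the neuro-fuzzy network
```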


International Conference on Automation and Logistics | 2008

Visual servo control and parameter calibration for mobile multi-robot cooperative assembly tasks

Ying Wang; Haoxiang Lang; C.W. de Silva

In recent years, visual servo control has become a popular research topic in robotics. Usually, it is applied to fixed-base robotic manipulators working in a structured industrial environment. This paper focuses on developing a visual servo control system for a mobile robot operating in an unstructured environment. First, the system hardware and control objective are introduced. Second, the system kinematic model and the coordinate transformation issues are studied. Finally, a visual servo controller is derived based on the Lyapunov method, which guarantees asymptotic stability of the closed-loop system. In addition, the camera calibration of the visual servo control system is discussed. By manually measuring sample points in the real world and their corresponding pixel coordinates in the image, the extrinsic and intrinsic camera parameters are identified and calibrated. The developed mobile visual servo control system is intended for use in multi-robot cooperative assembly tasks.
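The calibration step can be illustrated with a standard direct linear transform (DLT), which may differ from the paper's exact procedure; the sketch below estimates a 3x4 projection matrix from at least six manually measured world-to-pixel correspondences:

```python
import numpy as np

def dlt_projection_matrix(world_pts, pixel_pts):
    """Estimate the 3x4 camera projection matrix P from correspondences
    ((X, Y, Z), (u, v)); at least six non-degenerate points are needed."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    # The solution is the right singular vector of A with the smallest
    # singular value; P encodes both intrinsic and extrinsic parameters.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```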


Unmanned Systems | 2013

Developments in Visual Servoing for Mobile Manipulation

Haoxiang Lang; Muhammad Tahir Khan; K.K. Tan; Clarence W. de Silva

A new trend in mobile robotics is to integrate visual information into feedback control to facilitate autonomous grasping and manipulation. The result is a visual servo system, which is quite beneficial in autonomous mobile manipulation. Owing to its mobility, it has wider applicability than traditional visual servoing with fixed-base manipulators. In this paper, the state of the art of vision-guided robotic applications is presented along with the associated hardware. Next, two classical approaches to visual servoing, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are reviewed, and their advantages and drawbacks when applied to a mobile manipulation system are discussed. A general concept of modeling a visual servo system is demonstrated, and some challenges in developing visual servo systems are discussed. Finally, a practical mobile manipulation system developed for search-and-rescue and homecare robotics applications is introduced.
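As a worked example of the modeling concept, here is a minimal sketch of the classical IBVS control law for point features, v = -λ L⁺ e, where L is the interaction matrix and e is the image error; feature coordinates are assumed to be normalized:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """Stack per-feature Jacobians and errors, then return the 6-DOF
    camera velocity command v = -lam * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```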


Systems, Man and Cybernetics | 2014

Vision based robotic grasping with a hybrid camera configuration

Ying Wang; Bashan Zuo; Haoxiang Lang

Vision-based robotic grasping systems have significant applications in the areas of space exploration and service robots. In this paper, an image-based visual servoing controller with a hybrid camera configuration is proposed. In particular, an eye-in-hand camera is employed to track the position of the target object, while a stereo camera is utilized to obtain the depth information of the object. A physical vision-based mobile manipulation system is developed to validate the proposed approach. The experimental results show that the proposed controller effectively enables the robot to grasp the target object autonomously.
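A minimal sketch of how the stereo camera in such a hybrid configuration supplies depth (assuming a rectified image pair; the numbers are illustrative): depth follows from focal length, baseline, and disparity.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth (m) of a point from its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 35 px -> Z = 2.4 m
Z = stereo_depth(700.0, 0.12, 35.0)
```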


International Conference on Control and Automation | 2010

An autonomous mobile grasping system using visual servoing and nonlinear model predictive control

Ying Wang; Haoxiang Lang; Clarence W. de Silva

This paper presents an autonomous grasping system using visual servoing and mobile robots. While this kind of system has many potentially significant applications, several key challenges, such as localization accuracy, visibility and velocity constraints, and obstacle avoidance, have prevented its implementation. The main contribution of this paper is the development of an adaptive nonlinear model predictive controller (NMPC) that meets all these challenges in a single controller. In particular, the model of the vision-based mobile grasping system is first derived. Then, based on the model, a nonlinear predictive control strategy with vision feedback is proposed to address optimal control and constraints simultaneously. Unlike other work in this field, an adaptive mechanism is proposed to update the model online so that it can track the nonlinear time-varying plant in real time, thereby improving performance. To the best of our knowledge, this is the first work to apply model predictive control to mobile visual servoing while considering various constraints at the same time. The approach was validated with two physical experiments, which showed that the system with the new control strategy successfully carried out an autonomous mobile grasping task in a complex environment.
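To illustrate the control strategy, here is a minimal sketch of nonlinear MPC for a unicycle-type robot approaching a grasp target; it is not the paper's formulation, and the horizon, weights, dynamics, and the use of scipy.optimize.minimize are all assumptions. Velocity constraints enter as bounds on the decision variables:

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 10          # sample time and prediction horizon (assumed)
V_MAX, W_MAX = 0.5, 1.0  # linear/angular velocity constraints (assumed)

def rollout(state, controls):
    """Propagate unicycle kinematics over the horizon."""
    x, y, th = state
    traj = []
    for v, w in controls.reshape(N, 2):
        x += DT * v * np.cos(th)
        y += DT * v * np.sin(th)
        th += DT * w
        traj.append((x, y))
    return np.array(traj)

def cost(controls, state, target):
    traj = rollout(state, controls)
    tracking = np.sum((traj - target) ** 2)  # distance-to-target cost
    effort = 0.01 * np.sum(controls ** 2)    # control-effort penalty
    return tracking + effort

state = np.array([0.0, 0.0, 0.0])  # robot pose (x, y, heading)
target = np.array([1.0, 0.5])      # grasp target position
bounds = [(-V_MAX, V_MAX), (-W_MAX, W_MAX)] * N
res = minimize(cost, np.zeros(2 * N), args=(state, target), bounds=bounds)
v_cmd, w_cmd = res.x[:2]  # apply the first control, then re-solve (receding horizon)
```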


Control and Intelligent Systems | 2012

Vision-Based Grasping Using Mobile Robots and Nonlinear Model Predictive Control

Ying Wang; Haoxiang Lang; Han Lin; Clarence W. de Silva

This paper presents an autonomous grasping system using visual servoing and mobile robots. While this kind of system has many potentially significant applications, several key challenges, such as localization accuracy, visibility and velocity constraints, and obstacle avoidance, have prevented its implementation. The main contribution of this paper is the development of an adaptive nonlinear model predictive controller that meets all these challenges in a single controller. In particular, the model of the vision-based mobile grasping system is first derived. Then, based on the model, a nonlinear predictive control strategy with vision feedback is proposed to address optimal control and constraints simultaneously. Unlike other work in this field, an adaptive mechanism is proposed to update the model online so that it can track the nonlinear time-varying plant in real time, thereby improving performance. To the best of our knowledge, this is the first work to apply model predictive control to mobile visual servoing while considering various constraints at the same time. The approach is validated with two physical experiments. It is shown that the system with the new control strategy successfully carries out an autonomous mobile grasping task in a complex environment.
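The online model update could, for instance, use recursive least squares; this is an assumed stand-in for the paper's adaptive mechanism, sketched below with a forgetting factor so the estimate can track a time-varying plant:

```python
import numpy as np

class RLS:
    """Recursive least squares with a forgetting factor, so the parameter
    estimate can follow a slowly time-varying plant."""
    def __init__(self, n, forgetting=0.98):
        self.theta = np.zeros(n)      # model parameters
        self.P = np.eye(n) * 1000.0   # parameter covariance
        self.lam = forgetting

    def update(self, phi, y):
        """One update step from regressor phi and measured output y."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta
```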

Collaboration


Dive into Haoxiang Lang's collaboration.

Top Co-Authors

Ying Wang
University of British Columbia

Clarence W. de Silva
University of British Columbia

Muhammad Tahir Khan
University of British Columbia

Bashan Zuo
Southern Polytechnic State University

K.K. Tan
National University of Singapore

Guan-Lu Zhang
University of British Columbia