Yongxiang Fan
University of California, Berkeley
Publications
Featured research published by Yongxiang Fan.
international conference on advanced intelligent mechatronics | 2016
Te Tang; Hsien-Chung Lin; Yu Zhao; Yongxiang Fan; Wenjie Chen; Masayoshi Tomizuka
Programming robotic assembly tasks usually requires delicate force tuning. In contrast, humans may accomplish assembly tasks in much less time and with fewer trials. It would be greatly beneficial if robots could learn the inherent human skill of force control and apply it autonomously. Recent works on Learning from Demonstration (LfD) have shown the possibility of teaching robots by human demonstration. The basic idea is to collect the force and corrective velocity that a human applies during assembly, and then use them to regress a proper gain for the robot admittance controller. However, many of the LfD methods are tested on collaborative robots with compliant joints and relatively large assembly clearance. For industrial robots, the non-backdrivable mechanism and strict tolerance requirements make the assembly tasks more challenging. This paper modifies the original LfD to be suitable for industrial robots. A new demonstration tool is designed to acquire the human demonstration data. The force control gains are learned by Gaussian Mixture Regression (GMR) and the closed-loop stability is analysed. A series of peg-hole-insertion experiments with H7/h7 tolerance on a FANUC manipulator validate the performance of the proposed learning method.
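The core of GMR is to fit a Gaussian mixture over joint (input, output) demonstration pairs and then condition on the input to regress the output. Below is a minimal illustrative sketch of that conditioning step for scalar input and output (e.g. contact force in, admittance gain out); the function name, component parameters, and scalar restriction are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gmr_predict(weights, means, covs, x):
    """Conditional mean E[y | x] for a Gaussian mixture fitted over
    joint (x, y) pairs.  Each mean is [mu_x, mu_y]; each cov is 2x2."""
    resp = []
    cond_means = []
    for w, mu, S in zip(weights, means, covs):
        # responsibility of this component for input x (Gaussian density in x)
        px = np.exp(-0.5 * (x - mu[0]) ** 2 / S[0, 0]) / np.sqrt(2 * np.pi * S[0, 0])
        resp.append(w * px)
        # conditional mean of y given x within this component
        cond_means.append(mu[1] + S[1, 0] / S[0, 0] * (x - mu[0]))
    resp = np.array(resp)
    resp /= resp.sum()
    return float(resp @ np.array(cond_means))
```

For a single component this reduces to ordinary linear-Gaussian regression; with several components the regressed gain varies smoothly across the input space, which is what lets the learned admittance controller adapt its stiffness during insertion.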
intelligent robots and systems | 2016
Hsien-Chung Lin; Yongxiang Fan; Te Tang; Masayoshi Tomizuka
In the application of physical human-robot interaction (pHRI), collaboration between human and robot can significantly improve production efficiency by combining the human's flexible intelligence with the robot's consistent performance. In this application, however, ensuring the safety of both the human and the robot is an important concern. In the human guidance programming scenario, the operator plans a collision-free path for the robot end-effector, but the robot body might collide with an obstacle while being guided by the operator. In this paper, a novel online velocity-based collision avoidance algorithm is developed to solve the problem in this particular scenario. The proposed algorithm gives an explicit solution that handles collision avoidance and the human guidance command at the same time, providing the operator with a better and safer lead-through programming experience. The real-time experiment is performed on a FANUC LR Mate 200iD/7L in three different obstacle scenarios.
european control conference | 2016
Hsien-Chung Lin; Te Tang; Yongxiang Fan; Yu Zhao; Masayoshi Tomizuka; Wenjie Chen
Industrial robots are playing increasingly important roles in factories. Many production applications require both position and force control; however, tuning the position/force controller is nontrivial. To simplify this process, learning from demonstration (LfD) has been proposed to transfer human skills directly into robot applications. However, the current teaching methods, such as direct demonstration, lead through teaching, and teleoperation, all have their own drawbacks. Hence, Remote Lead Through Teaching (RLTT) is proposed so that the robot can learn tasks from human knowledge and skill. To implement the human skill model, the demonstration data is first synchronized by dynamic time warping (DTW), then decomposed into several actions by a support vector machine (SVM) based classifier. Lastly, the learning controller is trained by Gaussian mixture regression (GMR). The experimental validation is realized on a FANUC LR Mate 200iD/7L in an H7/h7 peg-hole insertion task and a surface grinding task.
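The DTW step aligns demonstrations that were performed at different speeds before they are classified and used for training. A minimal sketch of the classic DTW recurrence for 1-D sequences is below; the paper's pipeline applies this idea to multi-dimensional demonstration signals, so this scalar version is an illustrative simplification.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences.
    D[i, j] holds the minimal accumulated cost of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: step in a, step in b, or step in both (diagonal)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW allows one sample of one sequence to match several samples of the other, two demonstrations of the same motion at different speeds align with near-zero cost, which is exactly what synchronization requires.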
international conference on advanced intelligent mechatronics | 2017
Yongxiang Fan; Liting Sun; Minghui Zheng; Wei Gao; Masayoshi Tomizuka
Dexterous manipulation has broad applications in assembly lines, warehouses and agriculture. To perform broad-scale manipulation tasks, it is desired that a multi-fingered robotic hand can robustly manipulate objects without knowing the exact object dynamics (i.e. mass and inertia) in advance. However, realizing robust manipulation is challenging due to the complex contact dynamics, the nonlinearities of the system, and the potential sliding during manipulation. In this paper, a dual-stage grasp controller is proposed to handle these challenges. In the first stage, feedback linearization is utilized to linearize the nonlinear uncertain system. Considering the structures of the uncertainties, a robust controller is designed for the linearized system to obtain the desired Cartesian force on the object. In the second stage, a manipulation controller regulates the contact force based on the Cartesian force from the first stage. The dual-stage grasp controller is able to realize robust manipulation without contact modeling, prevent slippage, and withstand 40% mass and 50% inertia uncertainties. Moreover, it does not require velocity measurement or a 3D/6D tactile sensor. Simulation results on MuJoCo verify the efficacy of the proposed method. The simulation video is available at [1].
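Feedback linearization for a mechanical system M(q)q̈ + h(q, q̇) = u cancels the nonlinear terms by choosing u = M v + h, so the closed loop reduces to the linear error dynamics picked by v. A generic computed-torque sketch of this first stage is below; the function name, gains, and PD structure of v are assumptions for illustration, not the paper's robust controller.

```python
import numpy as np

def feedback_linearize(M, h, qdd_des, q_err, qd_err, Kp=100.0, Kd=20.0):
    """Feedback-linearizing (computed-torque) law for M(q) qdd + h(q, qd) = u.
    With u = M v + h and v = qdd_des + Kd*qd_err + Kp*q_err, the tracking
    error obeys linear second-order dynamics regardless of M and h."""
    v = qdd_des + Kd * qd_err + Kp * q_err
    return M @ v + h
```

On top of this linearized plant, the paper designs a robust outer controller so that errors in the assumed mass and inertia (which enter through M and h) are tolerated rather than cancelled exactly.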
2017 IEEE Conference on Control Technology and Applications (CCTA) | 2017
Hsien-Chung Lin; Changliu Liu; Yongxiang Fan; Masayoshi Tomizuka
Safety is a fundamental issue in robotics, especially in the growing application of human-robot interaction (HRI), where collision avoidance is an important consideration. In this paper, a novel real-time velocity-based collision avoidance planner is presented to address this problem. The proposed algorithm provides a solution that handles collision avoidance and reference tracking simultaneously. An invariant safe set is introduced to exclude the dangerous states that may lead to collision, and a smoothing function is introduced to accommodate different reference commands and to preserve the invariant property of the safe set. A real-time experiment with a moving obstacle is conducted on a FANUC LR Mate 200iD/7L.
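The idea behind an invariant safe set in a velocity-based planner is that, at the boundary of the set, any velocity component driving the robot toward the obstacle is removed, so states inside the set can never leave it. A minimal point-robot sketch of that projection is below; the function name, the hard distance threshold, and the simple halfspace projection are illustrative assumptions, not the paper's exact algorithm (which also smooths the modification).

```python
import numpy as np

def safe_velocity(x, x_obs, v_ref, d_min=0.2):
    """Project the reference velocity so the robot never moves toward an
    obstacle once it is closer than d_min (invariant-set-style safety filter)."""
    d = x - x_obs
    dist = np.linalg.norm(d)
    n = d / dist                   # unit vector pointing away from the obstacle
    approach = -float(n @ v_ref)   # speed toward the obstacle (positive = closing)
    if dist <= d_min and approach > 0.0:
        # cancel only the component that drives the robot into the obstacle;
        # tangential motion (reference tracking) is preserved
        return v_ref + approach * n
    return v_ref
```

Because only the normal component is cancelled, the robot keeps tracking its reference along the obstacle surface instead of stopping, which is why the velocity-based formulation handles tracking and avoidance simultaneously.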
IFAC-PapersOnLine | 2017
Yongxiang Fan; Wei Gao; Wenjie Chen; Masayoshi Tomizuka
arXiv: Robotics | 2018
Yongxiang Fan; Hsien-Chung Lin; Te Tang; Masayoshi Tomizuka
arXiv: Robotics | 2018
Yongxiang Fan; Hsien-Chung Lin; Te Tang; Masayoshi Tomizuka
arXiv: Robotics | 2018
Yongxiang Fan; Te Tang; Hsien-Chung Lin; Masayoshi Tomizuka
arXiv: Artificial Intelligence | 2018
Yongxiang Fan; Jieliang Luo; Masayoshi Tomizuka