Sin-Yi Jiang
National Chiao Tung University
Publication
Featured research published by Sin-Yi Jiang.
Intelligent Robots and Systems | 2010
Chia-How Lin; Sin-Yi Jiang; Yueh-Ju Pu; Kai-Tai Song
This paper presents a vision-based obstacle avoidance design using a monocular camera onboard a mobile robot. An image processing procedure is developed to estimate distances between the robot and obstacles based on inverse perspective transformation (IPT) in the image plane. A robust image processing solution is proposed to detect and segment the navigable ground-plane area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking, to cope with variations in illumination and camera view. After IPT and ground-region segmentation, a result similar to an occupancy grid map is obtained for mobile robot obstacle avoidance and navigation. Practical experiments with a wheeled mobile robot show that the proposed imaging system successfully estimates object distances and avoids obstacles in an indoor environment.
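The distance-from-image-row idea behind IPT can be sketched as a flat-floor ray intersection for a downward-tilted pinhole camera. This is a minimal illustration, not the paper's implementation; the camera height, tilt, and intrinsics used below are assumed values.

```python
import math

def ipt_ground_distance(v, cy, fy, cam_height, tilt_rad):
    """Map an image row v (pixels) to a ground-plane distance (meters)
    for a camera tilted down by tilt_rad, assuming a flat floor.
    cy is the principal-point row, fy the vertical focal length in pixels."""
    # Angle of the viewing ray below the horizontal.
    ray_angle = tilt_rad + math.atan2(v - cy, fy)
    if ray_angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Intersect the ray with the floor plane below the camera.
    return cam_height / math.tan(ray_angle)
```

Rows lower in the image (larger v) map to shorter ground distances, which is what lets a single calibrated camera produce the occupancy-grid-like result described above.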
International Conference on Mechatronics and Automation | 2013
Sin-Yi Jiang; Kai-Tai Song
This paper presents a point-to-point pose tracking controller design for a steer-and-drive omnidirectional mobile robot. A method is proposed for smooth position and orientation tracking of the omnidirectional mobile platform based on a differential flatness approach. The method is full-state controllable and can plan a smooth trajectory for the mobile platform to cope with the limited working space of an indoor environment, exploiting the unique omnidirectional mobility of the platform. Experimental results on a four-wheeled steer-and-drive mobile platform validate the effectiveness of the proposed method.
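For a differentially flat omnidirectional platform, the flat outputs (x, y, heading) can each be driven along a smooth polynomial profile between poses. The quintic profile below is one common choice with zero velocity and acceleration at both endpoints; it is a sketch of the idea, not the controller from the paper.

```python
def quintic_profile(tau):
    """Smooth 0 -> 1 profile with zero velocity and acceleration at both ends."""
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def flat_trajectory(start, goal, t, T):
    """Interpolate each flat output (x, y, heading) of an omnidirectional
    platform along the quintic profile; t in [0, T] seconds."""
    s = quintic_profile(min(max(t / T, 0.0), 1.0))
    return tuple(a + s * (b - a) for a, b in zip(start, goal))
```

Because the platform is omnidirectional, position and orientation can be interpolated independently, which is what makes such point-to-point planning convenient in tight indoor spaces.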
International Conference on Mechatronics and Automation | 2011
Kai-Tai Song; Sin-Yi Jiang
This paper presents a novel force-cooperative guidance controller design for a walking-assistive robot, such that it can guide an elderly or handicapped person in an indoor environment. The guidance system has been implemented on a walking-helper robot (Walbot) to allow the robot to incorporate passive behaviors. A handrail is provided on Walbot to facilitate passive walking assistance using an omnidirectional mobile platform, and a laser scanner is used for environment detection and obstacle avoidance. A mass/damper model regulates the robot velocity with respect to the applied external force. In this paper, we propose a method to fuse the outputs of the compliant motion controller and the autonomous navigation controller in order to give the robotic walker safe and passive walking guidance. Force-cooperative guidance is achieved by assigning proper output-fusion weights to each controller, such that the walking-helper robot can follow the motion intent of the user while avoiding obstacles in the environment.
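The mass/damper regulation of velocity by applied force is a standard admittance model, M·dv/dt + D·v = F. A one-axis Euler-discretized sketch, with illustrative mass and damping values rather than the paper's parameters:

```python
def admittance_step(v, force, mass, damping, dt):
    """One Euler step of the mass/damper model M*dv/dt + D*v = F,
    mapping the user's applied handrail force to robot velocity."""
    dv = (force - damping * v) / mass
    return v + dv * dt
```

Under a constant push F the velocity settles at F/D, so the damping coefficient directly sets how fast the robot moves for a given effort, while the mass term smooths sudden changes in the applied force.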
IEEE Transactions on Control Systems and Technology | 2017
Sin-Yi Jiang; Chen-Yang Lin; Ko-Tung Huang; Kai-Tai Song
This brief presents a control design for a walking-assistant robot in a complex indoor environment, such that it can assist a walking-impaired person to walk while avoiding unexpected obstacles. In this design, the robot motion is the resultant of autonomous navigation and compliant motion control. The compliant motion controller allows the robot to exhibit passive behavior that follows the motion intent of the user, while autonomous guidance provides safe navigation of the robot without colliding with any obstacles. A shared-control approach is suggested to combine the passive compliant behavior and safe guidance of the robot. When a user exerts force on the robot, the mobile platform adjusts its speed in compliance with the user's movement. Meanwhile, the autonomous navigation controller is designed to provide collision-free guidance. Using the developed shared controller, the outputs of the compliant motion controller and the autonomous navigation controller are fused to generate appropriate motion for the robot. In this manner, passive behavior allows the walking-assistant robot to adapt to the user's motion intent and move in compliance with the user, while active guidance adjusts the linear velocity and direction of the robot in real time in response to environmental data received from the on-board laser scanner. The developed algorithms have been implemented on a self-constructed walking-assistant robot. Experimental results validate the proposed design and demonstrate that the robot can actively avoid unexpected obstacles while moving passively with the user to the destination.
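The fusion of the two controllers can be sketched as a convex blend whose weight shifts from the compliant (user-following) command toward the autonomous command as obstacles approach. The linear weighting and the distance thresholds below are illustrative assumptions, not the brief's actual fusion law.

```python
def fuse_commands(v_user, v_nav, obstacle_dist, d_safe=0.5, d_free=2.0):
    """Blend the compliant (user-following) velocity command v_user with the
    autonomous navigation command v_nav. The user's weight drops linearly to
    zero as the nearest obstacle distance falls from d_free to d_safe (meters)."""
    w = (obstacle_dist - d_safe) / (d_free - d_safe)
    w = min(max(w, 0.0), 1.0)  # clamp to [0, 1]
    return tuple(w * u + (1 - w) * n for u, n in zip(v_user, v_nav))
```

In open space the robot moves purely in compliance with the user; near an obstacle the autonomous controller dominates, which is the passive/active trade-off the shared controller is designed to realize.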
IEEE Transactions on Human-Machine Systems | 2016
Kai-Tai Song; Sin-Yi Jiang; Ming-Han Lin
This paper presents a shared-control-based design for teleoperation of a dual-arm omnidirectional mobile robot: a robot that can execute tasks commanded by a remote user, operate autonomously by adapting to its immediate environment, or operate in both modes simultaneously. To achieve this capability and effective remote control of the robot, a shared-control method has been developed. The controller determines the control gains for both the human and the autonomous commands by computing the user's confidence factor. For ease of use, a smartphone/tablet is used to control the robot. Using this approach, the robot allows the user to manipulate its motion from a remote site while compensating for a lack of local human knowledge. Practical experiments validate the proposed design and demonstrate the potential of the proposed shared-control method for home-service tasks.
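One way a confidence factor can be formed is from the agreement between the user's teleoperation command and the autonomous, collision-free command, for example via cosine similarity rescaled to [0, 1]. This is a hypothetical formulation for illustration; the paper's exact formula may differ.

```python
import math

def confidence_factor(user_cmd, safe_cmd):
    """Illustrative confidence factor in [0, 1]: rescaled cosine similarity
    between the user's velocity command and the autonomous command.
    1.0 means full agreement, 0.0 means directly opposed commands."""
    dot = sum(u * s for u, s in zip(user_cmd, safe_cmd))
    nu = math.hypot(*user_cmd)
    ns = math.hypot(*safe_cmd)
    if nu == 0 or ns == 0:
        return 0.0  # no meaningful direction to compare
    return 0.5 * (1.0 + dot / (nu * ns))
```

A shared controller can then scale the human gain by this factor, trusting the remote user more when their command is consistent with what the robot's local sensing deems safe.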
Conference on Automation Science and Engineering | 2014
Sin-Yi Jiang; Nelson Yen-Chung Chang; Chin-Chia Wu; Cheng-Hei Wu; Kai-Tai Song
In this paper, we investigate the performance of the KinectFusion algorithm for 3D reconstruction using a Kinect sensor. A sensor model is applied to generate depth images for evaluating the accuracy of the algorithm. To obtain ground truth for the depth images as well as the camera pose, we generate depth data based on a CAD model in a simulation program. In the error analysis, different types of noise, including depth-image noise and pose-prediction noise, are added to examine the errors of 3D reconstruction and camera localization. It is found that the KinectFusion algorithm is robust to depth-image noise, but lacks robustness against pose-prediction noise. Experiments on 3D reconstruction have been carried out on a Kuka 6-DOF robot manipulator with an eye-in-hand configuration. Experimental results validate the simulation results of the error analysis.
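Injecting depth-image noise for such an analysis is often done with a range-dependent Gaussian model, since Kinect axial noise is commonly modeled as growing quadratically with depth. A minimal sketch, where the coefficient k is an assumed value, not one taken from the paper:

```python
import random

def add_depth_noise(depth, k=0.0019, seed=None):
    """Perturb each depth value (meters) with zero-mean Gaussian noise whose
    standard deviation grows quadratically with range (sigma = k * z^2),
    a common axial-noise model for structured-light depth sensors."""
    rng = random.Random(seed)
    return [z + rng.gauss(0.0, k * z * z) for z in depth]
```

Sweeping k (and, separately, perturbing the predicted camera pose) against ground-truth CAD renders is one way to reproduce the kind of robustness comparison the paper reports.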
International Automatic Control Conference | 2014
Kai-Tai Song; Sin-Yi Jiang; Ming-Han Lin; Cheng-Hei Wu; Yi-Fu Chiu; Wei-Che Lin; Shang-Yang Wu; Chien-Yu Wu
In this paper, we propose a shared-control-based teleoperation design for a dual-arm omnidirectional mobile robot. The robot can not only adapt its motion to the environment autonomously, but also move according to the user's remote commands. To achieve this goal, a shared-control scheme for effective remote control of the mobile robot is developed. For telepresence and remote control, a user interface is designed for a smartphone/tablet. The proposed controller determines the human and autonomy control gains by computing the user's confidence factor. In this approach, the robot assists a user to manipulate its motion from a remote site by compensating for the insufficiency of human remote control. Practical experiments validate the proposed design and demonstrate that the proposed shared-control approach for a dual-arm omnidirectional mobile robot has potential use in future home-service tasks.
International Automatic Control Conference | 2013
Kai-Tai Song; Sin-Yi Jiang; Ching-Jui Wu; Ming-Han Lin; Cheng Hei Wu; Yi-Fu Chiu; Chia-How Lin; Chao-Yu Lin; Chien-Hung Liu
This paper presents a mobile manipulation and visual servoing design for a configurable mobile manipulator using a Kinect sensor. To achieve this goal, an image-based grasping design is combined with visual simultaneous localization and mapping (vSLAM). The proposed vSLAM method is based on an extended Kalman filter (EKF) and a Kinect RGB-D sensor. In the manipulator design, the robot arms can be configured as a handrail for user support when executing walking-assistance tasks, so that mobile manipulation and walking assistance can be performed on the same robot. Experiments on the prototype dual-arm mobile manipulator validate that the proposed method can find and grasp a target object after navigating to the object position. After the robot delivered the object to the user, it reconfigured itself to provide walking assistance using the suggested compliance controller.
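The EKF at the heart of such a vSLAM design alternates a motion-model prediction with a measurement update. The scalar sketch below shows only that predict/update structure with identity models; a real EKF-based vSLAM maintains a joint robot-pose and landmark state vector with full covariance matrices.

```python
def ekf_step(x, P, u, z, Q, R):
    """Minimal scalar Kalman predict/update cycle (identity motion and
    measurement models), illustrating the structure of an EKF iteration.
    Q, R are process and measurement noise variances."""
    # Predict: state moves by odometry input u, uncertainty grows by Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: fuse a direct measurement z of the state.
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correct by the innovation
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the update
    return x_new, P_new
```

Each RGB-D feature observation plays the role of z here, pulling the predicted pose back toward the map while reducing its uncertainty.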
IEEE/ASME Transactions on Mechatronics | 2017
Kai-Tai Song; Sin-Yi Jiang; Shang-Yang Wu
This paper presents a motion control design for a walking-assistant robot that combines passive-compliant behavior and active obstacle avoidance. The proposed method recognizes the intended motion of a user from gait movement and the force applied by the hand. A passive-compliant motion controller is developed that allows the user to safely walk with the robot, while the obstacle avoidance controller actively controls the velocity to guide the user in a direction that avoids collisions. A shared-control scheme combines the active obstacle avoidance and passive-compliant motion commands. The experimental results verify the effectiveness of the proposed method and show that the robot allows a user to walk safely in a complex environment. A questionnaire survey of elderly users shows that the proposed walking-assistant robot design gives satisfactory performance.
Journal of Intelligent and Robotic Systems | 2017
Kai-Tai Song; Cheng-Hei Wu; Sin-Yi Jiang
This paper presents a CAD-based six-degrees-of-freedom (6-DoF) pose estimation design for random bin picking of multiple objects. A virtual camera generates a point-cloud database for the objects using their 3D CAD models. To reduce the computational time of 3D pose estimation, a voxel grid filter reduces the number of points in the 3D cloud of each object. A voting scheme is used for object recognition and for estimating the 6-DoF poses of the different objects. An outlier filter rejects poorly matched poses so that the robot arm always picks up the uppermost object in the bin, which increases the success rate. In a computer simulation using a synthetic scene, the average recognition rate is 97.81% for three different objects with various poses. A series of experiments has been conducted to validate the proposed method using a Kuka robot arm. The average recognition rate for the three objects is 92.39% and the picking success rate is 89.67%.
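A voxel grid filter of the kind described above partitions space into cubes of a fixed edge length and replaces all points in each cube with their centroid. A minimal pure-Python sketch of that downsampling step (libraries such as PCL provide an optimized equivalent):

```python
from collections import defaultdict

def voxel_grid_filter(points, voxel_size):
    """Downsample a point cloud: group 3D points into cubic voxels of edge
    length voxel_size and return one centroid point per occupied voxel."""
    bins = defaultdict(list)
    for p in points:
        # Integer voxel index along each axis (floor division handles negatives).
        key = tuple(int(c // voxel_size) for c in p)
        bins[key].append(p)
    # Replace each voxel's points by their centroid.
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in bins.values()]
```

The voxel size trades accuracy for speed: larger voxels shrink the cloud more aggressively, which is what cuts the matching time of the subsequent voting-based pose estimation.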