Chyi-Yeu Lin
National Taiwan University of Science and Technology
Publications
Featured research published by Chyi-Yeu Lin.
Robotics and Biomimetics | 2009
Chyi-Yeu Lin; Edwin Setiawan
The goal of this research is to recognize an object and its orientation in space using a stereo camera. The object orientation recognition in this paper is based on the Scale Invariant Feature Transform (SIFT) and the Support Vector Machine (SVM). SIFT has been successfully applied to object recognition, but it has difficulty recognizing object orientation. Many autonomous robotics applications, such as using a vision-guided industrial robot to grab a product, require not only correct object recognition but also recognition of the object's orientation. In this paper we use an SVM to recognize object orientation; SVMs are known for their classification accuracy and generalization ability. The stereo camera system adopted in this research provides more useful information than a single-camera system. The object orientation recognition technique was implemented on an industrial robot in a real application: the proposed camera system and recognition algorithms were used to recognize a specific object and its orientation and then guide the industrial robot to perform alignment operations on the object.
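The depth recovery at the heart of a rectified stereo setup like this one can be illustrated with the standard pinhole triangulation relation Z = f·B/d. This is a minimal sketch, not code from the paper; the focal length, baseline, and pixel coordinates below are hypothetical values.

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from the horizontal disparity of a matched feature (e.g. a SIFT
    keypoint seen in both images) under the rectified pinhole stereo model."""
    disparity = x_left - x_right  # pixels; larger disparity = closer object
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity

# Hypothetical camera: 800 px focal length, 12 cm baseline.
# A keypoint at x = 420 px (left) and x = 380 px (right) gives 40 px disparity.
depth = triangulate_depth(420.0, 380.0, focal_px=800.0, baseline_m=0.12)
print(round(depth, 2))  # 2.4 (metres)
```

Triangulating a few such matched keypoints is what gives the stereo system its advantage over a single camera: it yields 3D positions, not just image-plane locations.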
Robotics and Autonomous Systems | 2011
Chyi-Yeu Lin; Li-Chieh Cheng; Chang-Kuo Tseng; Hung-Yan Gu; Kuo-Liang Chung; Chin-Shyurng Fahn; Kai-Jay Lu; Chih-Cheng Chang
This research aims to devise an anthropomorphic robotic head with a human-like face and a sheet of artificial skin that can read a randomly composed simplified musical notation and then sing the corresponding song. The face robot incorporates an artificial facial skin that can express a number of facial expressions via motions driven by internal servo motors. Two cameras, one installed inside each eyeball, provide the vision capability for reading simplified musical notation. Computer vision techniques are then used to interpret the simplified musical notation and the lyrics of the corresponding songs. Voice synthesis techniques enable the face robot to sing by enunciating synthesized sounds. The mouth patterns of the face robot are automatically changed to match the emotions corresponding to the lyrics. Experiments show that the face robot can successfully read and then accurately sing an arbitrarily assigned song.
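One step in such a pipeline, turning a recognized note into a pitch for the synthesized voice, can be sketched with the standard equal-temperament formula. The scale-degree encoding below (numbered notation degree 1 mapped to C4) is a hypothetical simplification for illustration, not the paper's actual representation.

```python
def note_to_frequency(midi_note):
    """Equal temperament: A4 (MIDI 69) = 440 Hz, one semitone = 2**(1/12)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

# Simplified (numbered) musical notation maps scale degrees 1..7 to a major
# scale; assuming degree 1 = C4 (MIDI 60) purely for this sketch.
MAJOR_SCALE_OFFSETS = [0, 2, 4, 5, 7, 9, 11]

def degree_to_frequency(degree, tonic_midi=60):
    return note_to_frequency(tonic_midi + MAJOR_SCALE_OFFSETS[degree - 1])

print(round(note_to_frequency(69)))       # 440 (A4)
print(round(degree_to_frequency(1), 2))   # 261.63 (C4)
```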
International Conference on Advanced Intelligent Mechatronics | 2009
Chyi-Yeu Lin; Li-Wen Chuang; Thi Thoa Mac
This research aims to develop a human portrait generation system that enables the two-armed humanoid robot, Pica, to autonomously draw a face portrait of the person sitting in front of it. The system converts a face image, captured by the CCD camera installed on Pica's head, into line segments that constitute a portrait of good artistic quality and are suitable for the robot arm to draw within a short period of time. A reduced set of selected pixel points on the line segments of the portrait is used to control the motion of the robot arm. The control points on the portrait plane are then automatically transformed into the robot's coordinates, and a PD controller drives the motors of the robot arm to complete the real-time portrait drawing and signature.
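The transformation of control points from the portrait (image) plane into the robot's drawing plane can be sketched as a similarity transform — rotation, uniform scale, and translation. The calibration constants below are hypothetical, not values from the paper.

```python
import math

def image_to_robot(u, v, scale=0.0005, theta=0.0, x0=0.25, y0=-0.10):
    """Map a pixel (u, v) on the portrait plane to robot-frame coordinates
    (x, y) in metres via rotation by theta, uniform scaling, and translation."""
    x = scale * (u * math.cos(theta) - v * math.sin(theta)) + x0
    y = scale * (u * math.sin(theta) + v * math.cos(theta)) + y0
    return x, y

# A stroke is a polyline of pixel control points; convert each vertex so the
# arm controller can track it.
stroke_px = [(100, 200), (120, 210), (150, 230)]
stroke_robot = [image_to_robot(u, v) for u, v in stroke_px]
print(round(stroke_robot[0][0], 3), round(stroke_robot[0][1], 3))  # first vertex, ~(0.3 m, 0.0 m)
```

In practice the constants would come from a hand-eye calibration of the camera and drawing surface rather than being fixed by hand.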
IEEE-RAS International Conference on Humanoid Robots | 2006
Chyi-Yeu Lin; Po-Chia Jo; Chang-Kuo Tseng
This paper presents the intelligent robot DOC-2, the second, upgraded generation of the intelligent robot DOC-1. DOC-2 is an autonomous multifunctional robot that can teach and entertain people. DOC-2 can speak and spell English words when shown a word image card. It can also solve simple algebraic problems presented on a whiteboard. DOC-2 also serves as a simple speaking encyclopedia through which people can learn things easily. For entertainment, DOC-2 can play Gobang, chess, and Chinese chess with humans. DOC-2 has quite superb vision capabilities with which it can recognize human faces in front of it; furthermore, it can interpret the facial expressions of a human face. DOC-2 is capable of recognizing specific persons, its masters, and upon request it will serve tea to its designated master. Although schematically identical to its ancestor DOC-1, with two hands, one CCD camera, and two driving wheels, DOC-2 has a very different mechatronic design. In DOC-2, the artificial intelligence software system is much more complex, and a PDA device is used to run interactive control programs. Using the Windows-based PDA device, people can communicate with DOC-2 easily. In this paper, we introduce the specification, mechanism, electronics, and intelligent software of DOC-2.
International Journal of Advanced Robotic Systems | 2013
Chyi-Yeu Lin; Li-Chieh Cheng; Chun-Chia Huang; Li-Wen Chuang; Wei-Chung Teng; Chung-Hsien Kuo; Hung-Yan Gu; Kuo-Liang Chung; Chin-Shyurng Fahn
The purpose of this research is to develop multi-talented humanoid robots, based on technologies featuring high computing and control abilities, to perform onstage. It has been a worldwide trend in the last decade to apply robot technologies in theatrical performance. The more robot performers resemble human beings, the easier it becomes for the emotions of audiences to bond with robotic performances. Although all kinds of robots can be theatrical performers when suitably programmed, humanoid robots are more advantageous for playing a wider range of characters because of their resemblance to human beings. Thus, developing theatrical humanoid robots is becoming very important in the field of robot theatre. However, theatrical humanoid robots need to possess the same versatile abilities as their human counterparts, instead of merely posing or performing motion demonstrations onstage; otherwise audiences will easily become bored. The four theatrical robots developed for this research have successfully performed in a public performance, participating in five programs, and were well received by audiences.
International Journal of Social Robotics | 2013
Li-Chieh Cheng; Chyi-Yeu Lin; Chun-Chia Huang
Static and dynamic realism of appearance are essential but challenging targets in the development of human face robots. Human facial anatomy is the primary theoretical foundation for designing the facial expression mechanism in most existing face robots. Based on the widely used facial action units, actuators are arranged to connect to certain control points underneath the facial skin in prearranged directions, mimicking the facial muscles involved in generating facial expressions. Most face robots fail to generate realistic facial expressions because there are significant differences between how the contracting muscles and inner tissues of human facial skin generate expressions and how wires pull on a single sheet of artificial facial skin. This paper proposes a unique design approach, which uses reverse engineering techniques of three-dimensional measurement and analysis to visualize critical facial motion data, including localized deformations of the facial skin, motion directions of facial features, and displacements of facial skin elements on a human face in different facial expression states. The effectiveness and robustness of the proposed approach have been verified in real design cases on face robots.
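The displacement analysis described here, comparing measured skin points between a neutral face and an expression, can be sketched as per-point vector differences over corresponding 3D landmarks. The landmark coordinates below are hypothetical sample values, not measurements from the paper.

```python
import math

def skin_displacements(neutral, expression):
    """Per-point displacement vectors and magnitudes between two scans of the
    same facial landmarks (lists of corresponding 3D points, e.g. in mm)."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(neutral, expression):
        d = (x1 - x0, y1 - y0, z1 - z0)
        out.append((d, math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)))
    return out

# Hypothetical mouth-corner landmarks: neutral scan vs. smiling scan.
neutral = [(30.0, -20.0, 5.0), (-30.0, -20.0, 5.0)]
smiling = [(33.0, -16.0, 5.0), (-33.0, -16.0, 5.0)]
for vec, mag in skin_displacements(neutral, smiling):
    print(vec, round(mag, 2))  # each corner moves outward and upward
```

Maps of such displacement vectors over the whole face are what tell the designer where control points and pull directions should go.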
Robotics and Biomimetics | 2012
Le Duc Hanh; Chyi-Yeu Lin
This research presents a new grasping method in which a 6-DOF industrial robot autonomously grasps a stationary, randomly positioned rectangular object using a combination of stereo vision and image-based visual servoing with a fuzzy controller (IBVSFC). First, OpenCV and a color filter algorithm are used to extract the specific color features of the object. Then, the 3D coordinates of the object to be grasped are derived by the stereo vision algorithm, and these coordinates guide the robotic arm to the approximate location of the object using inverse kinematics. Finally, IBVSFC precisely adjusts the pose of the end-effector to coincide with that of the object to make a successful grasp. The accuracy and robustness of the system and the algorithm were tested and proven effective in real scenarios involving a 6-DOF industrial robot. Although the application in this research is limited to grasping a simple cubic object, the same methodology can easily be applied to objects with other geometric shapes.
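The color-filter step can be sketched as an HSV hue threshold over pixels, which is the usual way such filters are built. This is a minimal stand-in using the standard-library `colorsys` module rather than the paper's OpenCV code; the hue range and sample pixels are hypothetical.

```python
import colorsys

def color_mask(pixels, hue_lo, hue_hi, min_sat=0.4, min_val=0.2):
    """Boolean mask selecting pixels whose hue falls in [hue_lo, hue_hi]
    (hue normalized to [0, 1)), with saturation/value floors to reject
    gray and dark pixels — a simple HSV color filter."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val)
    return mask

# A red object against a gray background; red hue sits near 0.
pixels = [(200, 30, 30), (120, 120, 120), (210, 40, 35)]
print(color_mask(pixels, 0.0, 0.05))  # [True, False, True]
```

Applying the mask to both camera images and taking the centroids of the surviving pixels gives the matched points that the stereo algorithm converts to 3D coordinates.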
International Conference on Advanced Intelligent Mechatronics | 2008
Chyi-Yeu Lin; Yi-Pin Chiu
A TI DSP (TMS320DM642 EVM) is used as the computation platform in our catcher robot system, with two CCDs as the source of stereo vision. The system separates the thrown-in target from the paired images and then calculates the centroid coordinates of each target image, thereby determining the spatial location of the object. The Lagrange interpolation formula and the linear function X = aZ + b are used to model the ball trajectory and predict the catch position. The control commands are then sent over RS-232 to the 2-D robot arm so that it catches the object at the expected location. This catcher robot system can catch a ball thrown at it from four meters away with a success rate of 65%. In the future, we will implement the robot catcher techniques on our adult-size humanoid robot.
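The prediction step described above can be sketched directly: Lagrange interpolation through the tracked (Z, Y) height samples captures the parabolic arc, and a least-squares fit of X = aZ + b captures the lateral drift; evaluating both at the catch plane gives the target arm position. The sample coordinates and catch plane below are hypothetical, not data from the paper.

```python
def lagrange_interpolate(points, z):
    """Evaluate the Lagrange interpolating polynomial through `points`
    (a list of (z_i, y_i) samples) at coordinate z."""
    total = 0.0
    for i, (zi, yi) in enumerate(points):
        term = yi
        for j, (zj, _) in enumerate(points):
            if i != j:
                term *= (z - zj) / (zi - zj)
        total += term
    return total

def linear_fit(points):
    """Least-squares fit X = a*Z + b over (z, x) samples."""
    n = len(points)
    sz = sum(z for z, _ in points)
    sx = sum(x for _, x in points)
    szz = sum(z * z for z, _ in points)
    szx = sum(z * x for z, x in points)
    a = (n * szx - sz * sx) / (n * szz - sz * sz)
    b = (sx - a * sz) / n
    return a, b

# Hypothetical tracked samples (metres) as the ball flies toward the catcher:
zy = [(4.0, 1.5), (3.0, 1.8), (2.0, 1.7)]     # (Z, Y): height vs. distance
zx = [(4.0, 0.00), (3.0, 0.05), (2.0, 0.10)]  # (Z, X): lateral vs. distance

catch_z = 0.5  # hypothetical catch plane just in front of the arm
a, b = linear_fit(zx)
x_catch = a * catch_z + b                      # lateral catch position
y_catch = lagrange_interpolate(zy, catch_z)    # catch height
print(round(x_catch, 3), round(y_catch, 3))
```

Three height samples are enough for the quadratic arc, which is why only a short observation window is needed before the arm must commit to a catch position.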
Robotica | 2016
Chyi-Yeu Lin; Chun-Chia Huang; Li-Chieh Cheng
The goal of this research is to develop a low-cost face robot with a lower-degree-of-freedom facial expression mechanism. Many face robot designs have been announced and published in the past. Face robots can be classified into two major types based on their degrees of freedom: the first type produces varied facial expressions with many degrees of freedom, and the second produces a finite set of facial expressions with fewer degrees of freedom. Due to the high cost of higher-degree-of-freedom face robots, most commercial face robot products are designed in the lower-degree-of-freedom form with finite facial expressions. Therefore, a face robot with a simplified facial expression mechanism is proposed in this research. The main purpose is to develop a lower-degree-of-freedom mechanism that can generate many facial expressions while keeping one basic mouth shape variation. Our research provides a new face robot example and a development direction for reducing costs and conserving energy.
Journal of Intelligent and Robotic Systems | 2011
Chyi-Yeu Lin; Po-Chia Jo; Chang-Kuo Tseng
The compliance mechanisms used on robotic arms can be classified into two major categories: mechanical and electronic. The ideal characteristics of a compliance mechanism include small volume, simple mechanical structure, low cost, a large compliant range, and high precision and accuracy under displacement control. Most mechanical compliance mechanisms meet the first three conditions but have a small compliant range and low precision and accuracy under displacement control. The electronic compliance mechanism is hardly limited in its degree of deformation and offers higher precision and accuracy under displacement control, but its sensors are expensive and the system is difficult to control. To combine the advantages of both types, this research develops a new compliance mechanism in which a small-scale torque-limiting mechanism with a self-locking feature is installed between the actuator and the arm structure, minimizing volume while providing an ample torque limit. When the robotic arm is overloaded by an external force, the compliance mechanism slips so that the robotic arm moves along the direction of the external force to avoid damage. The robotic arm automatically returns to its original position after the external force is removed. The new compliance mechanism not only exceeds most current mechanical designs in range of compliance but also does not affect the precision and accuracy of displacement control. Furthermore, it does not require any sensors, which benefits small robotic arms.
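The slip-and-return behavior of such a torque limiter can be sketched as a simple state rule: below the torque limit the self-locking joint holds its commanded angle exactly; above it, the clutch slips in the direction of the load. This is an idealized toy model with a hypothetical linear slip law, not the paper's mechanism design.

```python
def joint_response(commanded_angle, external_torque, torque_limit, slip_gain=0.01):
    """Idealized torque-limiting compliance: below the limit the joint holds
    the commanded angle (self-locking); above it, the clutch slips and the
    joint yields along the direction of the external load (angles in degrees)."""
    if abs(external_torque) <= torque_limit:
        return commanded_angle  # self-locking: no deflection, full precision
    excess = abs(external_torque) - torque_limit
    slip = slip_gain * excess  # hypothetical linear slip model
    return commanded_angle + slip * (1 if external_torque > 0 else -1)

# Within the limit the arm is rigid; overloaded, it yields; once the load is
# removed, driving back to the commanded angle restores the original position.
print(round(joint_response(90.0, 4.0, torque_limit=5.0), 2))   # 90.0  (holds)
print(round(joint_response(90.0, 12.0, torque_limit=5.0), 2))  # 90.07 (slips)
print(round(joint_response(90.0, 0.0, torque_limit=5.0), 2))   # 90.0  (restored)
```

The key property the sketch captures is that compliance costs nothing in displacement accuracy: inside the torque limit the output angle equals the command exactly, with no sensor in the loop.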