Chia-How Lin
National Chiao Tung University
Publications
Featured research published by Chia-How Lin.
IEEE Transactions on Systems, Man, and Cybernetics | 2013
Meng-Ju Han; Chia-How Lin; Kai-Tai Song
This paper presents a mood transition design that enables a robot to interact emotionally with humans autonomously. A 2-D emotional model is proposed to combine robot emotion, mood, and personality in order to generate emotional expressions. In this design, the robot personality is programmed by adjusting the factors of the five-factor model proposed by psychologists. From the Big Five personality traits, the influence factors of robot mood transition are determined. Furthermore, a method to fuse basic robotic emotional behaviors is proposed in order to manifest robotic emotional states via continuous facial expressions. An artificial face on a screen is one way to give a robot a humanlike appearance, which might be useful for human-robot interaction. An artificial face simulator has been implemented to show the effectiveness of the proposed methods. Questionnaire surveys have been carried out to evaluate the proposed method by observing robotic responses to a user's emotional expressions. Preliminary experimental results on a robotic head show that the proposed mood-state transition scheme responds appropriately to a user's emotional changes in a continuous manner.
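As an illustration of the kind of mood-state update such a 2-D model implies, the following Python sketch drifts a (valence, arousal) mood point toward each recognized emotional stimulus, with gains loosely tied to Big Five traits. The representation, gain formulas, and trait values are assumptions for illustration, not the paper's actual model.

import numpy as np

# Hypothetical Big Five trait scores in [0, 1]; only extraversion and
# neuroticism are used in the illustrative gains below.
personality = {"openness": 0.6, "conscientiousness": 0.5, "extraversion": 0.7,
               "agreeableness": 0.6, "neuroticism": 0.3}

def update_mood(mood, stimulus, personality, decay=0.9):
    """Move the 2-D mood point (valence, arousal) toward an emotional stimulus.

    mood, stimulus: np.array([valence, arousal]) in [-1, 1].
    The personality-derived gains are illustrative assumptions only.
    """
    gain = np.array([
        0.5 * (1.0 - personality["neuroticism"]),  # lower neuroticism -> stronger valence response
        0.5 * personality["extraversion"],         # higher extraversion -> stronger arousal response
    ])
    return decay * mood + gain * (stimulus - mood)

mood = np.zeros(2)                       # start in a neutral mood
for stimulus in [np.array([0.8, 0.4]),   # user looks happy
                 np.array([-0.6, 0.2])]: # user looks sad
    mood = update_mood(mood, stimulus, personality)
    print("mood (valence, arousal):", mood.round(2))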
intelligent robots and systems | 2010
Chia-How Lin; Sin-Yi Jiang; Yueh-Ju Pu; Kai-Tai Song
This paper presents a vision-based obstacle avoidance design using a monocular camera onboard a mobile robot. An image processing procedure is developed to estimate distances between the robot and obstacles based on inverse perspective transformation (IPT) in the image plane. A robust image processing solution is proposed to detect and segment the navigable ground-plane area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking to cope with variations in illumination and camera view. After IPT and ground region segmentation, a result similar to an occupancy grid map is obtained for mobile robot obstacle avoidance and navigation. Practical experimental results on a wheeled mobile robot show that the proposed imaging system successfully estimates the distances of objects and avoids obstacles in an indoor environment.
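A minimal Python sketch of the inverse-perspective idea: under a flat-ground assumption, the image row of a ground pixel maps to a range along the floor given the camera height, tilt, and focal length. The parameter values are illustrative, and the paper's exact formulation may differ.

import numpy as np

def ipt_ground_distance(v, v0, f_pix, cam_height, tilt_rad):
    """Distance (m) along the ground to the point imaged at row v, assuming
    a flat floor and a pinhole camera at height cam_height (m) pitched down
    by tilt_rad. All parameter values are illustrative."""
    # Angle below the optical axis of the ray through image row v.
    ray_angle = np.arctan2(v - v0, f_pix)
    total_angle = tilt_rad + ray_angle      # angle below the horizontal
    if total_angle <= 0:
        return np.inf                       # ray never reaches the ground
    return cam_height / np.tan(total_angle)

# Example: VGA camera, principal point at row 240, focal length 500 px,
# mounted 0.5 m above the floor and tilted down by 10 degrees.
for row in (300, 360, 420):
    d = ipt_ground_distance(row, 240.0, 500.0, 0.5, np.radians(10.0))
    print(f"row {row}: ~{d:.2f} m")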
systems, man and cybernetics | 2008
Chia-How Lin; Su-Hen Yang; Hong-Tze Chen; Kai-Tai Song
This paper presents an intruder detection system that consists of a ZigBee sensor network and an autonomous mobile robot. Multiple ZigBee sensor modules installed in the environment detect intruders and abnormal conditions. Sensor nodes transmit intrusion alarms to the monitoring center as well as to the mobile robot via a ZigBee wireless mesh network. The robot can navigate the environment autonomously and approach a target place using its onboard localization system. If a possible intruder is detected, the robot immediately moves to that location and transmits images to remote mobile devices, such as the PDAs or smartphones of security guards and users, so that they can assess and respond to the situation in real time.
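A rough Python sketch of the alarm-handling flow described above, with placeholder navigation and messaging callbacks standing in for the robot's actual interfaces; the message fields are assumptions.

import queue
from dataclasses import dataclass

@dataclass
class AlarmMessage:
    node_id: int            # ZigBee node that raised the alarm
    location: tuple         # (x, y) of the node in the map frame
    kind: str               # e.g. "motion", "door", "smoke"

def handle_alarms(alarm_queue, navigate_to, send_snapshot):
    """Drain pending alarms: drive to each reporting node's location and
    forward a camera image to the user's mobile device. navigate_to and
    send_snapshot stand in for the robot's navigation and messaging APIs."""
    while not alarm_queue.empty():
        alarm = alarm_queue.get()
        print(f"alarm from node {alarm.node_id}: {alarm.kind}")
        navigate_to(alarm.location)       # autonomous approach to the node
        send_snapshot(alarm.location)     # push a scene image to a PDA/phone

q = queue.Queue()
q.put(AlarmMessage(node_id=3, location=(2.0, 4.5), kind="motion"))
handle_alarms(q, lambda p: print("navigating to", p),
                 lambda p: print("image sent from", p))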
systems, man and cybernetics | 2005
Chia-How Lin; Kai-Tai Song; Gary T. Anderson
This paper presents an agent-based robot control (ARC) architecture. ARC features a flexible real-time control system, which is suitable for multi-robot cooperative tasks. It also provides an efficient platform for building a multi-robot system consisting of heterogeneous robots. In this paper, an experimental study of this architecture is presented: cooperative exploration using two mobile robots. In this experiment, one robot explores the environment by looking for a color-coded target, and the other is responsible for task execution at the target position. While exploring an unknown environment, the first robot, which is equipped with ultrasonic sensors for exploration, records its position whenever it sees a deployed checkpoint. In a later phase, the second robot plans a path to the target directly using the information passed from the first robot and reaches the target position efficiently.
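One way to read the checkpoint-sharing step is as a shortest-path search over the positions recorded by the exploring robot. The Python sketch below runs Dijkstra over a hypothetical checkpoint graph; the link radius and coordinates are made up for illustration and are not taken from the paper.

import math, heapq

def plan_via_checkpoints(checkpoints, start, target, link_radius=3.0):
    """Dijkstra over the checkpoints recorded by the exploring robot, so the
    second robot can head straight for the target. Checkpoints within
    link_radius of each other are assumed traversable."""
    nodes = [start] + checkpoints + [target]
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Connect any pair of nodes closer than link_radius.
    adj = {i: [(j, dist(p, q)) for j, q in enumerate(nodes)
               if j != i and dist(p, q) <= link_radius]
           for i, p in enumerate(nodes)}
    best, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == len(nodes) - 1:               # reached the target node
            path = [i]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return [nodes[k] for k in reversed(path)]
        for j, w in adj[i]:
            if d + w < best.get(j, float("inf")):
                best[j], prev[j] = d + w, i
                heapq.heappush(pq, (d + w, j))
    return None                               # target unreachable via checkpoints

print(plan_via_checkpoints([(1, 1), (2, 3), (4, 4)], start=(0, 0), target=(5, 5)))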
systems, man and cybernetics | 2009
Meng-Ju Han; Chia-How Lin; Kai-Tai Song
This paper presents a novel design for autonomous robotic facial expression generation using a fuzzy-neural network. The emotional characteristics of a robot can be specified in this design by assigning weights in the fuzzy-neural network. The robotic face generates appropriate expressions in response to the image recognition results of a user's emotional state and the assigned robotic emotional characteristics. In this study, two contrasting emotional characteristics, optimistic and pessimistic, have been designed and tested to verify their responses to the recognized user emotional states (neutral, happiness, sadness, and anger). Computer simulations of a robotic face show that the proposed facial expression generation system effectively responds to a user's emotional states in a human-like manner by fusing the outputs of four facial expressions (boredom, happiness, sadness, and surprise).
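A toy Python sketch of the weighted fusion of basis expressions, assuming hypothetical action-unit vectors for the four expressions; the real system derives the weights from the fuzzy-neural network rather than from the hand-set values shown here.

import numpy as np

# Basis expressions as hypothetical action-unit intensity vectors
# (brow, eyelid, mouth corner, jaw); the numbers are illustrative only.
BASIS = {
    "boredom":   np.array([-0.2, -0.6,  0.0, 0.1]),
    "happiness": np.array([ 0.1,  0.2,  0.8, 0.3]),
    "sadness":   np.array([-0.5, -0.3, -0.7, 0.0]),
    "surprise":  np.array([ 0.9,  0.8,  0.2, 0.7]),
}

def blend_expression(weights):
    """Fuse the basis expressions with normalized non-negative weights,
    in the spirit of the weighted-output fusion described above."""
    w = np.array([max(weights.get(k, 0.0), 0.0) for k in BASIS])
    if w.sum() > 0:
        w = w / w.sum()
    return sum(wi * BASIS[k] for wi, k in zip(w, BASIS))

# An "optimistic" robot reacting to a user recognized as mildly sad:
print(blend_expression({"sadness": 0.3, "happiness": 0.5, "boredom": 0.2}))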
systems, man and cybernetics | 2014
Chia-How Lin; Kai-Tai Song
For an on-demand robotic system, a location-aware module provides location information about objects, users, and the mobile robot itself. This information supports various intelligent behaviors of a service robot in day-to-day scenarios. This paper presents a novel probability-based approach to building a location-aware system. With this approach, the inconsistencies often seen in received signal strength indicator (RSSI) measurements are handled with a minimum of calibration. By taking off-line calibration measurements of a ZigBee sensor network, the inherent signal uncertainty of the nodes to be localized can be effectively resolved. The proposed RSSI-based algorithm allows flexible deployment of sensor nodes in various environments. The algorithm has been verified in several typical environments, and experiments show that it outperforms existing algorithms. The location-aware system has been integrated with an autonomous mobile robot to demonstrate the proposed on-demand robotic intruder detection system. In the experiments, three alarm sensors were employed to monitor abnormal conditions. When an intrusion was detected, the robot immediately moved to the location and transmitted scene images to the user, allowing the user to respond to the situation in real time.
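As a rough illustration of probability-based RSSI localization, the Python sketch below fits observed readings to a generic log-distance path-loss model and grid-searches for the most likely position. The model constants, noise level, and anchor layout are typical indoor values chosen for the example, not the paper's calibrated parameters.

import numpy as np

def rssi_model(d, rssi_1m=-45.0, path_loss_exp=2.5):
    """Expected RSSI (dBm) at distance d under a log-distance model."""
    return rssi_1m - 10.0 * path_loss_exp * np.log10(np.maximum(d, 0.1))

def locate(anchors, readings, sigma=4.0, grid_res=0.25, size=10.0):
    """Grid search for the position maximizing the Gaussian likelihood of
    the observed RSSI readings from fixed anchor nodes."""
    xs = np.arange(0.0, size, grid_res)
    best, best_ll = None, -np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(anchors[:, 0] - x, anchors[:, 1] - y)
            ll = -np.sum((readings - rssi_model(d)) ** 2) / (2 * sigma ** 2)
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
true_pos = np.array([3.0, 5.0])
readings = rssi_model(np.hypot(*(anchors - true_pos).T)) + np.random.randn(4)
print("estimated position:", locate(anchors, readings))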
intelligent robots and systems | 2004
Chia-How Lin; Kai-Tai Song
This paper presents a multi-agent approach to developing a flexible real-time control system for an autonomous mobile robot. The main purpose of this study is to integrate heterogeneous algorithms and functions onboard the robot while still guaranteeing a reasonable response time. The balance between generality and flexibility is also addressed. This strategy provides a framework for developing complex intelligent machines through the teamwork of several people. The proposed control system has been implemented on an experimental robot based on a real-time Linux platform. Practical experiments show that the robot successfully navigates through complex environments by combining visual tracking, obstacle avoidance, and command receiving behaviors in a single system.
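The Python sketch below illustrates one plausible way to arbitrate among behaviors such as visual tracking, obstacle avoidance, and command receiving: a fixed-priority rule where the highest-priority behavior with a valid proposal wins. This is only an illustrative scheme under assumed state fields, not the actual ARC arbitration.

from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Behavior:
    name: str
    priority: int                                             # higher wins
    propose: Callable[[dict], Optional[Tuple[float, float]]]  # returns (v, w) or None

def arbitrate(behaviors, sensor_state):
    """Return the velocity command of the highest-priority behavior that
    currently proposes one; fall back to standing still."""
    for b in sorted(behaviors, key=lambda b: -b.priority):
        cmd = b.propose(sensor_state)
        if cmd is not None:
            return b.name, cmd
    return "idle", (0.0, 0.0)

behaviors = [
    Behavior("obstacle_avoidance", 3,
             lambda s: (0.0, 0.6) if s["min_range"] < 0.4 else None),
    Behavior("visual_tracking", 2,
             lambda s: (0.3, 0.2 * s["target_bearing"]) if s["target_seen"] else None),
    Behavior("command_receiving", 1, lambda s: s.get("remote_cmd")),
]
state = {"min_range": 1.2, "target_seen": True, "target_bearing": -0.4}
print(arbitrate(behaviors, state))   # picks visual_tracking here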
Robotica | 2015
Chia-How Lin; Kai-Tai Song
This paper presents a vision-based obstacle avoidance design using a monocular camera onboard a mobile robot. A novel image processing procedure is developed to estimate the distance between the robot and obstacles based on inverse perspective transformation (IPT) in the image plane. A robust image processing solution is proposed to detect and segment a drivable ground area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking to cope with variations in illumination and camera view. After IPT and ground region segmentation, distance measurement results are obtained similar to those of a laser range finder for mobile robot obstacle avoidance and navigation. The merit of this algorithm is that the mobile robot gains path-finding and obstacle-avoidance capability using only a single monocular camera. Practical experimental results on a wheeled mobile robot show that the proposed imaging system successfully obtains the distances of surrounding objects for reactive navigation in an indoor environment.
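For the plane estimation and tracking step, here is a minimal OpenCV sketch that matches features and fits a RANSAC homography between two synthetic views, standing in for consecutive ground-region frames; ORB is used here as a convenient substitute, and the paper's actual feature and segmentation pipeline may differ.

import cv2
import numpy as np

# Synthetic textured "ground" view and a perspective-warped second view, so
# that the sketch runs without camera input.
rng = np.random.default_rng(0)
base = np.zeros((240, 320), np.uint8)
for _ in range(60):
    x, y = int(rng.integers(0, 300)), int(rng.integers(0, 220))
    cv2.rectangle(base, (x, y), (x + 15, y + 15), int(rng.integers(60, 255)), -1)
H_true = np.array([[1.0, 0.05, 5.0], [0.02, 1.0, -3.0], [1e-4, 0.0, 1.0]])
warped = cv2.warpPerspective(base, H_true, (320, 240))

# Feature matching plus a RANSAC-fitted homography between the two views.
orb = cv2.ORB_create(500)
kp1, des1 = orb.detectAndCompute(base, None)
kp2, des2 = orb.detectAndCompute(warped, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_est, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("estimated ground-plane homography:\n", np.round(H_est, 3))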
robot and human interactive communication | 2008
Chia-How Lin; Chia-Hsing Yang; Cheng-Kang Wang; Kai-Tai Song; Jwu-Sheng Hu
Human detection and tracking is important for user-friendly human-robot interaction. The robot should be able to find the user autonomously and keep its attention on the user in a human-like manner. In this paper, a design and experimental study of robust human detection and tracking is presented, based on fusing several modalities of sensory information. The multi-modal interaction design utilizes a combination of visual, audio, and laser scanner data for reliable detection and tracking of the user of interest. During tracking, an obstacle avoidance behavior is activated whenever required to ensure safety. Furthermore, the user can direct the robot to interact with another user via speech commands. Experimental results show that the robot robustly tracks a person in complex scenarios.
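A small Python sketch of confidence-weighted fusion of bearing estimates from camera, microphone array, and laser scanner; the modality names, bearings, and confidences are illustrative only.

import math

def fuse_person_bearing(detections):
    """Combine bearing estimates (radians) from several modalities into one
    estimate, weighting each by its reported confidence. A circular mean
    avoids wrap-around problems near +/- pi."""
    sx = sum(c * math.cos(b) for b, c in detections.values())
    sy = sum(c * math.sin(b) for b, c in detections.values())
    return math.atan2(sy, sx)

detections = {
    "camera_face":  (0.10, 0.9),   # (bearing, confidence)
    "sound_source": (0.25, 0.4),
    "laser_legs":   (0.05, 0.7),
}
print(f"fused bearing: {fuse_person_bearing(detections):.3f} rad")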
international conference on system science and engineering | 2010
Kai-Tai Song; Che-Hao Chang; Chia-How Lin
This paper presents a novel design for visual servo control of a mobile manipulator for autonomous grasping of a target object. In this design, the scale-invariant feature transform (SIFT) algorithm is adopted to search for and recognize the object to grasp. The random sample consensus (RANSAC) algorithm is used to remove outliers and refine the homography matrix between the database image and the current image. Robust feature matching provides reliable feature points to the image-based visual servo control loop. Experimental results show that the mobile manipulator can find and grasp a target object autonomously using the proposed method.
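A compact OpenCV sketch of the recognition step as described: SIFT matching with a ratio test followed by a RANSAC-refined homography between a database template and the current camera image. The image file names are placeholders.

import cv2
import numpy as np

def locate_object(template_path, scene_path, min_matches=10):
    """Find a database object in the current image using SIFT features,
    Lowe's ratio test, and a RANSAC-refined homography."""
    tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(tmpl, None)
    kp2, des2 = sift.detectAndCompute(scene, None)
    # Ratio test keeps only distinctive matches before RANSAC.
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.7 * n.distance]
    if len(good) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps template corners into the scene for servoing targets

# Example usage with hypothetical image files:
# H = locate_object("object_template.png", "camera_frame.png")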