Ho Seok Ahn
University of Auckland
Publications
Featured research published by Ho Seok Ahn.
IEEE Transactions on Consumer Electronics | 2009
Ho Seok Ahn; Inkyu Sa; Jin Young Choi
This paper proposes a home automation system using a PDA (Personal Digital Assistant)-based intelligent robot system architecture that consists of three layers: a user layer, a manager layer, and an action layer. In the user layer, users manage and control the robot and receive visual information via the remote monitoring system. When maps are shown together with the status of the robot, synchronization is very important. The manager layer has three parts: a server part, a home appliances part, and a storage part. The server part manages all the status information of the house, the home appliances part controls all appliances, and the storage part stores all the status information of the house. In the action layer, the main robot system uses a PDA instead of a computer. Because a PDA has limited performance, simple algorithms are required. It has an intelligent functional engine for SLAM (Simultaneous Localization and Mapping) and vision processing. We have developed a PDA-based mobile robot system for home automation based on this architecture in order to verify its efficiency.
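As an illustration of the three-layer structure described above, the following minimal Python sketch wires a user layer, a manager layer (server, appliances, and storage parts), and a PDA-based action layer together. The class names, message formats, and methods are assumptions for illustration only, not the paper's implementation.

# Minimal sketch of the three-layer home automation architecture (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class ActionLayer:
    """PDA-based robot: runs lightweight SLAM/vision and reports its status."""
    position: tuple = (0.0, 0.0)

    def report_status(self) -> dict:
        return {"position": self.position, "battery": 0.8}

@dataclass
class ManagerLayer:
    """Server part + home appliances part + storage part."""
    appliances: dict = field(default_factory=lambda: {"light": "off", "tv": "off"})
    history: list = field(default_factory=list)            # storage part

    def update(self, robot_status: dict) -> dict:
        self.history.append(robot_status)                   # storage part logs house status
        return {"robot": robot_status, "appliances": dict(self.appliances)}

    def control_appliance(self, name: str, state: str) -> None:
        self.appliances[name] = state                        # home appliances part

@dataclass
class UserLayer:
    """Remote monitoring: view a synchronized map/status snapshot, send commands."""
    manager: ManagerLayer

    def monitor(self, robot: ActionLayer) -> dict:
        return self.manager.update(robot.report_status())   # synchronized snapshot for the user

robot = ActionLayer()
manager = ManagerLayer()
user = UserLayer(manager)
user.manager.control_appliance("light", "on")
print(user.monitor(robot))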
Intelligent Robots and Systems | 2008
Ho Seok Ahn; Young Min Beak; Inkyu Sa; Woo Sung Kang; Jin Hee Na; Jin Young Choi
This paper presents the design and implementation of a reconfigurable heterogeneous modular architecture for service robots. The proposed architecture has five key concepts that distinguish it from conventional reconfigurable modular service robots: (1) easy and multiple assembly according to the requirements of users, (2) a hardware resource sharing system with other heterogeneous modules, (3) communication ability among all heterogeneous modules, which have different operating systems, (4) an automatic connection management system for when a new module is attached, and (5) an automatic software upgrading system for new module software. We explain the three parts of the system architecture that meet these five concepts: the mechanical architecture, the software architecture, and the connection architecture. To verify our architecture, we developed and evaluated a reconfigurable heterogeneous modular service robot by applying the proposed design.
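The following minimal Python sketch illustrates the automatic connection management and hardware resource sharing ideas in the list above: modules register themselves when attached, and other modules can look up who provides a shared resource. The registration protocol, class names, and example modules are assumptions, not the paper's design.

# Minimal sketch of automatic connection management for hot-plugged modules (illustrative only).
class Module:
    def __init__(self, name: str, os_type: str, services: list[str]):
        self.name = name
        self.os_type = os_type          # modules may run different operating systems
        self.services = services        # hardware resources this module can share

class ConnectionManager:
    """Detects newly attached modules and exposes their shared resources."""
    def __init__(self):
        self.registry: dict[str, Module] = {}

    def attach(self, module: Module) -> None:
        # automatic connection management: register on attach, no manual setup
        self.registry[module.name] = module
        print(f"attached {module.name} ({module.os_type}): shares {module.services}")

    def find_service(self, service: str) -> list[str]:
        # hardware resource sharing: any module can look up the providers of a service
        return [m.name for m in self.registry.values() if service in m.services]

manager = ConnectionManager()
manager.attach(Module("head", "WinCE", ["camera", "speaker"]))
manager.attach(Module("base", "Linux", ["wheels", "lidar"]))
print(manager.find_service("camera"))   # -> ['head']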
International Conference on Pattern Recognition | 2008
Kwang Moo Yi; Ho Seok Ahn; Jin Young Choi
In this paper, we propose a new method for object tracking based on the mean shift algorithm using a kernel that has the shape of the target object, with probabilistic estimation of orientation change and scale adaptation. The proposed method uses an object mask to construct a kernel that has the shape of the actual object being tracked. Orientation is adjusted using probabilistic estimation of orientation, and scale is adapted using a newly proposed descriptor for scale. Test results show that the proposed method is robust to background clutter and tracks objects very accurately.
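To illustrate what a mask-shaped kernel means in a mean shift update, the sketch below computes one update as the mask-weighted centroid of a target-likelihood map. The variable names and the toy data are assumptions; the paper's orientation and scale estimation are only noted in a comment, not implemented.

# One mean shift update with an object-shaped (mask) kernel (illustrative sketch).
import numpy as np

def mean_shift_step(weight_map, mask):
    """weight_map: per-pixel target likelihood (e.g., color-histogram back-projection)
    mask:       binary kernel with the object's shape, placed at the current estimate
    Returns the new (row, col) center estimate as the mask-weighted centroid."""
    rows, cols = np.nonzero(mask)              # kernel support = object silhouette
    w = weight_map[rows, cols]
    if w.sum() == 0:
        return None                            # no target evidence inside the kernel
    return np.array([(rows * w).sum(), (cols * w).sum()]) / w.sum()

# toy example: a 10x10 likelihood map and a square "object" mask
weight_map = np.zeros((10, 10)); weight_map[6:9, 6:9] = 1.0
mask = np.zeros((10, 10), dtype=bool); mask[4:9, 4:9] = True
print(mean_shift_step(weight_map, mask))       # centroid pulled toward the bright region
# A full tracker would re-center, rotate, and scale the mask at the new estimate before the
# next iteration, using the paper's probabilistic orientation estimate and scale descriptor.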
Robot and Human Interactive Communication | 2009
Deukey Lee; Ho Seok Ahn; Jin Young Choi
This paper proposes an emotional behavior generator for emotional robots. Traditional methods for generating emotional behavior are not capable of expressing complex emotions, and lack both diversity of emotional expression and generality of the system. To solve these problems, we propose a general emotional behavior generation module. It generates behavior combinations expressing complex and phased emotions, using the concepts of unit behaviors and emotional marks. From behavior training sets provided by the module user, it generates the emotional matrix, which represents the expression abilities of the unit behaviors. With the emotional matrix, unit behaviors are then combined into emotional expression behaviors using simulated annealing. To evaluate the results, we apply the module to a robot simulator.
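As a rough illustration of combining unit behaviors via simulated annealing, the sketch below selects a subset of unit behaviors whose combined expression (rows of an emotional matrix) best matches a target emotion blend. The matrix values, cost function, and annealing schedule are assumptions for illustration, not the paper's trained values.

# Selecting unit behaviors with simulated annealing to match a target emotion (illustrative).
import math, random
import numpy as np

emotions = ["joy", "sadness", "anger"]
# rows = unit behaviors, columns = how strongly each behavior expresses each emotion
emotional_matrix = np.array([
    [0.9, 0.0, 0.1],   # smile
    [0.2, 0.1, 0.0],   # nod
    [0.0, 0.8, 0.1],   # slump shoulders
    [0.1, 0.0, 0.9],   # clench fists
])
target = np.array([0.7, 0.0, 0.3])        # desired blend of emotions

def cost(selection: np.ndarray) -> float:
    # distance between the expression of the selected behaviors and the target emotion
    expressed = emotional_matrix[selection.astype(bool)].sum(axis=0)
    return float(np.linalg.norm(expressed - target))

def simulated_annealing(n_behaviors: int, temp=1.0, cooling=0.95, steps=500) -> np.ndarray:
    current = np.random.randint(0, 2, n_behaviors)
    best = current.copy()
    for _ in range(steps):
        candidate = current.copy()
        candidate[random.randrange(n_behaviors)] ^= 1        # toggle one unit behavior
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current.copy()
        temp *= cooling
    return best

print(simulated_annealing(emotional_matrix.shape[0]))        # e.g. [1 0 0 1] -> smile + clench fists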
International Journal of Humanoid Robotics | 2013
Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Ho-Gil Lee; Moon-Hong Baeg
Human-like appearance and movement of social robots is important in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. EveR-1 is not bipedal; it sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicone jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions decided by a personality model. EveR-1 performs a guidance service role in an exhibition, narrates fairy tales, and holds simple conversations with humans.
Conference of the Industrial Electronics Society | 2011
Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk Yeon Lee; Man Hong Hur; Ho-Gil Lee; Woong Hee Shon
This paper presents the hardware mechanism and software architecture of a singer robot system called EveR-2. EveR-2 is an android robot platform that has a human-like appearance and shows its emotions through facial expressions and gestures. The skin of the head, arms, hands, and legs is made of silicone jelly to give a human-like texture. EveR-2 has sixty-two degrees of freedom in the head, neck, arms, hands, torso, and legs. It sings a song by reading a music score, with lip synchronization. EveR-2 is the first android to make her debut as a singer and appear in a music video.
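To give a concrete sense of score-driven lip synchronization, the sketch below converts a toy score of notes with lyric syllables into per-frame mouth-opening commands. The score format, the vowel-based opening rule, and the frame rate are assumptions for illustration, not EveR-2's actual implementation.

# Score-driven lip synchronization sketch (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # seconds from the start of the song
    duration: float   # seconds
    syllable: str     # lyric syllable sung on this note

def lip_sync_commands(score: list[Note], fps: int = 30) -> list[float]:
    """Return a mouth-opening value (0..1) for every animation frame."""
    length = max(n.onset + n.duration for n in score)
    frames = [0.0] * int(length * fps + 1)
    for note in score:
        start, end = int(note.onset * fps), int((note.onset + note.duration) * fps)
        opening = 1.0 if any(v in note.syllable.lower() for v in "aeiou") else 0.4
        for f in range(start, min(end, len(frames))):
            frames[f] = opening            # hold the mouth open while the syllable is sung
    return frames

score = [Note(0.0, 0.5, "la"), Note(0.5, 0.5, "la"), Note(1.0, 1.0, "laa")]
print(lip_sync_commands(score)[:10])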
Robot and Human Interactive Communication | 2012
Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee
There are many theories about basic emotions, and it is not obvious which emotions are appropriate to use. Also, the faces of robots are designed differently and require different ways to embody emotional expressions. Therefore, in this paper we address the appropriate emotions for the facial expressions of EveR-4 H33, a head system controlled by thirty-three motors. EveR-4 H33 displays her facial expressions for emotions selected from typical basic emotion theories. Audiences at an exhibition then evaluate her facial expressions by playing a game of emotional correction. We analyze the results of the game and determine the appropriate emotions for EveR-4 H33.
Robot and Human Interactive Communication | 2007
Ho Seok Ahn; Jin Young Choi
This paper introduces an emotional behavior decision model for intelligent service robots. An emotional model should make different behavior decisions according to the purpose of the robot. We propose an emotional behavior decision model that can change the character of the emotional model and make different behavior decisions even though the situation and environment remain the same. We define each emotional element, such as reactive dynamics, internal dynamics, emotional dynamics, and behavior dynamics, by state dynamic equations. The proposed system model is a linear system. To add an external stimulus or behavior, only a one-dimensional vector needs to be added to the matrix of external stimuli or behavior dynamics; removing one follows the same procedure, as does changing the reactive, internal, emotional, or behavior dynamics. We implemented the proposed emotional behavior decision model and verified its performance.
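The sketch below shows what a linear state-dynamics update of this kind can look like: the emotion state evolves as a linear function of its previous value and of the external stimuli, so adding a new stimulus only appends a column to the stimulus matrix. The matrices, state layout, and stimulus names are illustrative assumptions, not the paper's actual model.

# Linear state-dynamics update for an emotion state (illustrative sketch).
import numpy as np

# emotion state e = [happiness, sadness, anger]; external stimuli s = [praise, scolding]
A = np.array([[0.9, 0.0, 0.0],      # emotions decay toward neutral (linear dynamics)
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 0.9]])
B = np.array([[ 0.5, -0.2],         # praise raises happiness, scolding raises sadness/anger
              [-0.1,  0.4],
              [ 0.0,  0.3]])

def step(emotion: np.ndarray, stimulus: np.ndarray) -> np.ndarray:
    # e[t+1] = A e[t] + B s[t]; adding a new stimulus only appends a column to B,
    # which is the extensibility property the abstract emphasizes
    return A @ emotion + B @ stimulus

e = np.zeros(3)
for s in ([1.0, 0.0], [0.0, 1.0], [0.0, 0.0]):   # praise, then scolding, then nothing
    e = step(e, np.array(s))
    print(e.round(3))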
Systems, Man and Cybernetics | 2012
Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee
This paper analyzes the emotional expressions of an android head system according to gender and age. We use EveR-4 H33, which is controlled by thirty-three motors for facial expression. EveR-4 H33 is a head system for an android face that consists of three layers: a mechanical layer, an inner cover layer, and an outer cover layer. The facial expressions of robots differ according to the purposes of the robots. In addition, how emotional expressions are perceived differs among humans depending on age, gender, etc. Therefore, we find the appropriate uses of EveR-4 H33 in this paper. EveR-4 H33 shows her facial expressions for several emotions. Audiences at an exhibition then evaluate her facial expressions by playing a game of emotional correction. We analyze the results of the game according to gender and age, and determine appropriate uses of EveR-4 H33.
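As a small illustration of breaking the game results down by gender and age, the sketch below tallies a per-group recognition rate from toy records. The record fields, groups, and the accuracy metric are assumptions about how such an analysis could look, not the study's data or method.

# Grouping emotion-recognition results by gender and age (illustrative sketch).
from collections import defaultdict

# each record: (gender, age_group, shown_emotion, emotion_chosen_by_audience_member)
records = [
    ("F", "20s", "joy", "joy"),
    ("M", "20s", "joy", "surprise"),
    ("F", "40s", "anger", "anger"),
    ("M", "40s", "anger", "sadness"),
    ("F", "20s", "sadness", "sadness"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for gender, age, shown, chosen in records:
    key = (gender, age)
    totals[key] += 1
    hits[key] += int(shown == chosen)     # correct recognition of the displayed emotion

for key in sorted(totals):
    rate = hits[key] / totals[key]
    print(f"gender={key[0]} age={key[1]}: recognition rate {rate:.0%}")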
Artificial Life and Robotics | 2011
Dongwoon Choi; Dong-Wook Lee; Duk Yeon Lee; Ho Seok Ahn; Ho-Gil Lee
In this article, an android robot head is proposed for stage performances. As is well known, an android robot is a type of humanoid robot considered to be more human-like than many other types. An android robot has human-like joint structures and artificial skin, and so is the type of robot closest to a human in appearance. To date, several android robots have been developed, but most of them have been made for research purposes or exhibitions. In this article, attention is drawn to the more commercial value of an android robot, especially in the acting field. EveR-3, the android robot described here, has already been used in commercial plays in the theater, and through these it has been possible to learn which features of an android robot are necessary for it to function as an actor. A new 9-DOF head has been developed for stage performances. The number of DOF is reduced and larger motors are used to produce exaggerated expressions, because exaggerated expressions are more important on stage than detailed, complex ones. LED lights are installed in both cheeks to emphasize emotional expressions through changes in color, much as make-up is used to achieve a similar effect on human faces. From these trials, a new head that is more suitable for stage performances has been developed.