Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dongwoon Choi is active.

Publication


Featured research published by Dongwoon Choi.


International Journal of Humanoid Robotics | 2013

Development of an incarnate announcing robot system using emotional interaction with humans

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Ho-Gil Lee; Moon-Hong Baeg

The human-like appearance and movement of social robots are important in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. EveR-1 is not bipedal; it sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicon jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions decided by a personality model. EveR-1 performs a guidance-service role in an exhibition, narrates fairy tales, and holds simple conversations with humans.


Conference of the Industrial Electronics Society | 2011

Development of an android for singing with facial expression

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk Yeon Lee; Man Hong Hur; Ho-Gil Lee; Woong Hee Shon

This paper presents the hardware mechanism and software architecture of a singer robot system called EveR-2. EveR-2 is an android robot platform that has a human-like appearance and shows its emotions through facial expressions and gestures. The skin of the head, arms, hands and legs is made of silicon jelly to give a human-like texture. EveR-2 has sixty-two degrees of freedom in the head, neck, arms, hands, torso, and legs. It sings a song by reading a music score with lip synchronization. EveR-2 is the first android to have made her debut and to have appeared in a music video.


Robot and Human Interactive Communication | 2012

Appropriate emotions for facial expressions of 33-DOFs android head EveR-4 H33

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee

There are many theories about basic emotions, and it is not obvious which emotions are appropriate to use. Moreover, robot faces are designed differently and require different ways to embody emotional expressions. Therefore, in this paper we address the appropriate emotions for facial expressions of EveR-4 H33, a head system controlled by thirty-three motors. EveR-4 H33 displays facial expressions for certain emotions selected from typical basic-emotion theories. Audiences at an exhibition then evaluate her facial expressions by playing an emotion-guessing game. We analyze the results of the game and decide on appropriate emotions for EveR-4 H33.
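
The abstract does not give the scoring rule, but the evaluation it describes amounts to tallying how often visitors identify each displayed emotion correctly. A minimal Python sketch, where the emotion labels, the game log, and the 60% acceptance threshold are all illustrative assumptions rather than values from the paper:

```python
from collections import Counter

# Hypothetical game log: (emotion displayed by EveR-4 H33, emotion guessed by a visitor).
responses = [
    ("happiness", "happiness"), ("anger", "disgust"),
    ("sadness", "sadness"),     ("surprise", "surprise"),
    ("anger", "anger"),         ("fear", "surprise"),
]

shown   = Counter(displayed for displayed, _ in responses)
correct = Counter(displayed for displayed, guessed in responses if displayed == guessed)

# Keep only the emotions that visitors recognised reliably (the 60% threshold is an assumption).
THRESHOLD = 0.6
appropriate = [e for e in shown if correct[e] / shown[e] >= THRESHOLD]
print(appropriate)   # ['happiness', 'sadness', 'surprise']
```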


Systems, Man and Cybernetics | 2012

Uses of facial expressions of android head system according to gender and age

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee

This paper analyzes the emotional expressions of an android head system according to gender and age. We use EveR-4 H33, which is controlled by thirty-three motors for facial expression. EveR-4 H33 is a head system for an android face that consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. The facial expressions a robot needs differ with the robot's purpose. In addition, how humans perceive emotional expressions differs with age, gender, and other factors. Therefore, in this paper we identify appropriate uses of EveR-4 H33. EveR-4 H33 shows facial expressions for several emotions. Audiences at an exhibition then evaluate her facial expressions by playing an emotion-guessing game. We analyze the results of the game according to gender and age and decide on appropriate uses of EveR-4 H33.


Artificial Life and Robotics | 2011

Design of an android robot head for stage performances

Dongwoon Choi; Dong-Wook Lee; Duk Yeon Lee; Ho Seok Ahn; Ho-Gil Lee

In this article, an android robot head is proposed for stage performances. As is well known, an android robot is a type of humanoid robot considered more human-like than most other types. An android robot has human-like joint structures and artificial skin, and so is the robot closest to a human in appearance. To date, several android robots have been developed, but most of them have been made for research purposes or exhibitions. In this article, attention is drawn to the more commercial value of an android robot, especially in the acting field. EveR-3, the android robot described here, has already been used in commercial plays in the theater, and through these it has been possible to learn which features an android robot needs in order to function as an actor. A new 9-DOF head has been developed for stage performances. The DOF are reduced so that larger motors can be used to make exaggerated expressions, because on stage exaggerated expressions matter more than detailed, complex ones. LED lights are installed in both cheeks to emphasize emotional expressions through changes in color, much as make-up is used to achieve a similar effect on human faces. From these trials, a new head that is more suitable for stage performances has been developed.
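
As a rough illustration of the cheek-LED idea, the sketch below maps emotions to RGB colours; the emotion set and colour values are assumptions for illustration, not the EveR-3 design.

```python
# Hypothetical emotion-to-cheek-colour table for a stage robot head; the emotion set and
# RGB values are illustrative only, not taken from the EveR-3 design.
CHEEK_COLOURS = {
    "joy":     (255, 180, 120),   # warm glow
    "anger":   (255,   0,   0),   # strong red
    "sadness": ( 60,  60, 200),   # cool blue
    "shyness": (255, 105, 180),   # pink blush
}

def cheek_colour(emotion: str) -> tuple:
    """Return an RGB triple for the cheek LEDs, defaulting to off."""
    return CHEEK_COLOURS.get(emotion, (0, 0, 0))

print(cheek_colour("anger"))   # (255, 0, 0)
```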


International Conference on Social Robotics | 2012

Difference of efficiency in human-robot interaction according to condition of experimental environment

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee

Human-robot interaction is the most important function of social robot systems. Android robot systems, which have a human-like appearance, are used for interaction with humans because they can show their emotions in a way similar to humans. Many such robot systems have been developed, and their efficiency and performance have been verified by analyzing experimental results collected with questionnaires. However, questionnaire results can differ depending on many conditions. In this paper, we analyze how questionnaire results differ by comparing three groups: the first group gains benefits through competition, the second group gains benefits without competition, and the last group gains nothing. For these experiments, the android head system EveR-4 H33, which has 33 motors inside the head to show facial expressions, is used.
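
The comparison the abstract describes comes down to contrasting questionnaire ratings across the three incentive conditions. A minimal sketch with made-up 5-point ratings (the numbers and group labels are assumptions, not the paper's data):

```python
from statistics import mean

# Hypothetical 5-point questionnaire ratings for the three conditions described above
# (the values are illustrative, not the paper's data).
ratings = {
    "benefit with competition":    [4, 5, 4, 3, 5],
    "benefit without competition": [3, 4, 4, 3, 4],
    "no benefit":                  [2, 3, 3, 2, 3],
}

for condition, values in ratings.items():
    print(f"{condition}: mean rating {mean(values):.2f} over {len(values)} subjects")
```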


IEEE-RAS International Conference on Humanoid Robots | 2012

Designing of android head system by applying facial muscle mechanism of humans

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee

The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin, and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor; moreover, moving just one motor can damage other motors or the skin. To solve these problems, a facial muscle control method that drives the motors in a correlated manner is required. We design such a facial muscle control method and apply it to EveR-4 H33. We develop the actress robot EveR-4A by mounting EveR-4 H33 on an upper body with 24 degrees of freedom and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.
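
The paper's exact coupling is not given here, but one simple way to realise "correlated" motor control is a linear mapping from a few abstract muscle activations to all head motors, followed by clamping to protect the skin and hardware. A minimal sketch, where the muscle set, coupling weights, and command limits are illustrative assumptions:

```python
import numpy as np

# Illustrative sizes and weights; the real EveR-4 H33 coupling is not published in this abstract.
N_MOTORS, N_MUSCLES = 33, 5

# Each column maps one abstract "muscle" activation (0..1) to offsets on every motor.
coupling = np.zeros((N_MOTORS, N_MUSCLES))
coupling[0:3, 0] = [0.8, 0.5, 0.2]   # e.g. a "smile" muscle pulls three lip-corner motors together
coupling[3:5, 1] = [0.6, 0.6]        # e.g. a "brow raise" muscle drives two brow motors

MOTOR_MIN, MOTOR_MAX = -1.0, 1.0     # normalised command range, protecting skin and hardware

def motor_commands(muscle_activations: np.ndarray) -> np.ndarray:
    """Convert muscle activations into clamped commands for all motors at once."""
    return np.clip(coupling @ muscle_activations, MOTOR_MIN, MOTOR_MAX)

print(motor_commands(np.array([1.0, 0.5, 0.0, 0.0, 0.0]))[:6])
```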


IAS (1) | 2013

Actor Studio: Development of User Friendly Action Editing System for Cultural Performance Robots

Ho Seok Ahn; Dong-Wook Lee; Dongwoon Choi; Duk-Yeon Lee; Manhong Hur; Ho-Gil Lee

Social robots are developed to interact socially with humans in various ways. One of these ways is sharing cultural performances such as singing, dancing, and acting. Various technologies are required to develop cultural robots, and one of the most important is content generation. We develop a user-friendly action editing system with a graphical user interface. It is a convenient way to generate long performance content according to musical notes or scenarios. Using this method, we generate action data for several performances with a real robot system in the theater.
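
The Actor Studio data format is not described in the abstract, but an action editor driven by musical notes suggests beat-indexed keyframes converted to a time schedule for playback. A minimal sketch under that assumption (the field names and the bow motion are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    beat: float           # position in the musical score, in beats
    pose: dict            # joint name -> target angle in degrees (hypothetical naming)

def to_schedule(keyframes, bpm):
    """Convert beat-indexed keyframes into (time in seconds, pose) pairs for playback."""
    seconds_per_beat = 60.0 / bpm
    return [(kf.beat * seconds_per_beat, kf.pose) for kf in keyframes]

# A hypothetical two-second bow at 120 beats per minute.
bow = [
    Keyframe(0.0, {"neck_pitch": 0.0,  "torso_pitch": 0.0}),
    Keyframe(2.0, {"neck_pitch": 20.0, "torso_pitch": 15.0}),
    Keyframe(4.0, {"neck_pitch": 0.0,  "torso_pitch": 0.0}),
]
print(to_schedule(bow, bpm=120))
```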


Archive | 2012

Design of 5 D.O.F Robot Hand with an Artificial Skin for an Android Robot

Dongwoon Choi; Dong-Wook Lee; Woong-Hee Shon; Ho-Gil Lee

Robot hands have been studied extensively and are considered one of the most complicated areas of robotics. The difficulty stems from the complicated structure and the many functions of the hand. Robotic hands fall into two major categories: hands for industrial operation and experimental hands that imitate the human hand. Most industrial robotic hands are 1-DOF or 2-DOF grippers designed for precise, repetitive operations. For human-like robotic hands, the main concerns are how closely the hand resembles a human hand in shape and how closely it can operate like one. Most human-like robotic hands have three to five fingers, and their shape, size and functions are designed on the basis of the human hand.

For a long time, the major area of robotic-hand research has been industrial, but human-like robotic hands are becoming more and more important, because the focus of robotics is shifting from industrial fields to human-friendly environments such as homes, offices, hospitals and schools. In brief, the mainstream of robotics is changing from industrial robots to service robots. One important factor for service robots in human-friendly environments is their appearance. In general, people feel friendlier and more comfortable toward robots that look like them, so the appearance of service robots should resemble humans, and their hands should imitate human hands as well. For this reason, there has been much research on human-like robotic hands.

Haruhisa Kawasaki developed the Gifu Hand 2, which has five independent fingers. With 16 DOF and 20 joints, it is one of the most complicated hands; it can operate every joint of each finger and, with attached tactile sensors, can perform delicate grips. However, it is too big to install on a human-sized robot (Haruhisa Kawasaki et al., 2002). F. Lotti used spring joints and tendons to build the UBH 3. It has five fingers, human-like skin and, like the Gifu Hand, independent joints on each finger. The characteristic of this hand is the use of springs in its joints, which keeps its structure simple, but it uses too many motors, and they are located elsewhere, so the hand is not well suited to a humanoid robot (F. Lotti et al., 2005). Kenji Kaneko developed a human-sized multi-fingered hand. It has 13 DOF, complicated fingers, and all devices located inside the hand, but it has four fingers and the back of the hand is too big, like a glove. For this reason, the shape of this hand is somewhat different from a human hand, so


International Conference on Ubiquitous Robots and Ambient Intelligence | 2016

Evaluation of a Korean Lip-sync system for an android robot

Hyun-Jun Hyung; Byeong-Kyu Ahn; Dongwoon Choi; Duk-Yeon Lee; Dong-Wook Lee

Lip-syncing by android robots that resemble people is essential to accurately convey their intentions to humans. In this paper, we develop a Korean lip-sync system under the assumption that people can guess a word or phrase from watching a lip-syncing robot without sound. The mouth shapes for the 10 single vowels were generated based on the Korean single-vowel triangle chart. The robot can lip-sync a variety of words and sentences in real time using these 10 mouth shapes. We performed experiments recording the robot's mouth and an announcer reading the same text. We conducted a survey in which humans guessed what the female announcer and the robot were saying, and compared the percentage of correct answers in each case. We also surveyed the robot's mouth shapes and lip-sync timing, assessing subjects' reactions on 5-point Likert scales. The results indicate that the percentage of correct guesses from the robot's mouth shapes was one third of that from the human announcer. Subjects assessed the robot's mouth shapes and lip-sync timing as somewhat unnatural. We expect that android robot lip-syncing currently uses mouth shapes that are perceived as lying in the uncanny valley when subjects try to interpret them. Thus, we will present more natural mouth shapes, add mouth shapes for diphthongs, and develop mouth shapes that vary with voice volume, improving the rate of lip-sync recognition.
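
The core of such a system is a lookup from each Korean monophthong to a mouth-shape target. A minimal sketch follows: the ten monophthongs are the standard set, while the (jaw opening, lip rounding) values are illustrative assumptions, not the parameters used on the robot.

```python
# The ten Korean monophthongs are the standard set; the (jaw opening, lip rounding)
# targets are illustrative assumptions, not the parameters used on the robot.
MOUTH_SHAPES = {
    "ㅏ": (0.9, 0.1), "ㅐ": (0.7, 0.1), "ㅓ": (0.7, 0.3), "ㅔ": (0.5, 0.1),
    "ㅗ": (0.5, 0.8), "ㅚ": (0.5, 0.6), "ㅜ": (0.3, 0.9), "ㅟ": (0.3, 0.7),
    "ㅡ": (0.2, 0.2), "ㅣ": (0.2, 0.0),
}

def viseme_sequence(vowels):
    """Return (jaw opening, lip rounding) targets for each vowel, skipping unknown symbols."""
    return [MOUTH_SHAPES[v] for v in vowels if v in MOUTH_SHAPES]

print(viseme_sequence("ㅏㅣㅗ"))   # [(0.9, 0.1), (0.2, 0.0), (0.5, 0.8)]
```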

Collaboration


Dive into Dongwoon Choi's collaborations.

Top Co-Authors

Ho Seok Ahn

University of Auckland

Hyun-Jun Hyung

Korea University of Science and Technology