Publication


Featured research published by Woo Hyun Kim.


Human-Robot Interaction | 2013

LMA based emotional motion representation using RGB-D camera

Woo Hyun Kim; Jeong Woo Park; Won Hyong Lee; Hui Sung Lee; Myung Jin Chung

In this paper, an emotional motion representation is proposed for human-robot interaction (HRI). The proposed representation is based on Laban Movement Analysis (LMA) and the trajectories of 3-dimensional whole-body joint positions captured with an RGB-D camera such as the Microsoft Kinect. The experimental results show that the proposed method distinguishes two types of human emotional motion well.
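The abstract does not list the exact feature set, so purely as an illustration, LMA-style effort descriptors (Weight, Time, Flow) could be derived from Kinect joint trajectories along the lines of the Python sketch below; the feature definitions, the 30 fps frame rate, and the dictionary keys are assumptions rather than the authors' implementation.

    import numpy as np

    def lma_effort_features(joints, dt=1.0 / 30.0):
        """Compute simple LMA-inspired effort features from joint trajectories.

        joints: array of shape (T, J, 3) -- T frames, J joints, 3-D positions
        dt:     frame interval in seconds (a 30 fps Kinect stream is assumed)
        """
        velocity = np.diff(joints, axis=0) / dt           # (T-1, J, 3)
        accel = np.diff(velocity, axis=0) / dt            # (T-2, J, 3)
        speed = np.linalg.norm(velocity, axis=-1)         # per-joint speed
        jerk = np.linalg.norm(np.diff(accel, axis=0) / dt, axis=-1)

        return {
            "weight": float(speed.mean()),   # overall movement energy
            "time": float(speed.std()),      # sudden vs. sustained movement
            "flow": float(jerk.mean()),      # free vs. bound movement
        }

Feature vectors of this kind could then be fed to any off-the-shelf classifier to separate the two emotional motion types mentioned in the abstract.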


Robotics and Biomimetics | 2009

Lifelike facial expression of mascot-type robot based on emotional boundaries

Jeong Woo Park; Woo Hyun Kim; Won Hyong Lee; Won Hwa Kim; Myung Jin Chung

Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, much like human-human interactions, so a robot's emotional expression is very important to humans. This is particularly true of facial expressions, which play an important role among the non-verbal forms of communication. In this paper, we introduce a method of creating lifelike facial expressions in robots using the variation of affect values, which constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.
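The abstract does not spell out the boundary formulation; purely as an illustration, one might picture each basic emotion owning a bounded region in a 2-D valence-arousal affect space, with expression intensity varying continuously inside that region. All centres, radii, and the fall-off rule below are assumptions, not the published method.

    import numpy as np

    # Hypothetical emotional boundaries: each emotion owns a centre and a radius
    # in a 2-D valence-arousal affect space (values are illustrative only).
    EMOTION_BOUNDARIES = {
        "happiness": {"centre": np.array([0.8, 0.5]), "radius": 0.4},
        "sadness":   {"centre": np.array([-0.7, -0.4]), "radius": 0.4},
        "anger":     {"centre": np.array([-0.6, 0.7]), "radius": 0.3},
    }

    def expression_intensity(affect, emotion):
        """Map an affect point to an expression intensity in [0, 1].

        Intensity is 1 at the emotion's centre and falls off linearly to 0 at
        its boundary, so small variations of the affect values produce smoothly
        varying (lifelike) facial expressions.
        """
        bound = EMOTION_BOUNDARIES[emotion]
        distance = np.linalg.norm(np.asarray(affect) - bound["centre"])
        return max(0.0, 1.0 - distance / bound["radius"])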


Human-Robot Interaction | 2013

Interactive facial robot system on a smart device: enhanced touch screen input recognition and robot's reactive facial expression

Won Hyong Lee; Jeong Woo Park; Woo Hyun Kim; Myung Jin Chung

This paper suggests an interactive facial robot system on a smart device that has a touch screen and a built-in microphone. The recognition of touch inputs is enhanced by analyzing input patterns together with the built-in microphone. The recognized results of the input procedure are converted into emotional states of the system, and these emotional states are then reactively expressed by a facial simulator displayed on the device's touch screen. The proposed facial system can therefore be implemented on a single smart device in which the input sensors and the visual output share the same display component.
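The abstract leaves the touch-gesture categories unspecified; the sketch below only illustrates the overall flow of converting recognized touch inputs into an emotional state that a facial simulator could express. The gesture names, affect deltas, and the (valence, arousal) state are assumptions.

    # Illustrative mapping from recognized touch gestures to changes in a simple
    # (valence, arousal) emotional state.
    TOUCH_TO_AFFECT = {
        "stroke": (+0.3, -0.1),   # gentle stroking -> more positive, calmer
        "poke":   (-0.1, +0.2),   # poking -> slightly negative, more aroused
        "hit":    (-0.4, +0.4),   # hitting (loud microphone peak) -> negative, aroused
    }

    def update_emotion(state, gesture, decay=0.9):
        """Blend the previous emotional state with the affect change of a gesture."""
        valence, arousal = state
        dv, da = TOUCH_TO_AFFECT.get(gesture, (0.0, 0.0))
        valence = max(-1.0, min(1.0, decay * valence + dv))
        arousal = max(-1.0, min(1.0, decay * arousal + da))
        return (valence, arousal)

The resulting state would then drive the reactive facial expression rendered on the same touch screen.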


Human-centric Computing and Information Sciences | 2010

Artificial Emotion Generation Based on Personality, Mood, and Emotion for Life-Like Facial Expressions of Robots

Jeong Woo Park; Woo Hyun Kim; Won Hyong Lee; Myung Jin Chung

The importance of a robot's emotional expressions cannot be overemphasized as robots step into humans' daily lives, so believable and socially acceptable emotional expressions are essential. For such human-like emotional expression, we have proposed an emotion generation model that considers personality, mood, and the history of the robot's emotions. The personality module is based on the Big Five model (OCEAN model, Five Factor Model); the mood module has a single dimension, such as good or bad; and the emotion module uses the six basic emotions defined by Ekman. Unlike most previous studies, the proposed emotion generation model is integrated with the Linear Dynamic Affect Expression Model (LDAEM), an emotional expression model that can produce facial expressions similar to those of humans, so both the emotional state and the expressions of the robot can change dynamically.
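The abstract names the building blocks (OCEAN personality, one-dimensional mood, Ekman's six emotions, and emotion history) but not the update equations. A minimal sketch of how such a stack might be wired together is given below; the weights, the blending rule, and the mood-drift rule are assumptions, not the published model or the LDAEM.

    from dataclasses import dataclass, field

    EKMAN_EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

    @dataclass
    class EmotionModel:
        # Big Five (OCEAN) personality traits in [0, 1]
        personality: dict = field(default_factory=lambda: {
            "openness": 0.6, "conscientiousness": 0.5, "extraversion": 0.7,
            "agreeableness": 0.8, "neuroticism": 0.3})
        mood: float = 0.0        # one dimension: bad (-1) .. good (+1)
        emotions: dict = field(default_factory=lambda: {e: 0.0 for e in EKMAN_EMOTIONS})

        def update(self, stimulus, mood_gain=0.1, decay=0.8):
            """stimulus: dict of raw emotion intensities from an appraisal step."""
            for name in EKMAN_EMOTIONS:
                raw = stimulus.get(name, 0.0)
                # Personality and mood bias the raw appraisal (illustrative rule):
                # a neurotic robot amplifies negative emotions, good mood damps them.
                if name in ("sadness", "anger", "fear", "disgust"):
                    raw *= 1.0 + self.personality["neuroticism"] - mood_gain * self.mood
                else:
                    raw *= 1.0 + self.personality["extraversion"] + mood_gain * self.mood
                # History: blend the decayed previous value with the new input.
                self.emotions[name] = decay * self.emotions[name] + (1 - decay) * raw
            # Mood drifts toward the balance of positive vs. negative emotion.
            self.mood += mood_gain * (self.emotions["happiness"] - self.emotions["sadness"])
            self.mood = max(-1.0, min(1.0, self.mood))
            return self.emotions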


Computational Intelligence in Robotics and Automation | 2009

Stochastic approach on a simplified OCC model for uncertainty and believability

Won Hwa Kim; Jeong Woo Park; Won Hyong Lee; Woo Hyun Kim; Myung Jin Chung

As robots step into humans' daily lives, interaction and communication between humans and robots are becoming essential. For this social interaction with humans, we propose an emotion generation model that considers simplicity, believability, and uncertainty. First, the OCC model is simplified, and then a stochastic approach to the emotion decision algorithm is applied for believability and uncertainty. The proposed model is implemented on a 3D robot expression simulator that can express emotions through its facial expression, gestures, LEDs, and so on. A demo of the model is provided as a result.
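The abstract does not specify the stochastic decision rule; one common way to make an emotion choice uncertain yet believable is to sample from a softmax over the appraisal scores rather than always taking the maximum, as sketched below. The softmax rule and the temperature parameter are assumptions, not the published algorithm.

    import numpy as np

    def sample_emotion(appraisal_scores, temperature=0.5, rng=None):
        """Pick an emotion stochastically instead of always taking the arg-max.

        appraisal_scores: dict mapping emotion name -> appraisal value from a
                          (simplified) OCC-style evaluation of the current event.
        temperature:      higher values make the choice more random (uncertain).
        """
        rng = rng or np.random.default_rng()
        names = list(appraisal_scores)
        scores = np.array([appraisal_scores[n] for n in names], dtype=float)
        probs = np.exp(scores / temperature)
        probs /= probs.sum()
        return rng.choice(names, p=probs)

    # Example: "joy" is the most likely outcome but not guaranteed, so the robot
    # does not react identically to identical events.
    print(sample_emotion({"joy": 0.9, "distress": 0.2, "hope": 0.5}))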


Robotics and Biomimetics | 2011

How to completely use the PAD space for socially interactive robots

Jeong Woo Park; Woo Hyun Kim; Won Hyong Lee; Ju Chang Kim; Myung Jin Chung

Human-robot interaction (HRI) is becoming more complex and difficult due to the growing number of capabilities demonstrated by socially interactive robots. From this perspective, it is necessary to have simple and general methods that enable robots to interact with human beings at an emotional level. Therefore, this paper suggests a method that can efficiently use the PAD emotion space to generate artificial emotions based on a categorization of emotions using cluster analysis. After clustering, emotional appraisals can be extracted for PAD input vectors using the results of the categorization; we can then generate blended emotions and calculate an intensity for each emotion. We evaluate the categorization results using the Davies-Bouldin index and the patterns of emotions generated from the categorization. Furthermore, the generated emotions are expressed by a physical mascot-type head robot using our emotion expression model. We also show how differently emotions are generated depending on whether optimization is used.
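The abstract specifies cluster analysis of the PAD space evaluated with the Davies-Bouldin index but not the clustering algorithm itself. A sketch using k-means (an assumed choice) and scikit-learn's davies_bouldin_score, with random stand-in points instead of real emotion prototypes, could look like this:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import davies_bouldin_score

    # pad_vectors: N x 3 points in the PAD (Pleasure, Arousal, Dominance) space,
    # e.g. prototypes of labelled emotion terms (random stand-in data here).
    rng = np.random.default_rng(0)
    pad_vectors = rng.uniform(-1.0, 1.0, size=(200, 3))

    best = None
    for k in range(2, 10):                      # try several cluster counts
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pad_vectors)
        score = davies_bouldin_score(pad_vectors, labels)    # lower is better
        if best is None or score < best[1]:
            best = (k, score)

    print(f"best k = {best[0]}, Davies-Bouldin index = {best[1]:.3f}")

An input PAD vector would then be matched against its nearest clusters to obtain blended emotions and their intensities, as described in the abstract.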


Advanced Robotics and Its Social Impacts | 2010

Hierarchical database based on feature parameters for various multimodal expression generation of robot

Woo Hyun Kim; Jeong Woo Park; Won Hyong Lee; Myung Jin Chung

In this paper, we propose a reliable, diverse, expansible, and usable expression generation system. The proposed system automatically generates synchronized multimodal expressions based on a hierarchical database and context information such as the robot's emotional state and the sentence the robot is trying to say. Compared to the prior system, our system, which is based on feature parameters, makes it much easier to generate new expressions and to modify expressions according to the robot's emotion. The system consists of a sentence module, an emotion module, and an expression module; here we focus only on the robot's expression module. To generate expressions automatically, we use the outputs of the sentence and emotion modules. We have classified robot sentences into 13 types and robot emotions into 3 types. For all 39 categories plus body language, we have constructed a behavior database with 128 expressions. For the reliability and variety of the expressions, expression data were obtained from a professional actor, and a cartoonist was asked to draw sketches of the robot's expressions corresponding to the defined categories.
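The abstract describes a database keyed by 13 sentence types and 3 emotion types (39 categories) holding 128 expressions defined by feature parameters. The toy lookup below only illustrates that structure; the category names, feature parameters, and candidate expressions are invented placeholders.

    import random

    # Illustrative hierarchical behavior database: first level keyed by sentence
    # type, second level by emotion type, leaves holding candidate expressions
    # described by feature parameters.
    BEHAVIOR_DB = {
        "greeting": {
            "positive": [{"gesture": "wave", "face": "smile", "speed": 1.2}],
            "neutral":  [{"gesture": "nod", "face": "neutral", "speed": 1.0}],
            "negative": [{"gesture": "nod", "face": "frown", "speed": 0.8}],
        },
        "question": {
            "positive": [{"gesture": "tilt_head", "face": "smile", "speed": 1.0}],
            "neutral":  [{"gesture": "tilt_head", "face": "neutral", "speed": 1.0}],
            "negative": [{"gesture": "shrug", "face": "frown", "speed": 0.9}],
        },
    }

    def select_expression(sentence_type, emotion_type):
        """Pick one candidate expression for the given sentence/emotion category."""
        candidates = BEHAVIOR_DB[sentence_type][emotion_type]
        return random.choice(candidates)

    print(select_expression("greeting", "positive"))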


Robotics and Biomimetics | 2009

Synchronized multimodal expression generation using editing toolkit for a human-friendly robot

Woo Hyun Kim; Jeong Woo Park; Won Hyong Lee; Won Hwa Kim; Myung Jin Chung

Attempts to put robots to practical use have increased as robots become more human-friendly. In the human-robot interaction field, the main issues are how variedly a robot can express its emotions and how socially acceptable those expressions are. This paper proposes an editing toolkit that allows us to simulate a 3D model robot in order to express the robot's emotions and intentions for human-robot interaction and robot services. Using the editing toolkit, we generated multimodal expressions and formulated a method to combine a few of the primitive expressions. The robot used for simulation has three modalities: facial expression, neck motion, and gestures with two arms. The expressions of each modality were used to generate multimodal expressions, synchronized with timing information obtained from a professional actor. Consequently, for three emotions and thirteen intentions of the robot, we generated a primitive-expression database and synchronized multimodal expressions using the editing toolkit.
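As a rough illustration of synchronized multimodal expression generation, the sketch below merges per-modality keyframe tracks (facial expression, neck motion, arm gesture) onto one timeline; the poses and timestamps merely stand in for the actor-derived timing information mentioned in the abstract.

    # Keyframe tracks: (time in seconds, primitive expression) per modality.
    face_track = [(0.0, "neutral"), (0.4, "smile"), (1.5, "neutral")]
    neck_track = [(0.0, "center"), (0.5, "tilt_right")]
    arm_track = [(0.2, "raise_both"), (1.2, "lower_both")]

    def merge_tracks(**tracks):
        """Merge per-modality keyframe tracks into one time-ordered command list."""
        events = []
        for modality, track in tracks.items():
            events += [(t, modality, pose) for t, pose in track]
        return sorted(events)           # ordered by timestamp

    for t, modality, pose in merge_tracks(face=face_track, neck=neck_track, arms=arm_track):
        print(f"{t:4.1f}s  {modality:5s} -> {pose}")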


IEEE/SICE International Symposium on System Integration | 2012

Automated robot speech gesture generation system based on dialog sentence punctuation mark extraction

Jaewoo Kim; Woo Hyun Kim; Won Hyong Lee; Ju-Hwan Seo; Myung Jin Chung; Dong-Soo Kwon

This paper proposes an automated robot speech-gesture generation system for service and entertainment robots. The system can automatically generate robot beat gestures to accompany speech interaction with humans. Beat gestures have no specific semantic meaning to communicate, but it is commonly believed that they are an essential factor in natural communication. We extracted basic gesture patterns by analyzing videos of human speech and built a correlation model between gesture patterns and the punctuation marks of the speech sentence. This model can select a gesture pattern sequence to produce a robot gesture motion for an arbitrary input speech sentence and synchronize it with the vocal wave file from a TTS (Text-To-Speech) engine.
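The learned correlation model is not given in the abstract; the sketch below only shows the shape of the idea, mapping each punctuation mark in a sentence to a beat-gesture pattern. The pattern names and the punctuation set are assumptions, and real use would also time-align each gesture with the TTS audio.

    import re

    # Illustrative correlation table from punctuation marks to beat-gesture patterns.
    PUNCTUATION_TO_GESTURE = {
        ",": "small_beat",       # short pause -> small hand beat
        ".": "rest_pose",        # sentence end -> return to rest
        "?": "open_palm",        # question -> open-palm beat
        "!": "emphatic_beat",    # exclamation -> larger, faster beat
    }

    def gesture_sequence(sentence):
        """Select a gesture pattern for each punctuation mark in the sentence."""
        marks = re.findall(r"[,.?!]", sentence)
        return [PUNCTUATION_TO_GESTURE[m] for m in marks]

    print(gesture_sequence("Hello, how are you today? I am glad to see you!"))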


Robotics and Biomimetics | 2010

Robot's emotion generation model for transition and diversity using energy, entropy, and homeostasis concepts

Won Hyong Lee; Jeong Woo Park; Woo Hyun Kim; Ju Chang Kim; Myung Jin Chung

This study describes a robot's emotion generation and transition by introducing the concepts of energy, entropy, and homeostasis.

Collaboration


Dive into Woo Hyun Kim's collaborations.

Top Co-Authors

Won Hwa Kim (University of Wisconsin-Madison)

Ji Hoon Joung (Electronics and Telecommunications Research Institute)