Publication


Featured research published by Kyung Hak Hyun.


IEEE/ASME Transactions on Mechatronics | 2009

Improved Emotion Recognition With a Novel Speaker-Independent Feature

Eun Ho Kim; Kyung Hak Hyun; Soo Hyun Kim; Yoon Keun Kwak

Emotion recognition is one of the latest challenges in human-robot interaction. This paper describes the realization of emotional interaction for a Thinking Robot, focusing on speech emotion recognition. In general, speaker-independent systems show a lower accuracy rate than speaker-dependent systems, as emotional feature values depend on the speaker and their gender; however, speaker-independent systems are required for commercial applications. This paper therefore proposes a novel speaker-independent feature, the ratio of the spectral flatness measure to the spectral center (RSS), which varies little across speakers. Gender and emotion are hierarchically classified using the proposed feature (RSS), pitch, energy, and the mel-frequency cepstral coefficients. The proposed system achieves an average recognition rate of 57.2% (± 5.7%) at a 90% confidence interval in the speaker-independent mode.
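
The abstract does not give the paper's exact RSS formula. Assuming RSS is literally the spectral flatness measure (geometric over arithmetic mean of the power spectrum) divided by the spectral center (power-weighted mean frequency), a minimal per-frame sketch in Python might look like this; the function name and the epsilon are illustrative:

    import numpy as np

    def rss_feature(frame, sample_rate):
        # Power spectrum of one windowed speech frame (epsilon avoids log(0)).
        spectrum = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

        # Spectral flatness measure: geometric mean over arithmetic mean.
        sfm = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)

        # Spectral center: power-weighted mean frequency, in Hz.
        center = np.sum(freqs * spectrum) / np.sum(spectrum)

        return sfm / center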


Robot and Human Interactive Communication | 2007

Emotion Interaction System for a Service Robot

Dong-Soo Kwon; Yoon Keun Kwak; Jong C. Park; Myung Jin Chung; Eun-Sook Jee; Kh Park; Hyoung-Rock Kim; Young-Min Kim; Jong-Chan Park; Eun Ho Kim; Kyung Hak Hyun; Hye-Jin Min; Hui Sung Lee; Jeong Woo Park; Su Hun Jo; S.M. Park; Kyung-Won Lee

This paper introduces an emotion interaction system for a service robot. The purpose of emotion interaction systems in service robots is to make people feel that the robot is not a mere machine but a reliable living assistant in the home. The emotion interaction system is composed of emotion recognition, generation, and expression systems. The user's emotion is recognized through multiple modalities, such as voice, dialogue, and touch. The robot's emotion is generated according to a psychological theory of emotion, the OCC (Ortony, Clore, and Collins) model, which draws on the user's emotional state and information about the environment and the robot itself. The generated emotion is expressed through the robot's facial expression, gesture, and musical sound. Because the proposed system comprises all three components necessary for a full emotional interaction cycle, it can be implemented in a real robot system and tested. Even though the multi-modality in emotion recognition and expression is still in its rudimentary stages, the proposed system is shown to be highly useful in service robot applications. Furthermore, the proposed framework can serve as a cornerstone for the design of emotion interaction and generation systems for robots.
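
As a rough illustration of the three-stage cycle the abstract describes (multimodal recognition, OCC-based generation, multimodal expression), here is a deliberately toy sketch; every label, rule, and signature is an assumption for illustration, not the paper's design:

    from dataclasses import dataclass

    @dataclass
    class Percepts:
        voice: str     # emotion label from the speech channel
        dialogue: str  # emotion label from dialogue analysis
        touch: str     # emotion label from touch sensing

    def recognize(p: Percepts) -> str:
        # Fuse the three modalities; here, a simple majority vote.
        cues = [p.voice, p.dialogue, p.touch]
        return max(set(cues), key=cues.count)

    def generate(user_emotion: str, event_desirability: float) -> str:
        # OCC-style appraisal stub: emotion from how desirable the
        # current event is for the robot's goals (invented rule).
        if event_desirability > 0:
            return "joy"
        return "distress" if user_emotion == "sadness" else "neutral"

    def express(robot_emotion: str) -> dict:
        # Route one emotion to the three expression channels.
        return {"face": robot_emotion, "gesture": robot_emotion,
                "sound": robot_emotion}

    # One full interaction cycle:
    mood = generate(recognize(Percepts("sadness", "sadness", "neutral")), -0.5)
    outputs = express(mood)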


Robot and Human Interactive Communication | 2005

Robust emotion recognition feature, frequency range of meaningful signal

Eun Ho Kim; Kyung Hak Hyun; Yoon Keun Kwak

The literature on emotion recognition from voice generally classifies emotions in terms of primary (or basic) emotions, but fails to explain the rationale for this classification. In addition, more exact recognition requires more features for classifying emotion, yet only a few, such as energy, pitch, and tempo, are available. Hence, rather than using primary emotions, we classify emotions into emotional groups that share the same emotional state. We also propose a new feature, the frequency range of meaningful signal, for emotion recognition from voice. In contrast to other features, this feature is independent of the magnitude of the speech signal and is robust in noisy environments. We confirm the usefulness of the proposed feature through recognition experiments.
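
The abstract states only that the feature is magnitude-independent and noise-robust, not how it is computed. One plausible, purely illustrative reading is the width of the band where the peak-normalized power spectrum exceeds a relative threshold; normalizing by the spectral peak is what removes the magnitude dependence. The definition and threshold below are assumptions:

    import numpy as np

    def frequency_range_of_meaningful_signal(frame, sample_rate,
                                             rel_threshold=0.01):
        # Band where the spectrum stays above a fraction of its peak;
        # dividing by the peak makes the value magnitude-independent.
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        mask = spectrum >= rel_threshold * spectrum.max()
        band = freqs[mask]
        return band.max() - band.min()  # bandwidth in Hz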


Robot and Human Interactive Communication | 2007

Speech Emotion Recognition Using Eigen-FFT in Clean and Noisy Environments

Eun Ho Kim; Kyung Hak Hyun; Soo Hyun Kim; Yoon Keun Kwak

The ability to recognize human emotion is one of the basic techniques of human-robot interaction, especially from the point of view of emotional interaction. The purpose of this paper is therefore to describe the realization of emotion recognition from voice; in particular, it describes a speech emotion recognition technique using eigen-FFT. In the field of speech emotion recognition, the recognition rate matters not only in clean but also in noisy environments. Hence, we constructed voice-noise (babble-noise) data and performed recognition tests in both clean and noisy environments using the eigen-FFT. In the experiments, we achieved an accuracy of 90.1 ± 7.7% for four emotions at a 95% confidence interval and compared eigen-FFT with LPC, MFCC, and pitch in the clean environment. In the noisy environments, eigen-FFT displayed superior performance over LPC, MFCC, and pitch.
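
The abstract does not define eigen-FFT precisely; by analogy with eigenfaces, a plausible reading is a PCA projection of FFT magnitude spectra. A hedged sketch under that assumption, using scikit-learn (the basis is fit on training frames and reused for test frames):

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_eigen_fft(train_frames, n_components=20):
        # Learn the principal components of FFT magnitude spectra
        # ("eigen-FFT" read by analogy with eigenfaces).
        spectra = np.abs(np.fft.rfft(train_frames, axis=1))
        return PCA(n_components=n_components).fit(spectra)

    def eigen_fft_features(pca, frames):
        # Project new frames onto the learned spectral basis.
        return pca.transform(np.abs(np.fft.rfft(frames, axis=1)))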


International Conference on Advanced Intelligent Mechatronics | 2007

Obstacle negotiation for the rescue robot with variable single-tracked mechanism

Keun Ha Choi; Hae Kwan Jeong; Kyung Hak Hyun; Hyun Do Choi; Yoon Keun Kwak

The purpose of this paper is to provide a practical introduction to a mobile robot developed for rescue operations. The robot has a variable single-tracked mechanism for its driving part, making it inherently able to overcome indoor obstacles such as stairs. In this research, the robot is given the capacity for obstacle negotiation as a hardware attachment in order to realize autonomous navigation well matched to its original target, rescue operation. The robot chooses among three driving modes: it senses the forward environment once and estimates whether any obstacles are present. Experimental results show that the robot successfully negotiates several obstacles by following the proposed algorithm.
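
As a purely hypothetical illustration of choosing among three driving modes from an estimate of the forward obstacle, with thresholds and mode names invented for the sketch (the paper's actual modes and criteria are not given in the abstract):

    def select_driving_mode(obstacle_height_m, track_reach_m=0.20):
        # Thresholds and mode names are invented for illustration.
        if obstacle_height_m <= 0.02:           # effectively flat floor
            return "flat-driving"
        if obstacle_height_m <= track_reach_m:  # climbable, e.g. a stair
            return "track-raising"
        return "avoidance"                      # too tall to negotiate

    select_driving_mode(0.17)  # -> "track-raising"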


Robot and Human Interactive Communication | 2007

Emotional Feature Extraction Based On Phoneme Information for Speech Emotion Recognition

Kyung Hak Hyun; Eun Ho Kim; Yoon Keun Kwak

Emotion interaction is an important issue in human-robot interaction (HRI). In this paper, we focus on emotion recognition via voice. Emotional speech recognition generally uses speech features that vary as the emotion changes. However, these features are affected not only by emotion information but also by phoneme information, so phoneme information should be considered when the features are extracted. We therefore propose an emotional feature extraction method that takes phoneme information into account. First, we evaluated several features, such as pitch, energy, and formants, that are generally used in emotion recognition. Second, the features were categorized into emotion-reflective features and phoneme-dominant features. Finally, emotion-reflective features were extracted based on the same phoneme information, as classified by the phoneme-dominant features. This method extracts features that are more sensitive to emotion but less sensitive to phoneme.
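
A hedged sketch of the three-step idea: label frames by phoneme class using phoneme-dominant features, then extract and normalize emotion-reflective features within each class so the phoneme-driven variation is factored out. Both helper arguments are hypothetical stand-ins, not the paper's components:

    import numpy as np

    def phoneme_aware_features(frames, phoneme_of, emotion_feature_of):
        # phoneme_of: labels a frame from phoneme-dominant features
        # (e.g. formants); emotion_feature_of: returns a vector of
        # emotion-reflective features (e.g. pitch, energy).
        labels = [phoneme_of(f) for f in frames]
        blocks = []
        for phoneme in sorted(set(labels)):
            group = [f for f, l in zip(frames, labels) if l == phoneme]
            raw = np.array([emotion_feature_of(f) for f in group])
            # Normalize within the phoneme class so the phoneme-driven
            # part of the feature values is factored out.
            blocks.append((raw - raw.mean(axis=0)) / (raw.std(axis=0) + 1e-9))
        return np.vstack(blocks)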


International Conference on Control, Automation and Systems | 2008

Speech emotion recognition separately from voiced and unvoiced sound for emotional interaction robot

Eun Ho Kim; Kyung Hak Hyun; Soo Hyun Kim; Yoon Keun Kwak

The purpose of this paper is to describe the realization of speech emotion recognition. Text-independent mode has generally been used for speech emotion recognition, and previous research has thus discounted the fact that emotion features vary according to the text or phonemes, even though this can distort classification performance. To overcome this distortion, a framework for speech emotion recognition is proposed based on the segmentation of voiced and unvoiced sound. Voiced and unvoiced sounds have different emotion-feature characteristics, since their vocalization differs greatly; hence, they should be considered separately. In this paper, voiced and unvoiced sound classification is performed using spectral flatness measures and the spectral center, and a Gaussian mixture model with five mixtures is employed for emotion recognition. To validate the proposed framework, two systems are compared: the first classifies emotion using whole utterances (the ordinary method), and the second uses segments of voiced and unvoiced sound (the proposed method). The proposed approach yields higher classification rates than previous systems both when using each emotion feature on its own (linear prediction coding (LPC), mel-frequency cepstral coefficients (MFCCs), perceptual linear prediction (PLP), and energy) and when using a combination of all four features.
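
The abstract names the concrete pieces: voiced/unvoiced classification from spectral flatness and spectral center, and a five-mixture Gaussian mixture model per emotion. A minimal sketch with scikit-learn, where the voicing threshold and the exact feature pipeline are assumptions:

    from sklearn.mixture import GaussianMixture

    def is_voiced(sfm, sfm_max=0.3):
        # Illustrative decision: voiced frames tend to have a low
        # spectral flatness measure. The threshold is an assumption.
        return sfm < sfm_max

    def train_models(features_by_emotion):
        # One five-mixture GMM per emotion, as in the paper; the caller
        # trains one model set on voiced and one on unvoiced features.
        return {emotion: GaussianMixture(n_components=5).fit(feats)
                for emotion, feats in features_by_emotion.items()}

    def recognize(voiced_models, unvoiced_models, voiced_feats, unvoiced_feats):
        # Sum the log-likelihoods from both voicing streams and pick
        # the best-scoring emotion.
        def score(emotion):
            return (voiced_models[emotion].score(voiced_feats)
                    + unvoiced_models[emotion].score(unvoiced_feats))
        return max(voiced_models, key=score)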


International Conference on Advanced Intelligent Mechatronics | 2007

Emotion interactive robot focused on speaker-independent emotion recognition

Eun Ho Kim; Kyung Hak Hyun; Soo Hyun Kim; Yoon Keun Kwak

This paper describes the realization of emotional interaction for a thinking robot (T-ROT), focusing on speech emotion recognition. In the field of speech emotion recognition, most researchers work in a speaker-dependent mode; however, a speaker-independent system is needed for commercial use. Hence, this paper proposes a new feature, the ratio of the spectral flatness measure to the spectral center (RSS), which varies little across speakers and is thus suited to constructing a speaker-independent system. Using the mel-frequency cepstral coefficients and RSS, an average recognition rate of 59.0% (± 6.6%) at a 90% confidence interval is achieved in a speaker-independent, gender-dependent mode.
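
Given an RSS implementation like the sketch shown earlier, the gender-dependent, speaker-independent mode could be wired up as in the following sketch, with placeholder scikit-learn-style classifiers that are not the paper's actual models:

    def recognize_emotion(x, gender_clf, emotion_clf_by_gender):
        # Classify gender first, then apply the gender-specific
        # emotion model to the same feature vector.
        gender = gender_clf.predict([x])[0]
        return emotion_clf_by_gender[gender].predict([x])[0]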


Robot and Human Interactive Communication | 2006

Improvement of Emotion Recognition from Voice by Separating Obstruents

Eun Ho Kim; Kyung Hak Hyun; Yoon Keun Kwak

Previous researchers in the area of emotion recognition have classified emotion from the whole voice, without considering that emotion features vary according to the phoneme. Hence, in the present work, we study the characteristics of phonemes in emotion features. Based on the results, we define the obstruents effect, a negative effect resulting from increased feature values. We then recognize emotion from the voice with obstruents separated out, rather than from the whole voice. By separating obstruents, we improve the emotion recognition rate by about 4.3%.
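
A sketch of the separation step under one common assumption: obstruents (noisy consonants) can be flagged by a high zero-crossing rate and excluded before emotion features are computed, so their inflated feature values (the obstruents effect) do not skew recognition. The detector and threshold are illustrative, not the paper's method:

    import numpy as np

    def zero_crossing_rate(frame):
        # Approximate sign changes per sample.
        return np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0

    def drop_obstruent_frames(frames, zcr_max=0.25):
        # Keep only frames below the (assumed) obstruent ZCR threshold.
        return [f for f in frames if zero_crossing_rate(f) <= zcr_max]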


International Conference on Neural Networks and Brain | 2005

Optimal Design of Radial Basis Function using Taguchi Method

Eun Ho Kim; Kyung Hak Hyun; Yoon Keun Kwak

Development of radial basis function networks (RBFNs) can be divided into two stages: first, learning the centres and widths of the radial basis functions, and second, learning the connection weights. The performance of an RBFN depends entirely on these two learning algorithms. Hence, in this paper, we propose a new algorithm in which the centres and widths of the radial basis functions for regression problems are selected using the Taguchi method. Function estimation experiments are conducted to illustrate the performance of the proposed algorithm.
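
A compact sketch of the two-stage fit: a small set of candidate widths stands in for the Taguchi orthogonal-array experiment (the real method also covers centre selection and uses signal-to-noise ratios), and the connection weights are solved by linear least squares. The centres are taken as given, e.g. chosen by the caller:

    import numpy as np

    def rbf_design_matrix(X, centres, width):
        # Gaussian RBF activations for every (sample, centre) pair.
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbfn(X, y, centres, candidate_widths):
        # Stage 1: pick the width with the lowest training error
        # (stand-in for the Taguchi selection); stage 2: solve the
        # connection weights by least squares for that width.
        best = None
        for width in candidate_widths:
            Phi = rbf_design_matrix(X, centres, width)
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            mse = np.mean((Phi @ w - y) ** 2)
            if best is None or mse < best[0]:
                best = (mse, width, w)
        return best[1], best[2]  # selected width, learned weights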
