Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jonghwa Kim is active.

Publication


Featured research published by Jonghwa Kim.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Emotion recognition based on physiological changes in music listening

Jonghwa Kim; Elisabeth André

Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
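As a rough illustration of the EMDC idea described in the abstract, the sketch below classifies arousal first and then valence within the predicted arousal branch. It is a minimal sketch, assuming a 2-D NumPy feature matrix and a four-quadrant label encoding chosen for the example; scikit-learn's LinearDiscriminantAnalysis stands in for the paper's pLDA and is not the authors' implementation.

```python
# Hypothetical sketch of a two-stage dichotomous classifier over the 2D
# valence-arousal model, in the spirit of the EMDC scheme. The label
# encoding and classifier choice are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Quadrant labels: 0 = pos/high, 1 = neg/high, 2 = neg/low, 3 = pos/low
AROUSAL = {0: 1, 1: 1, 2: 0, 3: 0}   # 1 = high arousal, 0 = low arousal
VALENCE = {0: 1, 1: 0, 2: 0, 3: 1}   # 1 = positive, 0 = negative

class EMDCSketch:
    """First decide arousal, then valence within the predicted arousal branch."""

    def fit(self, X, y):
        # X: 2-D NumPy array of features, y: iterable of quadrant labels 0..3
        a = np.array([AROUSAL[c] for c in y])
        v = np.array([VALENCE[c] for c in y])
        self.arousal_clf = LinearDiscriminantAnalysis().fit(X, a)
        # One valence classifier per arousal branch, trained on that branch only.
        self.valence_clf = {
            branch: LinearDiscriminantAnalysis().fit(X[a == branch], v[a == branch])
            for branch in (0, 1)
        }
        return self

    def predict(self, X):
        a_hat = self.arousal_clf.predict(X)
        y_hat = np.empty(len(X), dtype=int)
        lut = {(1, 1): 0, (1, 0): 1, (0, 0): 2, (0, 1): 3}  # (arousal, valence) -> quadrant
        for branch in (0, 1):
            idx = a_hat == branch
            if idx.any():
                v_hat = self.valence_clf[branch].predict(X[idx])
                y_hat[idx] = [lut[(branch, v)] for v in v_hat]
        return y_hat
```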


International Conference on Multimedia and Expo | 2005

From Physiological Signals to Emotions: Implementing and Comparing Selected Methods for Feature Extraction and Classification

Johannes Wagner; Jonghwa Kim; Elisabeth André

Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system, including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification.
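For readers who want a concrete picture of such a pipeline, here is a minimal scikit-learn sketch, assuming a precomputed feature matrix X and labels y. The feature-ranking criterion (ANOVA F-score), the number of retained features, and the classifier hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Hypothetical pipeline sketch: scale features, keep the top-k most
# discriminative ones, then compare three standard classifiers by
# cross-validated accuracy.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, k_features=30):
    """Return mean cross-validated accuracy for LDA, k-NN, and an MLP."""
    classifiers = {
        "lda": LinearDiscriminantAnalysis(),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    }
    scores = {}
    for name, clf in classifiers.items():
        pipe = Pipeline([
            ("scale", StandardScaler()),
            ("select", SelectKBest(f_classif, k=k_features)),  # keep top-k features
            ("clf", clf),
        ])
        scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    return scores
```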


Intelligent User Interfaces | 2008

EMG-based hand gesture recognition for realtime biosignal interfacing

Jonghwa Kim; Stephan Mastnik; Elisabeth André

In this paper, the development of an electromyogram (EMG) based interface for hand gesture recognition is presented. To recognize control signs in the gestures, we used a single-channel EMG sensor positioned on the inside of the forearm. In addition to common statistical features such as variance, mean value, and standard deviation, we also calculated features from the time and frequency domain, including Fourier variance, region length, zero-crossings, occurrences, etc. To realize real-time classification with acceptable recognition accuracy, we combined two simple linear classifiers (k-NN and Bayes) in decision-level fusion. Overall, a recognition accuracy of 94% was achieved by using the combined classifier with a selected feature set. The performance of the interfacing system was evaluated through 40 test sessions with 30 subjects using an RC car. Instead of using a remote control unit, the car was controlled by four different gestures performed with one hand. In addition, we conducted a study to investigate the controllability and ease of use of the interface and the employed gestures.
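A minimal sketch of this kind of decision-level fusion is given below, assuming the two classifiers are combined by averaging their class probabilities (soft voting); the paper's exact fusion rule and feature set are not reproduced here.

```python
# Hypothetical sketch of decision-level fusion of two simple classifiers
# (k-NN and naive Bayes), realized as soft voting over class probabilities.
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def build_gesture_classifier():
    """Combine k-NN and Gaussian naive Bayes at the decision level."""
    return VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=3)),
            ("bayes", GaussianNB()),
        ],
        voting="soft",  # average the per-class probabilities of both models
    )

# Usage sketch: clf = build_gesture_classifier(); clf.fit(X_train, y_train)
# where each row of X_train holds the EMG features of one gesture window.
```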


Archive | 2007

Bimodal Emotion Recognition using Speech and Physiological Changes

Jonghwa Kim

With exponentially evolving technology, it is no exaggeration to say that any interface for human-robot interaction (HRI) that disregards human affective states and fails to react to them appropriately can never inspire a user's confidence; instead, it will be perceived as cold, untrustworthy, and socially inept. Indeed, there is evidence that HRI is more likely to be accepted by the user if it is sensitive to the user's affective states, since the expression and understanding of emotions help establish mutual sympathy in human communication. For an affective human-robot interface, one of the most important prerequisites is a reliable emotion recognition system that guarantees acceptable recognition accuracy, robustness against artifacts, and adaptability to practical applications. Emotion recognition is an extremely challenging task in several respects. One of the main difficulties is that it is very hard to uniquely correlate signal patterns with a certain emotional state, because it is difficult even to define precisely what emotion means. Moreover, emotion-relevant signal patterns may differ widely from person to person and from situation to situation. Gathering a "ground-truth" dataset for building a generalized emotion recognition system is also problematic. Therefore, a number of assumptions are generally required for an engineering approach to emotion recognition. Most research on emotion recognition so far has focused on the analysis of a single modality, such as speech or facial expression (see (Cowie et al., 2001) for a comprehensive overview). Recently, some work on emotion recognition that combines multiple modalities has been reported, mostly by fusing features extracted from audiovisual modalities such as facial expression and speech. We humans use several modalities jointly to interpret emotional states in human communication, since emotion affects almost all modes: audiovisual (facial expression, voice, gesture, posture, etc.), physiological (respiration, skin temperature, etc.), and contextual (goal, preference, environment, social situation, etc.) states. Hence, one can expect higher recognition rates through the integration of multiple modalities for emotion recognition. On the other hand, however, more complex classification and fusion problems arise. In this chapter, we concentrate on the integration of speech signals and physiological measures (biosignals) for emotion recognition based on a short-term observation. Several advantages can be expected when combining biosensor feedback with affective speech. First


IEEE Transactions on Affective Computing | 2011

Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data

Johannes Wagner; Elisabeth André; Florian Lingenfelser; Jonghwa Kim

The study at hand aims at the development of a multimodal, ensemble-based system for emotion recognition. Special attention is given to a problem often neglected: missing data in one or more modalities. In offline evaluation, the issue can easily be solved by excluding those parts of the corpus where one or more channels are corrupted or not suitable for evaluation. In real applications, however, we cannot neglect the challenge of missing data and have to find adequate ways to handle it. To address this, we do not expect the examined data to be completely available at all times in our experiments. The presented system solves the problem at the multimodal fusion stage, so various ensemble techniques, covering established ones as well as rather novel emotion-specific approaches, are explained and enriched with strategies for compensating for temporarily unavailable modalities. We compare and discuss the advantages and drawbacks of the fusion categories, and an extensive evaluation of the mentioned techniques is carried out on the CALLAS Expressivity Corpus, featuring facial, vocal, and gestural modalities.
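The compensation idea for temporarily unavailable modalities can be pictured with a small sketch: one classifier per modality, with fused probabilities averaged only over the channels that are present for a given sample. The class names, the modality keys, and the averaging rule below are assumptions for illustration, not the system's actual ensemble techniques.

```python
# Hypothetical sketch of ensemble fusion that tolerates missing modalities.
# Each modality has its own trained classifier exposing predict_proba; only
# the available channels contribute to the averaged class probabilities.
import numpy as np

class MissingTolerantFusion:
    def __init__(self, modality_classifiers):
        # e.g. {"face": clf_f, "voice": clf_v, "gesture": clf_g}
        # All classifiers are assumed to use the same label encoding 0..K-1.
        self.clfs = modality_classifiers

    def predict_proba(self, features_by_modality):
        """features_by_modality maps modality name -> feature vector or None."""
        probs = [
            self.clfs[m].predict_proba(np.atleast_2d(x))[0]
            for m, x in features_by_modality.items()
            if x is not None and m in self.clfs
        ]
        if not probs:
            raise ValueError("no modality available for this sample")
        return np.mean(probs, axis=0)  # average over available channels only

    def predict(self, features_by_modality):
        # Returns the index of the most probable class under the shared encoding.
        return int(np.argmax(self.predict_proba(features_by_modality)))
```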


Perception and Interactive Technologies | 2006

Emotion recognition using physiological and speech signal in short-term observation

Jonghwa Kim; Elisabeth André

Recently, there has been a significant amount of work on the recognition of emotions from visual, verbal or physiological information. Most approaches to emotion recognition so far concentrate, however, on a single modality while work on the integration of multimodal information, in particular on fusing physiological signals with verbal or visual data, is scarce. In this paper, we analyze various methods for fusing physiological and vocal information and compare the recognition results of the bimodal recognition approach with the results of the unimodal approach.
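To make the contrast concrete, the following sketch shows the two generic fusion strategies such work compares: feature-level fusion, which concatenates the physiological and vocal feature vectors before training a single classifier, and decision-level fusion, which trains one classifier per modality and merges their outputs. The use of LDA and probability averaging is an assumption made for the example, not the paper's specific method.

```python
# Hypothetical sketch contrasting feature-level and decision-level fusion
# of biosignal and speech features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def feature_level_fusion(X_bio, X_speech, y):
    """One classifier on the concatenated bimodal feature vector."""
    X = np.hstack([X_bio, X_speech])
    return LinearDiscriminantAnalysis().fit(X, y)

def decision_level_fusion(X_bio, X_speech, y):
    """One classifier per modality; fuse by averaging class probabilities."""
    clf_bio = LinearDiscriminantAnalysis().fit(X_bio, y)
    clf_sp = LinearDiscriminantAnalysis().fit(X_speech, y)

    def predict(xb, xs):
        p = (clf_bio.predict_proba(np.atleast_2d(xb))
             + clf_sp.predict_proba(np.atleast_2d(xs))) / 2.0
        return clf_bio.classes_[np.argmax(p, axis=1)]

    return predict
```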


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Bi-channel sensor fusion for automatic sign language recognition

Jonghwa Kim; Johannes Wagner; Matthias Rehm; Elisabeth André

In this paper, we investigate the mutually complementary functionality of the accelerometer (ACC) and electromyogram (EMG) channels for recognizing seven word-level sign vocabularies in German Sign Language (GSL). Results are discussed for the single channels and for feature-level fusion of the bichannel sensor data. For the subject-dependent condition, this fusion method proves to be effective. The most relevant features across all subjects are extracted, and their universal effectiveness is demonstrated by a high average accuracy for the single subjects. Additionally, results are given for the subject-independent condition, where subjective differences do not allow for high recognition rates. Finally, we discuss a problem of feature-level fusion caused by a high disparity between the accuracies of the single-channel classifications.
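A toy sketch of feature-level fusion for such bichannel data follows: compute a small feature vector per window for each sensor and concatenate them. The specific features below (mean, variance, zero-crossing count) are illustrative assumptions, not the paper's actual feature set.

```python
# Hypothetical sketch: per-window time-domain features for each channel,
# fused at the feature level by concatenation.
import numpy as np

def window_features(signal_window):
    """A few generic time-domain features for one 1-D sensor window."""
    x = np.asarray(signal_window, dtype=float)
    centered = x - x.mean()
    # Count sign changes of the mean-centered signal.
    zero_crossings = int(np.sum(np.signbit(centered[:-1]) != np.signbit(centered[1:])))
    return np.array([x.mean(), x.var(), zero_crossings])

def fused_feature_vector(acc_window, emg_window):
    """Feature-level fusion: concatenate ACC and EMG feature vectors."""
    return np.concatenate([window_features(acc_window),
                           window_features(emg_window)])
```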


Archive | 2011

Physiological Signals and Their Use in Augmenting Emotion Recognition for Human-Machine Interaction

R. Benjamin Knapp; Jonghwa Kim; Elisabeth André

In this chapter we introduce the concept of using physiological signals as an indicator of emotional state. We review the ambulatory techniques for physiological measurement of the autonomic and central nervous system as they might be used in human–machine interaction. A brief history of using human physiology in HCI leads to a discussion of the state of the art of multimodal pattern recognition of physiological signals. The overarching question of whether results obtained in a laboratory can be applied to ecological HCI remains unanswered.


Systems, Man, and Cybernetics | 2013

Transsituational Individual-Specific Biopsychological Classification of Emotions

Steffen Walter; Jonghwa Kim; David Hrabal; Stephen Crawcour; Henrik Kessler; Harald C. Traue

The goal of automatic biopsychological emotion recognition of companion technologies is to ensure reliable and valid classification rates. In this paper, emotional states were induced via a Wizard-of-Oz mental trainer scenario, which is based on the valence-arousal-dominance model. In most experiments, classification algorithms are tested via leave-out cross-validation of one situation. These studies often show very high classification rates, which are comparable with those in our experiment (92.6%). However, in order to guarantee robust emotion recognition based on biopsychological data, measurements have to be taken across several situations with the goal of selecting stable features for individual emotional states. For this purpose, our mental trainer experiment was conducted twice for each subject with a 10-min break between the two rounds. It is shown that there are robust psychobiological features that can be used for classification (70.1%) in both rounds. However, these are not the same as those that were found via feature selection performed only on the first round (classification: 53.0%).
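The idea of keeping only features that stay informative across situations can be sketched as follows; the ranking criterion (ANOVA F-score) and the top-k cutoff are assumptions for illustration, not the study's actual selection procedure.

```python
# Hypothetical sketch of "stable" feature selection across two recording
# rounds: rank features separately in each round and keep those that score
# highly in both.
import numpy as np
from sklearn.feature_selection import f_classif

def stable_feature_indices(X_round1, y_round1, X_round2, y_round2, top_k=20):
    """Indices of features ranked in the top_k of both rounds."""
    f1, _ = f_classif(X_round1, y_round1)
    f2, _ = f_classif(X_round2, y_round2)
    top1 = set(np.argsort(f1)[-top_k:])
    top2 = set(np.argsort(f2)[-top_k:])
    return sorted(top1 & top2)  # features that remain informative in both rounds
```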


Emotion-Oriented Systems | 2011

Multimodal Emotion Recognition from Low-Level Cues

Maja Pantic; George Caridakis; Elisabeth André; Jonghwa Kim; Kostas Karpouzis; Stefanos D. Kollias

Emotional intelligence is an indispensable facet of human intelligence and one of the most important factors for a successful social life. Endowing machines with this kind of intelligence towards affective human–machine interaction, however, is not an easy task. It becomes more complex given that human beings use several modalities jointly to interpret affective states, since emotion affects almost all modes – audio-visual (facial expression, voice, gesture, posture, etc.), physiological (respiration, skin temperature, etc.), and contextual (goal, preference, environment, social situation, etc.) states. Compared to common unimodal approaches, many specific problems arise in multimodal emotion recognition, especially concerning the fusion architecture for the multimodal information. In this chapter, we first give a short review of these problems and then present research results for various multimodal architectures based on the combined analysis of facial expression, speech, and physiological signals. Lastly, we introduce the design of an adaptive neural network classifier that is capable of deciding whether adaptation is necessary in response to environmental changes.

Collaboration


Dive into Jonghwa Kim's collaboration.

Top Co-Authors

Thurid Vogt
University of Augsburg

Frank Jung
University of Augsburg