
Publication


Featured research published by Bo-Kyeong Kim.


Journal on Multimodal User Interfaces | 2016

Hierarchical committee of deep convolutional neural networks for robust facial expression recognition

Bo-Kyeong Kim; Jihyeon Roh; Suh-Yeon Dong; Soo-Young Lee

This paper describes our approach towards robust facial expression recognition (FER) for the third Emotion Recognition in the Wild (EmotiW2015) challenge. We train multiple deep convolutional neural networks (deep CNNs) as committee members and combine their decisions. To improve this committee of deep CNNs, we present two strategies: (1) in order to obtain diverse decisions from deep CNNs, we vary network architecture, input normalization, and random weight initialization in training these deep models, and (2) in order to form a better committee in structural and decisional aspects, we construct a hierarchical architecture of the committee with exponentially-weighted decision fusion. In solving a seven-class problem of static FER in the wild for the EmotiW2015, we achieve a test accuracy of 61.6%. Moreover, on other public FER databases, our hierarchical committee of deep CNNs yields superior performance, outperforming or competing with state-of-the-art results for these databases.
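The decision-fusion step described above can be sketched as a weighted average of the committee members' softmax outputs, with weights growing exponentially in each member's validation accuracy. This is only an illustrative sketch: the function name, the exponent base, and the toy probabilities are assumptions, not the paper's exact VA-Expo-WA formulation.

```python
import numpy as np

def expo_weighted_fusion(probs, val_accs, base=10.0):
    """probs: (members, classes) array of softmax outputs for one sample.
    val_accs: (members,) validation accuracies in [0, 1].
    Returns fused class probabilities."""
    probs = np.asarray(probs, dtype=float)
    w = np.power(base, np.asarray(val_accs, dtype=float))  # exponential weights
    w = w / w.sum()                         # normalize committee weights
    fused = w @ probs                       # weighted average over members
    return fused / fused.sum()              # renormalize to a distribution

# Three committee members voting on a 3-class problem; the member with the
# highest validation accuracy pulls the fused decision toward class 1.
p = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.3, 0.4, 0.3]]
fused = expo_weighted_fusion(p, val_accs=[0.55, 0.60, 0.45])
print(fused.argmax())
```

A plain (unweighted) average would give every member equal say; the exponential weighting lets more accurate members dominate without discarding the others.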


High Performance Distributed Computing | 2012

Locality-aware dynamic VM reconfiguration on MapReduce clouds

Jongse Park; Daewoo Lee; Bo-Kyeong Kim; Jaehyuk Huh; Seungryoul Maeng

Cloud computing, based on system virtualization, has been expanding its services to distributed data-intensive platforms such as MapReduce and Hadoop. Such a distributed platform on clouds runs in a virtual cluster consisting of a number of virtual machines. In the virtual cluster, demands on computing resources for each node may fluctuate due to data locality and task behavior. However, current cloud services use a static cluster configuration, fixing or manually adjusting the computing capability of each virtual machine (VM). The fixed, homogeneous VM configuration may not adapt to changing resource demands in individual nodes. In this paper, we propose a dynamic VM reconfiguration technique for data-intensive computing on clouds, called Dynamic Resource Reconfiguration (DRR). DRR can adjust the computing capability of individual VMs to maximize the utilization of resources. Among several factors causing resource imbalance in Hadoop platforms, this paper focuses on data locality. Although assigning tasks to the nodes containing their input data can significantly improve the overall performance of a job, the fixed computing capability of each node may not allow such locality-aware scheduling. DRR dynamically increases or decreases the computing capability of each node to enhance locality-aware task scheduling. We evaluate the potential performance improvement of DRR on a 100-node cluster, and its detailed behavior on a small-scale cluster with constrained network bandwidth. On the 100-node cluster, DRR can improve the throughput of Hadoop jobs by 15% on average, and by 41% on the private cluster with the constrained network connection.
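The core idea of DRR, shifting capability toward nodes whose local data has pending tasks, can be sketched as a simple proportional reallocation of a fixed vCPU budget. The function name, the minimum-vCPU floor, and the proportional policy below are illustrative assumptions, not the paper's actual algorithm.

```python
def reconfigure_vcpus(local_pending, total_vcpus, min_vcpus=1):
    """local_pending: per-node counts of pending tasks whose input is node-local.
    Returns a per-node vCPU allocation that sums to total_vcpus."""
    n = len(local_pending)
    spare = total_vcpus - min_vcpus * n           # capacity available to shift
    demand = sum(local_pending)
    if demand == 0:                                # no locality signal: even split
        base, rem = divmod(total_vcpus, n)
        return [base + (1 if i < rem else 0) for i in range(n)]
    alloc = [min_vcpus + spare * c // demand for c in local_pending]
    # hand out rounding leftovers to the busiest nodes first
    leftover = total_vcpus - sum(alloc)
    for i in sorted(range(n), key=lambda i: -local_pending[i])[:leftover]:
        alloc[i] += 1
    return alloc

# Node 0 holds the input for most pending tasks, so it receives most vCPUs.
print(reconfigure_vcpus([6, 2, 0, 0], total_vcpus=16))
```

Keeping a floor of one vCPU per node preserves the fixed cluster membership while letting the effective capability of each VM follow data locality.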


International Conference on Multimodal Interfaces | 2015

Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition

Bo-Kyeong Kim; Hwaran Lee; Jihyeon Roh; Soo-Young Lee

We present a pattern recognition framework to improve committee machines of deep convolutional neural networks (deep CNNs) and its application to static facial expression recognition in the wild (SFEW). In order to generate enough diversity of decisions, we trained multiple deep CNNs by varying network architectures, input normalization, and weight initialization as well as by adopting several learning strategies to use large external databases. Moreover, with these deep models, we formed hierarchical committees using the validation-accuracy-based exponentially-weighted average (VA-Expo-WA) rule. Through extensive experiments, the great strengths of our committee machines were demonstrated in both structural and decisional ways. On the SFEW2.0 dataset released for the 3rd Emotion Recognition in the Wild (EmotiW) sub-challenge, a test accuracy of 57.3% was obtained from the best single deep CNN, while the single-level committees yielded 58.3% and 60.5% with the simple average rule and with the VA-Expo-WA rule, respectively. Our final submission based on the 3-level hierarchy using the VA-Expo-WA achieved 61.6%, significantly higher than the SFEW baseline of 39.1%.


Computer Vision and Pattern Recognition | 2016

Fusing Aligned and Non-aligned Face Information for Automatic Affect Recognition in the Wild: A Deep Learning Approach

Bo-Kyeong Kim; Suh-Yeon Dong; Jihyeon Roh; Geonmin Kim; Soo-Young Lee

Face alignment can fail in real-world conditions, negatively impacting the performance of automatic facial expression recognition (FER) systems. In this study, we assume a realistic situation including non-alignable faces due to failures in facial landmark detection. Our proposed approach fuses information about non-aligned and aligned facial states, in order to boost FER accuracy and efficiency. Six experimental scenarios using discriminative deep convolutional neural networks (DCNs) are compared, and causes for performance differences are identified. To handle non-alignable faces better, we further introduce DCNs that learn a mapping from non-aligned facial states to aligned ones, alignment-mapping networks (AMNs). We show that AMNs represent geometric transformations of face alignment, providing features beneficial for FER. Our automatic system based on ensembles of the discriminative DCNs and the AMNs achieves impressive results on a challenging database for FER in the wild.
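One simple way to picture the fusion of aligned and non-aligned face information is a fallback-and-average rule over the two networks' class probabilities. The paper's actual system uses ensembles of discriminative DCNs and AMNs, so the sketch below is only a hedged simplification with the networks stubbed out as fixed probability vectors.

```python
def fuse_predictions(p_nonaligned, p_aligned=None):
    """Average class probabilities from the non-aligned and aligned networks;
    if alignment failed (no aligned prediction), fall back to the non-aligned one."""
    if p_aligned is None:                          # landmark detection failed
        return list(p_nonaligned)
    return [(a + b) / 2 for a, b in zip(p_nonaligned, p_aligned)]

# Both networks available: average their two-class probabilities.
print(fuse_predictions([0.7, 0.3], [0.5, 0.5]))
# Alignment failed: use the non-aligned prediction alone.
print(fuse_predictions([0.7, 0.3]))
```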


Neurocomputing | 2015

Hierarchical feature extraction by multi-layer non-negative matrix factorization network for classification task

Hyun Ah Song; Bo-Kyeong Kim; Thanh Luong Xuan; Soo-Young Lee

In this paper, we propose a multi-layer non-negative matrix factorization (NMF) network for classification tasks, which provides an intuitively understandable hierarchical feature learning process. A layer-by-layer learning strategy was adopted through stacked NMF layers, which enforce non-negativity of both the features and their coefficients. With the non-negativity constraint, the learning process reveals latent feature hierarchies in complex data in an intuitively understandable manner. The multi-layer NMF network was investigated for classification by studying various network architectures and nonlinear functions. Applied to a document classification task, the proposed multi-layer NMF network yielded much better classification performance than a single-layer network, even with a small number of features. Moreover, through the intuitive learning process, the underlying structure of the feature hierarchies in the complex document data was revealed.
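The stacked, layer-by-layer learning can be sketched by factoring the data and then factoring the resulting coefficient matrix again, so that each layer's non-negative coefficients feed the next. This is a minimal sketch using plain multiplicative updates; the inter-layer nonlinear functions studied in the paper are omitted, and the ranks and data are illustrative.

```python
import numpy as np

def nmf(X, rank, iters=200, eps=1e-9, seed=0):
    """Basic NMF via multiplicative updates: X ≈ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis features
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 30))            # nonnegative data: 20 features, 30 samples
W1, H1 = nmf(X, rank=8)             # layer 1: parts-based features of the data
W2, H2 = nmf(H1, rank=4)            # layer 2: features of layer-1 coefficients
print(H2.shape)                      # layer-2 codes: one 4-dim vector per sample
```

Because each layer's output stays non-negative, it is a valid input for the next NMF layer, which is what makes the purely additive feature hierarchy possible.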


International Symposium on Neural Networks | 2014

Color image processing based on Nonnegative Matrix Factorization with Convolutional Neural Network

Thanh Xuan Luong; Bo-Kyeong Kim; Soo-Young Lee

Although Nonnegative Matrix Factorization (NMF) has been widely known as an effective feature extraction method that provides part-based representation and good reconstruction, there has been relatively little research using NMF for color image processing. Instead, many studies use a Convolutional Neural Network (CNN) combined with an Auto-Encoder (AE) or a Restricted Boltzmann Machine (RBM) for learning features of color images. In this paper, we explore the ability of NMF to handle color images. In particular, a new method using NMF to learn features in a CNN is proposed. In our experiments conducted on CIFAR-10, NMF shows its feasibility for reconstruction and classification of color images. Furthermore, unlike the edge- or curve-shaped features learned by AE and RBM in CNNs, our method provides dot-shaped features. These new types of features can be considered basic building blocks at the lowest level of constructing images. Our results demonstrate that NMF is capable of being a supporting tool for CNNs in learning features.


Social Neuroscience | 2016

Implicit agreeing/disagreeing intention while reading self-relevant sentences: A human fMRI study

Suh-Yeon Dong; Bo-Kyeong Kim; Soo-Young Lee

The true intentions of humans are sometimes difficult to ascertain exclusively from explicit expressions, such as speech, gestures, or facial expressions. In this experiment, functional magnetic resonance imaging (fMRI) was used to investigate implicit intentions generated while a subject read self-relevant sentences. Short sentences, presented visually, consisted of a self-relevant statement and a substantive verb indicating the sentence polarity as either affirmative or negative. Each sentence was divided into the contents and the sentence ending, and the subjects were asked to respond with either agreement or disagreement after the complete sentence was presented. The overall group analysis suggested that the intention behind the response could be detected even before the complete sentence had been read. Increased neural activation was found in the left medial prefrontal cortex (MPFC) during feelings of agreement compared to feelings of disagreement during self-relevant decision-making. In addition, depending on the sentence ending, the decision of a response activated the frontopolar cortex (FPC) in the switching condition. These findings indicate that implicit intentions of responses to the given statements were internally generated before an explicit response occurred and, hence, can be used to predict a subject's future answer.


IEEE Transactions on Systems, Man, and Cybernetics | 2016

EEG-Based Classification of Implicit Intention During Self-Relevant Sentence Reading

Suh-Yeon Dong; Bo-Kyeong Kim; Soo-Young Lee

From electroencephalography (EEG) data recorded during self-relevant sentence reading, we were able to discriminate two implicit intentions: (1) "agreement" and (2) "disagreement" with the sentence being read. To improve the classification accuracy, discriminative features were selected based on the Fisher score among EEG frequency bands and electrodes. In particular, the time-frequency representation obtained with Morlet wavelet transforms showed clear differences in gamma, beta, and alpha band powers in the frontocentral area, and in theta band power in the centroparietal area. The best classification accuracy of 75.5% was obtained by a support vector machine classifier with the gamma band features of the frontocentral area. This result may enable a new intelligent user interface that understands users' implicit, i.e., unexpressed or hidden, intentions.
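The Fisher-score selection step can be sketched as scoring each candidate feature (e.g. a band power at an electrode) by between-class mean separation over within-class variance, then keeping the top-ranked features for the classifier. The toy data and function name below are assumptions for illustration; the real features would be EEG band powers, not synthetic noise.

```python
import numpy as np

def fisher_scores(X, y):
    """X: (samples, features); y: binary labels (0/1).
    Returns per-feature Fisher scores: (mean0 - mean1)^2 / (var0 + var1)."""
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(0) - b.mean(0)) ** 2
    den = a.var(0) + b.var(0) + 1e-12      # guard against zero variance
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))             # 100 trials, 10 candidate features
y = np.repeat([0, 1], 50)                  # two intention classes
X[y == 1, 3] += 2.0                        # make feature 3 strongly discriminative
scores = fisher_scores(X, y)
top2 = np.argsort(scores)[::-1][:2]        # indices of the two best features
print(top2[0])                             # feature 3 should rank first
```

The selected columns of `X` would then be fed to a classifier such as an SVM, as in the study.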


Human-Agent Interaction | 2015

A Preliminary Study on Human Trust Measurements by EEG for Human-Machine Interactions

Suh-Yeon Dong; Bo-Kyeong Kim; Kyeongho Lee; Soo-Young Lee

We propose a novel experimental paradigm to measure human trust in machines during a collaborative and egoistic theory-of-mind game. To elicit different levels of human trust in machine partners, we control the technical capability and humanlike cues of the autonomous agent in the cognitive experiments while recording participants' electroencephalography (EEG). The measured human trust values in various situations will be used to develop a dynamic trust model for efficient human-machine systems.


International Conference on Neural Information Processing | 2013

Spectral Feature Extraction Using dNMF for Emotion Recognition in Vowel Sounds

Bo-Kyeong Kim; Soo-Young Lee

Recognizing emotional state from the human voice is one of the important issues in speech signal processing. In this paper, we use the dNMF algorithm to find emotion-related spectral components in word speech. Each word consists only of vowels, to remove language-dependent emotional factors. The dNMF algorithm, which adds a Fisher criterion to the cost function of conventional NMF, was designed to increase class-related discriminating power. Our experiment on recognizing happiness, sadness, anger, and boredom in vowel sounds shows that dNMF computes more informative harmonic structures than NMF. Furthermore, dNMF features yield better recognition rates than NMF features for speaker-independent emotion recognition.
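A hedged way to write down the dNMF objective described above: the usual NMF reconstruction error combined with a Fisher-style discriminant term computed on the coefficient matrix H, so that minimizing the cost favors encodings whose classes are well separated. The exact form of the discriminant term and the weighting `lam` below are illustrative assumptions, not necessarily the paper's formulation.

```python
import numpy as np

def dnmf_cost(X, W, H, y, lam=0.1):
    """X: (features, samples); W: (features, r); H: (r, samples); y: class labels.
    Returns reconstruction error minus lam * (between-class - within-class scatter of H)."""
    recon = np.sum((X - W @ H) ** 2)              # standard NMF squared error
    mu = H.mean(axis=1, keepdims=True)            # overall mean encoding
    between = within = 0.0
    for c in np.unique(y):
        Hc = H[:, y == c]
        mc = Hc.mean(axis=1, keepdims=True)
        between += Hc.shape[1] * np.sum((mc - mu) ** 2)  # class-mean spread
        within += np.sum((Hc - mc) ** 2)                 # in-class spread
    return recon - lam * (between - within)       # reward discriminative encodings

rng = np.random.default_rng(0)
X = rng.random((6, 8)); W = rng.random((6, 3)); H = rng.random((3, 8))
y = np.repeat([0, 1], 4)                          # two emotion classes
print(dnmf_cost(X, W, H, y))
```

Setting `lam=0` recovers the plain NMF objective, which is what makes the discriminant term an additive modification of the conventional cost.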

