Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eun-Sol Kim is active.

Publication


Featured research published by Eun-Sol Kim.


IEEE-RAS International Conference on Humanoid Robots | 2013

Enhancing human action recognition through spatio-temporal feature learning and semantic rules

Karinne Ramirez-Amaro; Eun-Sol Kim; Jiseob Kim; Byoung-Tak Zhang; Michael Beetz; Gordon Cheng

In this paper, we present a two-stage framework that deals with the problem of automatically extracting human activities from videos. First, for action recognition we employ an unsupervised state-of-the-art learning algorithm based on Independent Subspace Analysis (ISA). This learning algorithm extracts spatio-temporal features directly from video data and is computationally more efficient and robust than other unsupervised methods. Nevertheless, when this one-stage state-of-the-art action recognition technique is applied to observations of human everyday activities, it reaches an accuracy of only approximately 25%. Hence, we propose to enhance this process with a second stage, which defines a new method to automatically generate semantic rules that can reason about human activities. The obtained semantic rules improve human activity recognition by reducing the complexity of the perception system, and they allow for domain changes, which can greatly improve the synthesis of robot behaviors. The proposed method was evaluated on two complex and challenging scenarios: making a pancake and making a sandwich. The difficulty of these scenarios is that they contain finer and more complex activities than well-known datasets (Hollywood2, KTH, etc.). The results show the benefits of the two-stage method: the accuracy of action recognition was significantly improved compared to the single-stage method (above 87% compared to a human expert). This demonstrates the improvement the reasoning engine brings to the automatic extraction of human activities from observations, thus providing a rich mechanism for transferring a wide range of human skills to humanoid robots.
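
The two-stage idea (learned spatio-temporal features for perception, symbolic rules for activity inference) lends itself to a compact illustration of the second stage. The sketch below shows rule-based inference over per-frame perception symbols; the predicate names (hand_moving, object_in_hand, object_acted_on) and the specific rules are illustrative assumptions, not the rule base reported in the paper.

```python
# A minimal sketch of second-stage semantic-rule inference. The predicate
# names and the rule set are assumptions made for illustration only.
from typing import Optional

def infer_activity(hand_moving: bool,
                   object_in_hand: Optional[str],
                   object_acted_on: Optional[str]) -> str:
    """Map low-level perception symbols for one frame to an activity label."""
    if not hand_moving and object_in_hand is None:
        return "idle"
    if hand_moving and object_in_hand is None and object_acted_on is not None:
        return "reach"                      # hand moving toward an object
    if not hand_moving and object_in_hand is not None:
        return "hold"
    if hand_moving and object_in_hand is not None:
        # the held object determines the fine-grained activity
        if object_in_hand == "knife":
            return "cut"
        if object_in_hand == "spatula":
            return "flip"
        return "move_object"
    return "unknown"

# Example: per-frame symbols that a first-stage perception module might output
frames = [
    {"hand_moving": True, "object_in_hand": None, "object_acted_on": "pancake"},
    {"hand_moving": True, "object_in_hand": "spatula", "object_acted_on": None},
]
print([infer_activity(**f) for f in frames])  # ['reach', 'flip']
```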


Physiologia Plantarum | 2016

HAWAIIAN SKIRT regulates the quiescent center‐independent meristem activity in Arabidopsis roots

Eun-Sol Kim; Goh Choe; Jose Sebastian; Kook Hui Ryu; Linyong Mao; Zhangjun Fei; Ji-Young Lee

The root apical meristem (RAM) drives post-embryonic root growth by constantly supplying cells through mitosis. It is composed of stem cells and their derivatives, the transit-amplifying (TA) cells. Stem cell organization and its maintenance in the RAM are well characterized; however, their relationship with TA cells remains unclear. SHORTROOT (SHR) is critical for root development: it patterns cell types and promotes post-embryonic root growth. Defective root growth in the shr mutant has been ascribed to the lack of a quiescent center (QC), which maintains the surrounding stem cells. However, our recent investigation indicated that SHR maintains TA cells independently of the QC by modulating PHABULOSA (PHB) through miRNA165/6. PHB controls TA cell activity by modulating cytokinin levels and type-B Arabidopsis Response Regulator activity in a dosage-dependent manner. To further understand TA cell regulation, we conducted a shr suppressor screen. With an extensive mutagenesis screen followed by genome sequencing of a pooled F2 population, we discovered two suppressor alleles with mutations in HAWAIIAN SKIRT (HWS). HWS, encoding an F-box protein with a kelch domain, is expressed, partly depending on SHR, in the root cap and in the pericycle of the differentiation zone. Interestingly, root growth in shr hws was more active than in wild-type roots for the first 7 days after germination, without recovery of the QC. In contrast to shr phb, shr hws did not show a recovery of cytokinin signaling. These results indicate that HWS affects QC-independent TA cell activities through a pathway distinct from PHB.


Congress on Evolutionary Computation | 2011

Mutual information-based evolution of hypernetworks for brain data analysis

Eun-Sol Kim; Jung-Woo Ha; Wi Hoon Jung; Joon Hwan Jang; Jun Soo Kwon; Byoung-Tak Zhang

Cortical analysis is becoming increasingly important for brain research and clinical diagnosis. The problem involves a combinatorial search to find the essential modules among a large number of brain regions. Despite several statistical approaches, cortical analysis remains a formidable challenge due to the high dimensionality and sparsity of the data. Here we describe an evolutionary method for finding significant modules in cortical data. The method uses a hypernetwork which is encoded as a population of hyperedges, where hyperedges represent building blocks or potential modules. We develop an efficient method for evolving the hypernetwork using mutual information to generate essential hyperedges. We evaluate the method on predicting intelligence quotient (IQ) levels and on finding potentially significant modules for IQ in brain MRI data from 62 healthy adults with over 80,000 measured points (variables). The experimental results show that our information-theoretic evolutionary hypernetworks improve classification accuracy by 5∼15%. Moreover, the method extracts significant cortical modules that distinguish the high-IQ group from the low-IQ group.
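
As an illustration of mutual-information-based hyperedge selection, the sketch below scores small random subsets of discretized variables (hyperedges) by their average mutual information with the class label and keeps the top-scoring ones. The scoring formula, population sizes, and selection scheme are simplified assumptions; the paper's full evolutionary operators are not reproduced here.

```python
# Illustrative sketch: rank candidate hyperedges (small subsets of
# discretized variables) by mutual information with the class label.
import numpy as np
from sklearn.metrics import mutual_info_score

def hyperedge_score(X_disc, y, edge):
    """Average mutual information between the label and each variable in the edge."""
    return np.mean([mutual_info_score(y, X_disc[:, v]) for v in edge])

def select_hyperedges(X_disc, y, order=3, population=2000, keep=200, seed=0):
    rng = np.random.default_rng(seed)
    n_vars = X_disc.shape[1]
    edges = [tuple(rng.choice(n_vars, size=order, replace=False))
             for _ in range(population)]
    edges.sort(key=lambda e: hyperedge_score(X_disc, y, e), reverse=True)
    return edges[:keep]  # surviving hyperedges form the next-generation hypernetwork

# Example with synthetic binary data (not the MRI data used in the paper)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 500))   # 100 subjects, 500 discretized variables
y = rng.integers(0, 2, size=100)          # e.g. high-IQ vs. low-IQ group
top_edges = select_hyperedges(X, y)
```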


KIISE Transactions on Computing Practices | 2017

Multi-Modal Wearable Sensor Integration for Daily Activity Pattern Analysis with Gated Multi-Modal Neural Networks

Kyoung-Woon On; Eun-Sol Kim; Byoung-Tak Zhang

We propose a new machine learning algorithm that analyzes daily activity patterns of users from multi-modal wearable sensor data. The proposed model learns and extracts activity patterns from wearable-device input in real time. Inspired by the cue-integration property of human perception, we construct gated multi-modal neural networks that integrate wearable sensor inputs selectively using gate modules. For the experiments, sensory data were collected with multiple wearable devices in restaurant situations. We first show that the proposed model performs well in terms of prediction accuracy. We then explain the possibility of automatically constructing a knowledge schema by analyzing the activation patterns in the middle layer of the proposed model.
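
A minimal sketch of a gated multi-modal network is given below, assuming a softmax gate computed from the concatenated raw inputs and one small encoder per sensor stream; the layer sizes, number of classes, and the exact gating mechanism used in the paper may differ.

```python
# Hedged sketch of gated multi-modal fusion in PyTorch; the architecture
# details are assumptions made for illustration, not the paper's model.
import torch
import torch.nn as nn

class GatedMultiModalNet(nn.Module):
    def __init__(self, modality_dims, hidden=64, n_classes=5):
        super().__init__()
        # one encoder per wearable-sensor modality
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modality_dims])
        # gate network: scores each modality from the concatenated raw input
        self.gate = nn.Linear(sum(modality_dims), len(modality_dims))
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, inputs):               # inputs: list of (batch, dim_i) tensors
        weights = torch.softmax(self.gate(torch.cat(inputs, dim=1)), dim=1)
        encoded = torch.stack([enc(x) for enc, x in zip(self.encoders, inputs)], dim=1)
        fused = (weights.unsqueeze(-1) * encoded).sum(dim=1)   # selective integration
        return self.classifier(fused)

# Example: three sensor streams with 12, 8, and 4 features per time window
net = GatedMultiModalNet([12, 8, 4])
logits = net([torch.randn(32, 12), torch.randn(32, 8), torch.randn(32, 4)])
```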


Human-Agent Interaction | 2015

Analyzing Human Behavioral Data to Interact with Restaurant Server Agents

Eun-Sol Kim; Kyoung-Woon On; Byoung-Tak Zhang

In this paper, we consider the problem of analyzing human behavioral data to predict human cognitive states and to generate the corresponding actions of a server agent. Specifically, we aim at predicting human cognitive states during meal time and generating relevant dining services for the human. For this study, we collected behavioral data during meal time using two kinds of wearable devices: an eye tracker and a watch-type EDA device. We focus on the characteristics of the behavioral data, which are heterogeneous, noisy, and temporal, and suggest a novel machine learning algorithm that can analyze the data in an integrated way. The suggested model has a hierarchical structure: the bottom layer combines the multi-modal behavioral data based on the causal structure of the data and extracts feature vectors; using the extracted feature vectors, the upper layer predicts the cognitive states based on the temporal correlation between feature vectors. Experimental results show that the suggested model can analyze the behavioral data efficiently and predict human cognitive states correctly.
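
The hierarchical structure described above can be illustrated with a small model: an assumed simple fusion layer (concatenation plus a linear projection) stands in for the causal-structure-based bottom layer, and an LSTM captures the temporal correlations in the upper layer. All dimensions and layer choices below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of the two-level structure: per-window multi-modal fusion
# at the bottom, temporal modeling of cognitive state at the top.
import torch
import torch.nn as nn

class HierarchicalStatePredictor(nn.Module):
    def __init__(self, gaze_dim=4, eda_dim=2, feat_dim=16, n_states=3):
        super().__init__()
        # bottom layer: combine eye-tracker and EDA features into one vector per window
        self.fuse = nn.Sequential(nn.Linear(gaze_dim + eda_dim, feat_dim), nn.Tanh())
        # upper layer: temporal model over the sequence of fused feature vectors
        self.temporal = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_states)

    def forward(self, gaze_seq, eda_seq):    # (batch, time, gaze_dim), (batch, time, eda_dim)
        fused = self.fuse(torch.cat([gaze_seq, eda_seq], dim=-1))
        out, _ = self.temporal(fused)
        return self.head(out[:, -1])         # cognitive state predicted from the last step

model = HierarchicalStatePredictor()
logits = model(torch.randn(8, 20, 4), torch.randn(8, 20, 2))
```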


Journal of KIISE | 2015

Locally Linear Embedding for Face Recognition with Simultaneous Diagonalization

Eun-Sol Kim; Yung-Kyun Noh; Byoung-Tak Zhang

Locally linear embedding (LLE) [1] is a manifold learning algorithm that preserves the inner product values between high-dimensional data points when embedding them into a low-dimensional space. LLE embeds data points lying on the same subspace close together in the low-dimensional space, because those data points have large inner product values. On the other hand, if data points are orthogonal to each other, they are embedded separately in the low-dimensional space, even when they are close to each other in the high-dimensional space. Meanwhile, it is well known that facial images of the same person under varying illumination lie in a low-dimensional linear subspace [2]. In this study, we suggest an improved LLE method for the face recognition problem. The method exploits the property of LLE that data points are embedded completely separately when they are orthogonal to each other. To accomplish this, the subspaces spanned by the individual classes are forced to be mutually orthogonal. To make all of the subspaces orthogonal, the simultaneous diagonalization (SD) technique is applied. Experimental results show that the suggested method dramatically improves the embedding results and the classification performance.
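
A sketch of the overall recipe, restricted to the two-class case, is shown below: the class scatter matrices are simultaneously diagonalized with a generalized symmetric eigendecomposition, the data are transformed accordingly, and standard LLE is applied. This is an illustrative reading under those assumptions, not the paper's exact procedure (which handles an arbitrary number of classes).

```python
# Hedged sketch: simultaneous diagonalization of two class scatter matrices,
# followed by standard LLE on the transformed data.
import numpy as np
from scipy.linalg import eigh
from sklearn.manifold import LocallyLinearEmbedding

def simultaneous_diagonalization(X, y):
    """Transform X so the scatter matrices of the two classes become diagonal."""
    S = [np.cov(X[y == c], rowvar=False) for c in (0, 1)]
    # generalized symmetric eigenproblem: W diagonalizes S[0] and S[1] simultaneously
    _, W = eigh(S[0], S[1] + 1e-6 * np.eye(X.shape[1]))
    return X @ W

# Example with synthetic data (e.g. face images reduced to 30 features)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)
Z = simultaneous_diagonalization(X, y)
emb = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(Z)
```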


i-Perception | 2012

P1-20: The Relation of Eye and Hand Movement during Multimodal Recall Memory

Eun-Sol Kim; Jiseob Kim; Byoung-Tak Zhang

Eye and hand movement tracking has proven to be a successful tool and is widely used to characterize human cognition in language and visual processing (Just & Carpenter, 1976, Cognitive Psychology, 8, 441–480). Eye movement in particular has proven to be a successful measure of human language and visual processing (Rayner, 1998, Psychological Bulletin, 124(3), 372–422). Recently, mouse tracking has been used for social-cognition tasks such as the categorization of sex-atypical faces and for studying spoken-language processing (Magnuson, 2005, PNAS, 102(28), 9995–9996; Spivey et al., 2005, PNAS, 102, 10393–10398). Here, we present a framework that uses both eye gaze and hand movement simultaneously to analyze the relation between them during memory retrieval. We tracked eye and mouse movements while the subject watched a drama and played a multimodal memory game (MMG), a cognitive task designed to investigate recall memory mechanisms when watching video dramas (Zhang, 2009, AAAI 2009 Spring Symposium: Age...


KIISE Transactions on Computing Practices | 2017

Automated Emotional Tagging of Lifelog Data with Wearable Sensors

Kyung-Wha Park; Byoung-Hee Kim; Eun-Sol Kim; Hwiyeol Jo; Byoung-Tak Zhang


International Joint Conference on Artificial Intelligence | 2016

DeepSchema: automatic schema acquisition from wearable sensor data in restaurant situations

Eun-Sol Kim; Kyoung-Woon On; Byoung-Tak Zhang


Collaboration


Dive into Eun-Sol Kim's collaborations.

Top Co-Authors

Jiseob Kim, Seoul National University
Kyoung-Woon On, Seoul National University
Byoung-Hee Kim, Seoul National University
Goh Choe, Seoul National University
Ji-Young Lee, Seoul National University
Joon Hwan Jang, Seoul National University
Jun Soo Kwon, Seoul National University
Jung-Woo Ha, Seoul National University