Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Suh-Yeon Dong is active.

Publication


Featured research published by Suh-Yeon Dong.


computer vision and pattern recognition | 2016

Fusing Aligned and Non-aligned Face Information for Automatic Affect Recognition in the Wild: A Deep Learning Approach

Bo-Kyeong Kim; Suh-Yeon Dong; Jihyeon Roh; Geonmin Kim; Soo-Young Lee

Face alignment can fail in real-world conditions, negatively impacting the performance of automatic facial expression recognition (FER) systems. In this study, we assume a realistic situation that includes non-alignable faces due to failures in facial landmark detection. Our proposed approach fuses information from non-aligned and aligned facial states to boost FER accuracy and efficiency. Six experimental scenarios using discriminative deep convolutional neural networks (DCNs) are compared, and the causes of their performance differences are identified. To handle non-alignable faces better, we further introduce alignment-mapping networks (AMNs), DCNs that learn a mapping from non-aligned facial states to aligned ones. We show that AMNs represent the geometric transformations of face alignment and provide features beneficial for FER. Our automatic system, based on ensembles of the discriminative DCNs and the AMNs, achieves strong results on a challenging database for FER in the wild.
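
The abstract describes alignment-mapping networks (AMNs): DCNs that map non-aligned faces to aligned ones while also yielding features useful for expression classification. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the layer sizes, the 48x48 grayscale input, and the seven-class output are assumptions.

```python
# Minimal sketch of an "alignment-mapping network" idea: an encoder-decoder
# that reconstructs an aligned face from a non-aligned crop, with a classifier
# head on the shared encoder features. Sizes and names are illustrative.
import torch
import torch.nn as nn

class AlignmentMappingNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Encoder: compresses a 1x48x48 grayscale face into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 48 -> 24
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 24 -> 12
        )
        # Decoder: reconstructs an "aligned" face from the encoding
        # (trained against aligned targets when they are available).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 12 -> 24
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 24 -> 48
        )
        # Classifier head on the encoder features (used for FER).
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 12 * 12, num_classes)
        )

    def forward(self, non_aligned):
        feats = self.encoder(non_aligned)
        aligned_pred = self.decoder(feats)  # predicted aligned face
        logits = self.classifier(feats)     # expression prediction
        return aligned_pred, logits

# Example: one forward pass on a dummy batch of non-aligned 48x48 faces.
x = torch.randn(8, 1, 48, 48)
aligned, logits = AlignmentMappingNet()(x)
print(aligned.shape, logits.shape)  # torch.Size([8, 1, 48, 48]) torch.Size([8, 7])
```

In the paper's setting, such networks would be combined in an ensemble with purely discriminative DCNs; this sketch only shows the shared-feature idea in isolation.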


international symposium on neural networks | 2012

Understanding human implicit intention based on frontal electroencephalography (EEG)

Suh-Yeon Dong; Soo-Young Lee

The objective of human-computer interface (HCI) research is to build machines that understand human intention with high accuracy. In the future, machines will be able to understand human intention and communicate with humans even in the absence of explicit expressions such as speech or gestures. In this study, a new experimental design is proposed for understanding human intention using electroencephalography (EEG). Two types of stimuli were given to the subjects: affirmative and negative sentences. Each sentence was separated into two blocks, the contents and the sentence ending, and subjects were asked to decide on agreement or disagreement after the sentence ending was shown. The EEG analysis revealed intention toward the sentences while the contents were being read, i.e., before an explicit answer was given. This indicates that the decision is made before the end of the sentence and that the overt "Yes" or "No" answer is determined only at the sentence ending. The study demonstrates the relationship between implicit intention and average event-related potentials (ERPs) at frontal sites, and shows how to predict implicit intention from central frontal ERPs using a support vector machine (SVM).
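
As a rough illustration of the pipeline the abstract mentions, frontal ERP features fed to an SVM, here is a minimal, hypothetical Python sketch using scikit-learn. The synthetic data, channel selection, and time window are placeholders, not the study's actual parameters.

```python
# Minimal sketch: classify agree/disagree intention from frontal ERP-like
# features with an SVM. All data and parameters below are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Dummy EEG epochs: (trials, channels, time samples); labels: 1 = agree, 0 = disagree.
epochs = rng.standard_normal((120, 32, 256))
labels = rng.integers(0, 2, size=120)

# Assume the first 6 channels are the frontal sites of interest; the ERP-like
# feature is the mean amplitude in an assumed post-stimulus time window.
frontal = epochs[:, :6, 100:200]
features = frontal.mean(axis=2)  # (trials, 6) mean amplitudes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, features, labels, cv=5)
print("mean CV accuracy: %.2f" % scores.mean())
```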


Social Neuroscience | 2016

Implicit agreeing/disagreeing intention while reading self-relevant sentences: A human fMRI study

Suh-Yeon Dong; Bo-Kyeong Kim; Soo-Young Lee

The true intentions of humans are sometimes difficult to ascertain exclusively from explicit expressions such as speech, gestures, or facial expressions. In this experiment, functional magnetic resonance imaging (fMRI) was used to investigate implicit intentions generated while a subject was reading self-relevant sentences. Short sentences, presented visually, consisted of self-relevant statements and a substantive verb that indicated the sentence polarity as either affirmative or negative. Each sentence was divided into the contents and the sentence ending, and the subjects were asked to respond with either agreement or disagreement after the complete sentence was presented. The overall group analysis suggested that the intention underlying the response was present even before the complete sentence had been read. Increased neural activation was found in the left medial prefrontal cortex (MPFC) during feelings of agreement compared to feelings of disagreement during self-relevant decision-making. In addition, depending on the sentence ending, deciding on a response activated the frontopolar cortex (FPC) in the switching condition. These findings indicate that the implicit intentions behind responses to the given statements were internally generated before an explicit response occurred and, hence, can be used to predict a subject's future answer.


human-agent interaction | 2015

A Preliminary Study on Human Trust Measurements by EEG for Human-Machine Interactions

Suh-Yeon Dong; Bo-Kyeong Kim; Kyeongho Lee; Soo-Young Lee

We propose a novel experimental paradigm to measure human trust in machines during a collaborative and egoistic theory-of-mind game. To elicit different levels of human trust in machine partners, we control the technical capability and humanlike cues of the autonomous agent in the cognitive experiments while recording the participants' electroencephalography (EEG). The measured human trust values in various situations will be used to develop a dynamic trust model for efficient human-machine systems.


international conference on neural information processing | 2013

Decoding and Predicting Implicit Agreeing/Disagreeing Intention Based on Electroencephalography (EEG)

Suh-Yeon Dong; Bo-Kyeong Kim; Soo-Young Lee

A new experimental design is proposed to understand human implicit intention using electroencephalography (EEG). EEG data were recorded from 32-channel electrodes while subjects viewed various sentences containing self-relevant content. Subjects were asked to decide on agreement or disagreement just after the sentence ending was shown. Based on their answers, a support vector machine with a radial basis function kernel was used for pattern classification. The classification results show that intention toward the sentences can be classified with a maximum average accuracy of 67.89%. The spatial distribution of the average classification accuracy shows that the right frontal areas classify relatively well. Our findings indicate that a covert representation of agreement or disagreement intention is present in the EEG band power, and that it is possible to predict a subject's implicit intention even before an explicit expression is made.
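
The following sketch illustrates, under assumed parameters, the kind of analysis described here: per-channel EEG band-power features classified with an RBF-kernel SVM, yielding a spatial map of accuracies across the scalp. It is not the paper's code; the sampling rate, frequency bands, and synthetic data are placeholders.

```python
# Minimal sketch: per-channel classification of EEG band power with an
# RBF-kernel SVM, to see which channels separate agree vs. disagree best.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 256  # sampling rate in Hz (assumed)

# Dummy 32-channel epochs: (trials, channels, samples); labels: agree vs. disagree.
epochs = rng.standard_normal((120, 32, 2 * fs))
labels = rng.integers(0, 2, size=120)

def band_power(x, fs, lo, hi):
    """Average power in the [lo, hi] Hz band from a Welch periodogram."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

# Alpha- and beta-band power per trial and channel -> (trials, channels, 2) features.
alpha = band_power(epochs, fs, 8, 13)
beta = band_power(epochs, fs, 13, 30)
features = np.stack([alpha, beta], axis=-1)

# Classify each channel separately to obtain a spatial map of accuracies.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
channel_acc = [
    cross_val_score(clf, features[:, ch, :], labels, cv=5).mean()
    for ch in range(features.shape[1])
]
print("best channel:", int(np.argmax(channel_acc)),
      "accuracy: %.2f" % max(channel_acc))
```

On real data, the per-channel accuracies could be plotted on the electrode layout to reproduce the kind of spatial pattern the abstract reports for the right frontal areas.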


Journal on Multimodal User Interfaces | 2016

Hierarchical committee of deep convolutional neural networks for robust facial expression recognition

Bo-Kyeong Kim; Jihyeon Roh; Suh-Yeon Dong; Soo-Young Lee


IEEE Transactions on Systems, Man, and Cybernetics | 2016

EEG-Based Classification of Implicit Intention During Self-Relevant Sentence Reading

Suh-Yeon Dong; Bo-Kyeong Kim; Soo-Young Lee


Brain and Artificial Intelligence Symposium | 2012

Understanding Human Implicit Intention based on Electroencephalography (EEG) (abstract only)

Soo-Young Lee; Suh-Yeon Dong


The 12th China-Japan-Korea Joint Workshop on Neurobiology and Neuroinformatics (NBNI2012) | 2012

Medial Prefrontal Cortex and Self-relevance: An fMRI Study (abstract only)

Suh-Yeon Dong; Soo-Young Lee


The 12th China-Japan-Korea Joint Workshop on Neurobiology and Neuroinformatics (NBNI2012) | 2012

Extraction of Geometric Curves Using Oscillatory Neural Network (abstract only)

Wonil Chang; Soo-Young Lee; Suh-Yeon Dong; Sang-Hoon Oh

Collaboration


Dive into Suh-Yeon Dong's collaborations.

Top Co-Authors

Sang-Hoon Oh

Pohang University of Science and Technology

Wonil Chang

Electronics and Telecommunications Research Institute
