
Publication


Featured research published by Hyun Kwon.


Molecular Crystals and Liquid Crystals | 1997

Alternately Stacked Langmuir-Blodgett Film of Phospholipid and ZnO as an Olfactory Sensing Membrane

Sa Choi; Yj Lee; Hyun Kwon; Yong-Keun Chang; Jong-Duk Kim

Langmuir-Blodgett (LB) films of phospholipid and ZnO were investigated as a sensing membrane to enhance the sensitivity of an olfactory sensor. The membranes were fabricated by alternately stacking dipalmitoylphosphatidic acid (DPPA) and ZnO. Compared with the DPPA LB film, the alternately stacked film showed higher sensitivity and faster response to gases. The chemical vapors used in this study were methanol, ethanol, and acetone.


Computer and Communications Security | 2018

POSTER: Zero-Day Evasion Attack Analysis on Race between Attack and Defense

Hyun Kwon; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, which are intentionally corrupted by noise, can lead to misclassification. Because adversarial examples are serious threats to DNNs, both adversarial attacks and methods of defending against them have been studied continuously. Zero-day adversarial examples are created with new test data and are unknown to the classifier; hence, they represent a more significant threat to DNNs. To the best of our knowledge, the literature contains no analytical studies of zero-day adversarial examples that examine attack and defense methods experimentally across several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed through experiments on scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, and adversarial training was used as a typical defense. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, the average distortions, and the recognition of original samples across several scenarios with fixed and adaptive target models. Experimental results demonstrate that changing the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
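
For readers unfamiliar with the Carlini attack named above, here is a minimal sketch (not the authors' code) of a Carlini-Wagner-style L2 attack in PyTorch. The names model, x, and target are hypothetical placeholders; the published attack additionally uses a tanh change of variables and a binary search over the trade-off constant c, both omitted here.

    import torch

    def cw_l2_attack(model, x, target, c=1.0, steps=100, lr=0.01, kappa=0.0):
        # Targeted attack: find a small perturbation delta so that
        # model(x + delta) predicts `target`.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = (x + delta).clamp(0, 1)  # keep pixels in a valid range
            logits = model(adv)
            # f(x'): push the target logit above the best non-target logit
            target_logit = logits.gather(1, target.unsqueeze(1)).squeeze(1)
            others = logits.clone()
            others.scatter_(1, target.unsqueeze(1), float("-inf"))
            attack_term = (others.max(dim=1).values - target_logit + kappa).clamp(min=0)
            distortion = (delta ** 2).flatten(1).sum(dim=1)  # squared L2 norm
            loss = (distortion + c * attack_term).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (x + delta).detach().clamp(0, 1)

In the adaptive-target-model scenario described above, the defender in effect keeps changing model's parameters between queries, so a perturbation optimized against one snapshot of the model loses effectiveness.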


Computers & Security | 2018

Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier

Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) have been applied in several useful services involving machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples (slightly modified data that lead to incorrect classification) are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that the friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications.
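
As a rough illustration of the transformation described above (a sketch under assumptions, not the authors' implementation), the targeted variant can be written as one optimization over a distortion term plus two classification terms; friend and enemy are hypothetical classifier modules:

    import torch
    import torch.nn.functional as F

    def friend_safe_example(friend, enemy, x, y_true, y_target,
                            steps=200, lr=0.01, w=1.0):
        # Targeted configuration: the enemy should predict y_target
        # while the friend keeps predicting the true label y_true.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = (x + delta).clamp(0, 1)
            loss = (
                (delta ** 2).sum()                           # keep distortion small
                + w * F.cross_entropy(friend(adv), y_true)   # friend stays correct
                + w * F.cross_entropy(enemy(adv), y_target)  # enemy is deceived
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (x + delta).detach().clamp(0, 1)

The untargeted configuration would replace the enemy term with one that merely penalizes the enemy's correct classification of y_true, rather than steering it toward a chosen target class.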


International Conference on Information Security and Cryptology | 2017

Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network

Hyun Kwon; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) perform effectively in machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples (slightly modified data that lead to incorrect classification) are a severe threat to the security of DNNs. However, in some situations adversarial examples might be useful, e.g., for deceiving an enemy classifier on a battlefield. In that case, friendly classifiers should not be deceived. In this paper, we propose adversarial examples that are friend-safe, which means that friendly machines can classify the adversarial example correctly. To make such examples, a transformation is carried out to minimize the probability of the friend's incorrect classification and of the adversary's correct classification. We suggest two configurations of the scheme: targeted and untargeted class attacks. In experiments using the MNIST dataset, the proposed method shows a 100% attack success rate and 100% friendly accuracy with little distortion (2.18 and 1.53 for the respective configurations). Finally, we propose a mixed battlefield application and a new covert channel scheme.


Molecular Crystals and Liquid Crystals | 1999

Surface Properties of Metallophthalocyanine LB Films and their Sensing Applications

Young-Jin Lee; Hyun Kwon; Su-An Choi; Young Keun Chang; Young H. Chang; Jong-Duk Kim

Multilayer thin films of H2Pc and CuPc were coated on a quartz crystal microbalance (QCM) by the LB method to be used as NO2 sensing materials. The mechanism of NO2 desorption from both Pc sensing films was investigated using XPS and FTIR.
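
As background on the sensing principle (standard QCM theory, not stated in the abstract): gas adsorbed on the coated crystal lowers its resonance frequency, conventionally modeled by the Sauerbrey relation

    \Delta f = -\frac{2 f_0^2}{A \sqrt{\rho_q \mu_q}} \, \Delta m

where \Delta f is the frequency shift, f_0 the fundamental resonance frequency of the crystal, A the electrode area, \rho_q and \mu_q the density and shear modulus of quartz, and \Delta m the adsorbed mass.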


IEICE Transactions on Information and Systems | 2018

CAPTCHA Image Generation Systems Using Generative Adversarial Networks

Hyun Kwon; Yongchul Kim; Hyunsoo Yoon; Daeseon Choi


IEICE Transactions on Information and Systems | 2018

Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers

Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi


IEEE Access | 2018

Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network

Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi


Bioprocess Research Center Annual Symposium | 1997

NO2 Gas Sensing Properties of QCM Sensor Using H2Pc LB Films

Hyun Kwon; Yj Lee; Sa Choi; James Kim; YongKeun Chang


The 3rd East Asian Conference on Chemical Sensors | 1997

Performance of Metallophthalocyanine LB Membrane as a NO2 Sensing Material

Hyun Kwon; Yj Lee; Sa Choi; Yong-Keun Chang; Jong-Duk Kim

Collaboration


Dive into Hyun Kwon's collaborations.

Top Co-Authors

Daeseon Choi (Kongju National University)