Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daeseon Choi is active.

Publication


Featured research published by Daeseon Choi.


Computer and Communications Security | 2018

POSTER: Zero-Day Evasion Attack Analysis on Race between Attack and Defense

Hyun Kwon; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, which are intentionally corrupted by noise, can lead to misclassification. As adversarial examples are serious threats to DNNs, both adversarial attacks and methods of defending against them have been continuously studied. Zero-day adversarial examples are created with new test data and are unknown to the classifier; hence, they represent a more significant threat to DNNs. To the best of our knowledge, no analytical study in the literature examines zero-day adversarial examples with a focus on attack and defense methods through experiments under several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments under various scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, and adversarial training was used as a typical defense. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, average distortions, and recognition of original samples across several scenarios of fixed and adaptive target models. Experimental results demonstrate that changing the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
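The experimental setup can be made concrete with a short sketch. The paper evaluates the Carlini attack; for brevity, the sketch below uses a simpler fast-gradient-sign perturbation as a stand-in, and the adversarial-training step illustrates how an adaptive target model keeps changing its parameters so that previously crafted examples go stale. The model, optimizer, and epsilon value are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch only: the paper evaluates the Carlini attack, but a
# simpler fast-gradient-sign perturbation is shown here for brevity.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.1) -> torch.Tensor:
    """Craft an adversarial example by nudging x along the loss gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small, bounded perturbation that tends to flip the classifier's decision.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model: nn.Module, optimizer, x, y, eps: float = 0.1):
    """One step of the 'adaptive target model' idea: the defender keeps updating
    its parameters on both clean and adversarial inputs."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = (nn.functional.cross_entropy(model(x), y)
            + nn.functional.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```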


Proceedings of the First Workshop on Radical and Experiential Security | 2018

Active Authentication Experiments Using Actual Application Usage Log

Gwonsang Ryu; Sohee Park; Daeseon Choi; Youngsam Kim; Seung-Hyun Kim; Soo Hyung Kim; Dowan Kim; Daeyong Kwon

Active authentication is a user authentication scheme that uses a user's behavioral characteristics and environmental information collected in the background to reduce the number of explicit authentication requests. In this paper, we propose an active authentication scheme that performs Grade-Up and Grade-Extend after comparing a mobile device user's confidence grade with the application's authentication level. We collected application usage logs, face confidence, Context of Interest (COI) familiarity, mobile device placement, and screen logs from 22 participants over approximately 42.5 days. The proposed scheme classifies the applications that participants used into four authentication levels, depending on each application's authentication method. Our experiments demonstrate that the proposed scheme reduces the number of explicit authentication requests by 49%.
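As a rough illustration of the decision logic described in the abstract, the sketch below compares a session's confidence grade with the application's required authentication level and either extends the grade or falls back to an explicit prompt. The four-level scale, the 0.8 signal threshold, and the 300-second extension window are assumptions made for the example; the abstract does not give these values.

```python
# Hypothetical sketch of the Grade-Up / Grade-Extend comparison; the scale,
# thresholds, and window below are assumptions, not the paper's parameters.
from dataclasses import dataclass

@dataclass
class Session:
    confidence_grade: int   # grade inferred from background signals (1..4, assumed)
    grade_expiry_s: float   # seconds until the current grade lapses

def authorize(session: Session, app_required_level: int,
              recent_signal_score: float) -> str:
    """Decide whether an explicit authentication prompt is needed."""
    if session.confidence_grade >= app_required_level:
        # Grade-Extend: background signals still look good, so prolong the
        # grade instead of prompting the user again.
        if recent_signal_score >= 0.8:        # assumed threshold
            session.grade_expiry_s = 300.0    # assumed extension window
        return "allow"
    # Grade-Up: confidence is below the app's level, so ask for explicit
    # authentication and raise the grade on success.
    return "prompt_explicit_authentication"
```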


Computers & Security | 2018

Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier

Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis of machine learning tasks. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that a friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two MNIST configurations, and 49.02 and 27.61 for the two CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for further consideration.
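The transformation described in the abstract can be stated as an optimization over a perturbation added to the input. The formulation below is a plausible reading of that description, not the paper's exact objective; the classifier names f_friend and f_enemy, the loss L, and the weights lambda_1 and lambda_2 are introduced here for illustration (untargeted case; the targeted case steers f_enemy toward a chosen target class instead).

```latex
\min_{\delta}\;
  \|\delta\|_2^2
  \;+\; \lambda_1\, \mathcal{L}\big(f_{\mathrm{friend}}(x+\delta),\, y\big)
  \;-\; \lambda_2\, \mathcal{L}\big(f_{\mathrm{enemy}}(x+\delta),\, y\big)
```

Here x is the original input, y its true label, and L a classification loss such as cross-entropy: the first term keeps the distortion small, the second keeps the friend on the correct class, and the third pushes the adversary away from it.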


Workshop on Information Security Applications | 2017

Detecting Online Game Chargeback Fraud Based on Transaction Sequence Modeling Using Recurrent Neural Network

Namsup Lee; Hyunsoo Yoon; Daeseon Choi

We propose an online game money chargeback fraud detection method that uses the operation sequence, the gradient of the charge/purchase amount, time, and country as features of a transaction. We model the sequence of transactions with a recurrent neural network, which also combines charge and purchase transaction features into a single feature vector. In experiments using real data (a log of 483,410 transactions) from a well-known online game company in Korea, the proposed method shows a 78% recall rate with a 0.057% false positive rate. This recall rate is 7% better than the current methodology, which uses transaction statistics as features.
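To make the modeling step concrete, the sketch below feeds per-transaction feature vectors (operation type, country, amount gradient, and time delta) through a recurrent network and outputs a fraud probability. The GRU choice, embedding sizes, and feature dimensions are assumptions for illustration; the abstract only names the features.

```python
# Illustrative sketch of an RNN over per-transaction feature vectors, in the
# spirit of the abstract; sizes and the GRU choice are assumptions.
import torch
import torch.nn as nn

class ChargebackRNN(nn.Module):
    def __init__(self, n_ops: int = 50, n_countries: int = 200, hidden: int = 64):
        super().__init__()
        self.op_emb = nn.Embedding(n_ops, 16)          # operation type
        self.country_emb = nn.Embedding(n_countries, 8)
        # Per-step input: op + country embeddings, amount gradient, time delta.
        self.rnn = nn.GRU(16 + 8 + 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, ops, countries, amount_grad, time_delta):
        # ops, countries: (batch, seq) long tensors;
        # amount_grad, time_delta: (batch, seq) float tensors.
        x = torch.cat([self.op_emb(ops),
                       self.country_emb(countries),
                       amount_grad.unsqueeze(-1),
                       time_delta.unsqueeze(-1)], dim=-1)
        _, h = self.rnn(x)                        # final state summarizes the sequence
        return torch.sigmoid(self.head(h[-1]))    # fraud probability per sequence
```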


International Conference on Information Security and Cryptology | 2017

Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network

Hyun Kwon; Hyunsoo Yoon; Daeseon Choi

Deep neural networks (DNNs) perform effectively in machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. However, in some situations, adversarial examples might be useful, e.g., for deceiving an enemy classifier on a battlefield. In that case, friendly classifiers should not be deceived. In this paper, we propose adversarial examples that are friend-safe, which means that friendly machines can classify the adversarial examples correctly. To produce such examples, a transformation is carried out to minimize the friend's incorrect classification and the adversary's correct classification. We suggest two configurations of the scheme: targeted and untargeted class attacks. In experiments using the MNIST dataset, the proposed method shows a 100% attack success rate and 100% friendly accuracy with little distortion (2.18 and 1.53 for the respective configurations). Finally, we propose a mixed battlefield application and a new covert channel scheme.
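Read as an optimization, the transformation above can be sketched as a small gradient-based loop over a perturbation. The classifier handles f_friend and f_enemy, the loss weights, step count, and learning rate are illustrative assumptions; only the shape of the objective (small distortion, friend kept correct, adversary pushed off the true label) follows the abstract, and this shows the untargeted configuration.

```python
# Sketch of the friend-safe transformation as gradient-based optimization.
# Weights, step count, and learning rate are assumptions for illustration.
import torch
import torch.nn as nn

def friend_safe_example(f_friend: nn.Module, f_enemy: nn.Module,
                        x: torch.Tensor, y: torch.Tensor,
                        steps: int = 200, lr: float = 0.01,
                        w_friend: float = 1.0, w_enemy: float = 1.0) -> torch.Tensor:
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.functional.cross_entropy
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        loss = (delta.pow(2).sum()                    # keep distortion small
                + w_friend * ce(f_friend(x_adv), y)   # friend keeps the true label
                - w_enemy * ce(f_enemy(x_adv), y))    # adversary is pushed off it
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0.0, 1.0).detach()
```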


IEICE Transactions on Information and Systems | 2018

CAPTCHA Image Generation Systems Using Generative Adversarial Networks

Hyun Kwon; Yongchul Kim; Hyunsoo Yoon; Daeseon Choi


WOOT'16 Proceedings of the 10th USENIX Conference on Offensive Technologies | 2016

Eavesdropping One-Time Tokens over Magnetic Secure Transmission in Samsung Pay

Daeseon Choi; Younho Lee


Computer and Communications Security | 2018

POSTER: Address Authentication Based on Location History

Hosung Park; Daeyong Kwon; Seungsoo Nam; Daeseon Choi


IEICE Transactions on Information and Systems | 2018

Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers

Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi


IEEE Access | 2018

Eavesdropping of Magnetic Secure Transmission Signals and Its Security Implications for a Mobile Payment Protocol

Daeseon Choi; Younho Lee

Collaboration


Dive into Daeseon Choi's collaborations.

Top Co-Authors

Daeyong Kwon

Kongju National University


Younho Lee

Seoul National University of Science and Technology


Dowan Kim

Kongju National University


Gwonsang Ryu

Kongju National University


Hosung Park

Kongju National University
