Daeseon Choi
Kongju National University
Publication
Featured research published by Daeseon Choi.
Computer and Communications Security | 2018
Hyun Kwon; Hyunsoo Yoon; Daeseon Choi
Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, inputs intentionally corrupted with noise, can cause misclassification. Because adversarial examples pose a serious threat to DNNs, both adversarial attacks and defenses against them have been studied continuously. Zero-day adversarial examples are created from new test data and are unknown to the classifier; hence, they represent an even greater threat to DNNs. To the best of our knowledge, the literature contains no analytical study of zero-day adversarial examples that examines attack and defense methods experimentally under several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments over scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, and adversarial training was used as a typical defense. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, average distortions, and recognition of the original samples across several fixed and adaptive target-model scenarios. Experimental results demonstrate that changing the parameters of the target model in real time confers resistance to adversarial examples in both the fixed and adaptive target models.
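As a rough illustration of the adversarial-training defense evaluated above, the sketch below retrains a classifier on perturbed batches. The paper attacks with the Carlini method; the simpler fast gradient sign method (FGSM) stands in for it here, so the function names, hyperparameters, and PyTorch framing are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of an adversarial-training step, with FGSM standing in
# for the Carlini attack used in the paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.1):
    """Craft an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """One training step on a mix of clean and adversarial batches."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the adaptive target model would roughly correspond to refreshing `model`'s parameters between attacker queries, so that examples crafted against one snapshot no longer transfer to the next.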
Proceedings of the First Workshop on Radical and Experiential Security | 2018
Gwonsang Ryu; Sohee Park; Daeseon Choi; Youngsam Kim; Seung-Hyun Kim; Soo Hyung Kim; Dowan Kim; Daeyong Kwon
Active authentication is a user authentication scheme that uses a user's behavioral characteristics and environmental information, collected in the background, to reduce the number of explicit authentication requests. In this paper, we propose an active authentication scheme that performs Grade-Up and Grade-Extend operations after comparing a mobile device user's confidence grade with the authentication level of the application in use. We collected application usage logs, face confidence, Context of Interest (COI) familiarity, mobile device placement, and screen logs from 22 participants over approximately 42.5 days. The proposed scheme classifies the applications the participants used into four authentication levels according to each application's authentication method. Our experiments demonstrate that the proposed scheme reduces the number of explicit authentication requests by 49%.
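A minimal sketch of the grade-versus-level comparison described above, assuming a numeric confidence grade and per-application authentication levels; the Grade-Extend rule, field names, and time window here are hypothetical stand-ins for the paper's definitions.

```python
# Illustrative decision logic: skip the explicit prompt when the background
# confidence grade still covers the application's required level.
from dataclasses import dataclass

@dataclass
class UserState:
    confidence_grade: int   # grade inferred from behavioral/context signals
    grade_expiry: float     # timestamp after which the grade lapses

def needs_explicit_auth(state: UserState, app_level: int, now: float) -> bool:
    """Return True only when an explicit authentication prompt is required."""
    if now > state.grade_expiry:
        return True                          # grade lapsed: re-authenticate
    return state.confidence_grade < app_level

def grade_extend(state: UserState, now: float, window: float = 300.0) -> None:
    """Extend the validity window of the current grade (Grade-Extend)."""
    state.grade_expiry = max(state.grade_expiry, now + window)
```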
Computers & Security | 2018
Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi
Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis in machine learning tasks. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose the friend-safe adversarial example, meaning that a friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out that minimizes both the probability of incorrect classification by the friend and the probability of correct classification by the adversary. We suggest two configurations of the scheme: targeted and untargeted class attacks. We evaluated the scheme on the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications.
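The transformation described above can be read as joint minimization of two classification terms plus a distortion penalty. The sketch below writes that objective in PyTorch for the untargeted configuration; the loss weighting, optimizer, and clamping are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a friend-safe objective: keep the friendly model correct,
# push the enemy model away from the true class, and keep distortion small.
import torch
import torch.nn.functional as F

def friend_safe_loss(friend, enemy, x_adv, x_orig, y_true, c=1.0):
    friend_term = F.cross_entropy(friend(x_adv), y_true)   # friend stays right
    enemy_term = -F.cross_entropy(enemy(x_adv), y_true)    # enemy goes wrong
    distortion = torch.norm(x_adv - x_orig, p=2)           # small perturbation
    return friend_term + enemy_term + c * distortion

def craft(friend, enemy, x, y, steps=100, lr=0.01):
    """Gradient descent on the input to produce a friend-safe example."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        friend_safe_loss(friend, enemy, x_adv, x, y).backward()
        opt.step()
        x_adv.data.clamp_(0.0, 1.0)   # keep a valid image
    return x_adv.detach()
```

In the targeted configuration, the enemy term would instead be an ordinary cross-entropy toward the chosen target class rather than a negated one.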
Workshop on Information Security Applications | 2017
Namsup Lee; Hyunsoo Yoon; Daeseon Choi
We propose a chargeback fraud detection method for online game money that uses the operation sequence, the gradient of the charge/purchase amount, time, and country as transaction features. We model the sequence of transactions with a recurrent neural network that combines charge and purchase transaction features into a single feature vector. In experiments on real data (a log of 483,410 transactions) from a well-known online game company in Korea, the proposed method achieves a 78% recall rate at a 0.057% false positive rate. This recall rate is 7% better than the current methodology, which uses transaction statistics as features.
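A minimal sketch of the sequence model described above, assuming each transaction has already been encoded as a fixed-length feature vector (operation, amount gradient, time, country); the GRU cell, layer sizes, and sigmoid head are illustrative choices, since the paper specifies only a recurrent neural network.

```python
# Illustrative fraud scorer: a GRU reads the per-user transaction sequence
# and emits a fraud probability for the whole sequence.
import torch
import torch.nn as nn

class FraudRNN(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tx_seq):                  # (batch, seq_len, feat_dim)
        _, h = self.rnn(tx_seq)                 # final hidden state summarizes the history
        return torch.sigmoid(self.head(h[-1]))  # fraud probability per sequence
```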
International Conference on Information Security and Cryptology | 2017
Hyun Kwon; Hyunsoo Yoon; Daeseon Choi
Deep neural networks (DNNs) perform effectively in machine learning tasks such as image recognition, intrusion detection, and pattern analysis. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. However, in some situations, adversarial examples might be useful, e.g., for deceiving an enemy classifier on a battlefield. In that case, friendly classifiers should not be deceived. In this paper, we propose adversarial examples that are friend-safe, which means that friendly machines can classify the adversarial example correctly. To create such examples, a transformation is carried out that minimizes the friend's incorrect classification and the adversary's correct classification. We suggest two configurations of the scheme: targeted and untargeted class attacks. In experiments using the MNIST dataset, the proposed method shows a 100% attack success rate and 100% friendly accuracy with little distortion (2.18 and 1.53 for the two configurations, respectively). Finally, we propose a mixed battlefield application and a new covert channel scheme.
IEICE Transactions on Information and Systems | 2018
Hyun Kwon; Yongchul Kim; Hyunsoo Yoon; Daeseon Choi
WOOT '16: Proceedings of the 10th USENIX Conference on Offensive Technologies | 2016
Daeseon Choi; Younho Lee
Computer and Communications Security | 2018
Hosung Park; Daeyong Kwon; Seungsoo Nam; Daeseon Choi
IEICE Transactions on Information and Systems | 2018
Hyun Kwon; Yongchul Kim; Ki-Woong Park; Hyunsoo Yoon; Daeseon Choi
IEEE Access | 2018
Daeseon Choi; Younho Lee