Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Byeongkyu Ko is active.

Publication


Featured research published by Byeongkyu Ko.


Soft Computing | 2014

A method of DDoS attack detection using HTTP packet pattern and rule engine in cloud computing environment

Junho Choi; Chang Choi; Byeongkyu Ko; Pankoo Kim

Cloud computing advances distributed-processing technologies such as thin clients and grid computing, implemented through virtualization of servers and storage together with advanced network functionality. However, it also has disadvantages: routing is uniform and therefore easy to target, and attack methods and tools are readily available, so in the worst case all network resources and operations can be blocked at once. Recent studies have therefore examined pattern analysis and network-based access control for infringement response across Infrastructure as a Service, Platform as a Service, and Software as a Service offerings. This study proposes a method that integrates detection of HTTP GET flooding, a form of Distributed Denial-of-Service attack, with MapReduce processing for fast attack detection in a cloud computing environment. Experiments on processing time compared the proposed approach, which detects attack features from HTTP packet patterns and Web server log data, against Snort-based pattern detection. The results show that the proposed method outperforms Snort detection, with shorter processing times as congestion increases.
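
A minimal sketch of the MapReduce-style counting idea behind HTTP GET flooding detection. The log format, window size, and threshold below are illustrative assumptions, not values from the paper:

```python
# Threshold-based HTTP GET flooding detection in a MapReduce style.
# Log format, window size, and threshold are assumed for illustration.
from collections import Counter

GET_THRESHOLD = 100      # assumed requests-per-window limit per client

def map_phase(log_lines):
    """Emit (source_ip, 1) for every HTTP GET request in the window."""
    for line in log_lines:
        ip, method, _path = line.split()[:3]
        if method == "GET":
            yield ip, 1

def reduce_phase(pairs):
    """Sum request counts per source IP."""
    counts = Counter()
    for ip, n in pairs:
        counts[ip] += n
    return counts

def detect_flooding(log_lines):
    """Return IPs whose GET count exceeds the per-window threshold."""
    counts = reduce_phase(map_phase(log_lines))
    return {ip for ip, n in counts.items() if n > GET_THRESHOLD}

# Example: a synthetic burst from one client
logs = ["10.0.0.5 GET /index.html"] * 150 + ["10.0.0.9 GET /about.html"] * 3
print(detect_flooding(logs))  # {'10.0.0.5'}
```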


Journal of Network and Computer Applications | 2014

Text analysis for detecting terrorism-related articles on the web

Dongjin Choi; Byeongkyu Ko; Heesun Kim; Pankoo Kim

Classifying web documents is one of the most important tasks in revealing terrorism-related documents. The Internet provides a great deal of valuable information, and the volume of web content is growing steadily, which makes it difficult to identify potentially dangerous documents. Simply extracting keywords from a document is not enough to classify its contents. Many techniques have been studied for building automated document classification systems, mostly statistical and knowledge-based approaches, but these do not yield satisfactory results because of the complexity of natural language. To overcome this deficiency, we propose a method that uses word similarity based on the WordNet hierarchy together with n-gram frequency data. The method was tested on sampled New York Times articles retrieved by querying four distinct words from four different areas. Experimental results show that the proposed method effectively extracts context words from text and identifies terrorism-related documents.
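
A hedged sketch of the WordNet-hierarchy similarity step: score candidate context words against a query term by their best path similarity. It requires `nltk` with the WordNet corpus downloaded; the query word and candidates are illustrative, not from the paper's data set:

```python
# Score candidate context words by WordNet path similarity to a query.
from nltk.corpus import wordnet as wn

def max_similarity(word_a, word_b):
    """Best path similarity over all noun-synset pairs of the two words."""
    best = 0.0
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            sim = sa.path_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best

query = "terrorism"
candidates = ["bomb", "attack", "recipe", "weather"]
for score, word in sorted(((max_similarity(query, w), w) for w in candidates),
                          reverse=True):
    print(f"{word}: {score:.3f}")  # higher score = stronger context word
```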


Network-Based Information Systems | 2012

Detection of cross site scripting attack in wireless networks using n-Gram and SVM

Junho Choi; Chang Choi; Byeongkyu Ko; Pankoo Kim

Most attacks targeting the web aim at weak points in web applications. Attacks such as SQL injection and Cross Site Scripting (XSS) may not threaten the operation of the web site itself, but they are critical for sites handling important information, because sensitive data can be obtained and falsified. This paper proposes a method for detecting malicious injected script code using n-gram indexing and a Support Vector Machine (SVM). To evaluate the method, data sets were labeled as normal or malicious code, and malicious script code was detected by feeding index terms generated by n-gram analysis, together with features from a code dictionary, to an SVM classifier. The results show strong performance in detecting malicious scripts, with better recall on malicious script code than existing methods.
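
A minimal sketch of the n-gram + SVM pipeline, using character trigrams as index terms via scikit-learn. The tiny training set is synthetic; the paper's code dictionary and real data sets are not reproduced here:

```python
# Character-trigram features fed to a linear SVM for script classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

train_x = [
    "SELECT name FROM users WHERE id = 1",                # normal query
    "GET /index.html HTTP/1.1",                           # normal request
    "' OR '1'='1' -- ",                                   # SQL injection
    "<script>document.location='http://evil'</script>",   # XSS payload
]
train_y = [0, 0, 1, 1]  # 0 = normal, 1 = malicious

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),  # trigram index terms
    SVC(kernel="linear"),
)
model.fit(train_x, train_y)
print(model.predict(["<script>alert(1)</script>"]))  # expected: [1]
```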


International Journal of Distributed Sensor Networks | 2014

Extracting User Interests on Facebook

Jeongin Kim; Dongjin Choi; Byeongkyu Ko; Eunji Lee; Pankoo Kim

With the rapid spread of smart devices, Facebook has become a representative social network service. Facebook profiles contain the information that forms network relationships between people and supports closed information sharing between users; Facebook also provides the social plug-in "Like." Existing research on Facebook has generally not considered this "Like" feature. This paper proposes a method for extracting a user's interests using the term frequency of nouns together with "Likes." Posts and Likes were collected through the Facebook Open API; the posts were preprocessed and the term frequency of nouns was calculated. After weighting user interests by combining noun term frequency with "Likes," the top-ranked weighted terms were extracted as the user's interests.
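
An illustrative sketch of weighting noun frequencies by "Like" counts to rank interests. The posts, like counts, the simple whitespace tokenization standing in for noun extraction, and the linear weighting are all assumptions, not the paper's Facebook Open API collection or POS tagging:

```python
# Rank interest terms by term frequency weighted with per-post Likes.
from collections import Counter

posts = [
    {"text": "camping gear and camping food", "likes": 12},
    {"text": "new camera lens for travel photography", "likes": 30},
    {"text": "travel plans for summer travel", "likes": 5},
]

weights = Counter()
for post in posts:
    tf = Counter(post["text"].split())               # term frequency of tokens
    for term, freq in tf.items():
        weights[term] += freq * (1 + post["likes"])  # assumed Like weighting

for term, w in weights.most_common(3):               # top-ranked interests
    print(term, w)
```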


Archive | 2011

Solving English Questions through Applying Collective Intelligence

Dongjin Choi; Myunggwon Hwang; Byeongkyu Ko; Pankoo Kim

Many researchers use n-gram statistics, which provide statistical information about cohesion among words, to extract semantic information from web documents. N-grams have also been applied to spell-checking systems, prediction of user interests, and other tasks. This paper is a fundamental study estimating lexical cohesion in documents using the trigram, 4-gram, and 5-gram data offered by Google. Its main purpose is to assess the potential of Google n-grams using TOEIC question data sets.
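
A sketch of answering a fill-in-the-blank question by n-gram cohesion: insert each option into the blank and prefer the candidate whose trigram is most frequent. The tiny frequency table stands in for the Google n-gram corpus the paper queries; these counts are invented for illustration:

```python
# Choose the answer option that maximizes trigram frequency around the blank.
TRIGRAM_FREQ = {
    ("interested", "in", "learning"): 90_000,   # invented counts
    ("interested", "on", "learning"): 400,
    ("interested", "at", "learning"): 150,
}

def best_option(left, options, right):
    """Pick the option giving the highest trigram frequency."""
    return max(options, key=lambda w: TRIGRAM_FREQ.get((left, w, right), 0))

print(best_option("interested", ["in", "on", "at"], "learning"))  # 'in'
```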


Network-Based Information Systems | 2012

Automatic Evaluation of Document Classification Using N-Gram Statistics

Dongjin Choi; Byeongkyu Ko; Eunji Lee; Myunggwon Hwang; Pankoo Kim

With the development of World Wide Web technologies, people face trillions of web pages, and the web keeps growing dramatically, making it ever harder to find documents relevant to what users want to read. Classifying documents into predefined categories is one of the most important tasks in the field of Natural Language Processing. Over the years, many statistical and linguistic approaches have been applied to improve on traditional classifiers, but the problem remains unsolved: no machine yet understands human language perfectly, so every avenue toward making machines interpret text as humans do is worth considering. In this paper, we propose a method for classifying textual documents using n-gram co-occurrence statistics, which are well suited to finding similarities between documents, and we compare it with the traditional method suggested by Keselj. This paper covers only simple approaches and still needs more sophisticated experiments; nevertheless, the proposed method performs better than the Keselj approach.
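
A sketch of n-gram profile classification in the spirit of the comparison above: build a character n-gram profile per category and assign a new document to the category whose profile overlaps most with the document's. The categories and texts are toy data, not the paper's corpus, and the overlap measure is an assumption:

```python
# Character n-gram profiles per category; classify by profile overlap.
from collections import Counter

def profile(text, n=3, top=50):
    """Top character n-grams of a text, as a frequency Counter."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return Counter(dict(grams.most_common(top)))

categories = {
    "sports": profile("the team won the match after a late goal in the game"),
    "finance": profile("the bank raised interest rates as markets fell sharply"),
}

def classify(text):
    """Category whose profile shares the most n-grams with the document."""
    doc = profile(text)
    return max(categories, key=lambda c: sum((doc & categories[c]).values()))

print(classify("rates and markets moved after the bank decision"))  # 'finance'
```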


Archive | 2011

An Automatic Method for WordNet Concept Enrichment using Wikipedia Titles

Myunggwon Hwang; Dongjin Choi; Byeongkyu Ko; Junho Choi; Pankoo Kim

Knowledge bases such as WordNet are widely used for semantic information processing. However, much research indicates that existing knowledge bases cannot cover all of the concepts used in real-world speech and writing. To address this limitation, this research proposes a method that enriches WordNet concepts by analyzing the Wikipedia document set. Wikipedia currently contains more than 3.2 million documents describing tangible and intangible objects in detail, and it grows continuously as domain specialists add new subjects and content. Wikipedia content can therefore be put to good use for knowledge base enrichment.
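
A hedged sketch of the enrichment idea: find Wikipedia titles that WordNet does not cover, and use a parenthetical qualifier in the title (a common Wikipedia convention) as a candidate concept to attach under. The qualifier heuristic and the example titles are assumptions, not the paper's method in full. Requires `nltk` with the WordNet corpus:

```python
# Find Wikipedia titles missing from WordNet; suggest an attachment point.
import re
from nltk.corpus import wordnet as wn

titles = ["Hadoop (software)", "Seomun Market", "Apple"]  # illustrative

for title in titles:
    base = re.sub(r"\s*\(.*\)$", "", title)          # strip the qualifier
    if wn.synsets(base.replace(" ", "_")):
        continue                                     # already covered
    match = re.search(r"\((.*)\)$", title)
    qualifier = match.group(1) if match else None    # candidate hypernym text
    print(f"candidate new concept: {base!r}, attach under: {qualifier!r}")
```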


ACM Symposium on Applied Computing | 2015

Application-level task execution issues in mobile cloud computing

Abida Shahzad; Hyunho Ji; Pankoo Kim; Hanil Kim; Byeongkyu Ko; Jiman Hong

To maximize the throughput of computation-intensive applications on smartphones, the Mobile Cloud Computing (MCC) architecture was introduced, in which mobile devices connect to nearby cloud servers that execute such applications. Many researchers have focused on dividing application execution between mobile device and cloud in order to maximize throughput and minimize the workload on the mobile device. In this paper we compare the issues that arise in the augmented execution of applications across cloud and mobile. We conclude with a critical analysis of challenges that have not yet been fully met, and highlight directions for future work.


Research in Adaptive and Convergent Systems | 2014

A semantic weighting method for document classification based on Markov logic networks

Eunji Lee; Jeongin Kim; Junho Choi; Chang Choi; Byeongkyu Ko; Pankoo Kim

This paper proposes a semantic weighting method for classifying textual documents. Web documents hold great potential, and the amount of valuable information they contain has grown steadily over the years; the sheer size of the web makes it difficult to find documents relevant to what users want, and many researchers have worked to overcome this problem. Document classification is central to it. All documents are composed of numerous words, and many classification methods extract keywords from documents and then analyze keyword patterns or frequencies. In this paper, we propose a Category Term Weight (CTW), computed from document keywords, to improve document classification performance. CTW combines keyword frequency with semantic information, both of which help in finding similarities between documents; CTW is therefore calculated from a collection of training documents. The CTW of an unknown document and the CTWs stored in the Category Term Database are then fed into our Markov Logic Networks (MLNs) model, and the model is compared against a Naive Bayes baseline using the same CTW features. The experimental results show improved precision compared with the existing model.
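
A minimal sketch of a Category Term Weight in the spirit described above: blend a term's frequency in a category's training documents with a semantic relatedness score to the category label. The linear combination, the WordNet-based relatedness, and the toy documents are assumptions; the paper's Markov Logic Networks model is not reproduced here. Requires `nltk` with WordNet:

```python
# CTW sketch: term frequency blended with WordNet relatedness to category.
from collections import Counter
from nltk.corpus import wordnet as wn

def relatedness(term, category):
    """Best WordNet path similarity between term and category label."""
    best = 0.0
    for st in wn.synsets(term):
        for sc in wn.synsets(category):
            sim = st.path_similarity(sc)
            if sim is not None and sim > best:
                best = sim
    return best

def ctw(training_docs, category, alpha=0.5):
    """Category Term Weight: frequency blended with semantic relatedness."""
    tf = Counter(w for doc in training_docs for w in doc.split())
    total = sum(tf.values())
    return {t: alpha * (n / total) + (1 - alpha) * relatedness(t, category)
            for t, n in tf.items()}

weights = ctw(["dog cat pet food", "dog leash and pet toys"], "animal")
print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])
```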


International Conference on HCI in Business | 2015

Low Ambiguity First Algorithm: A New Approach to Knowledge-Based Word Sense Disambiguation

Dongjin Choi; Myunggwon Hwang; Byeongkyu Ko; Sicheon You; Pankoo Kim

The Word Sense Disambiguation (WSD) problem is considered one of the most challenging tasks in Natural Language Processing (NLP). Although researchers have applied robust machine learning, statistical techniques, and structural pattern matching, WSD performance still cannot beat human results because of the complexity of human language. Knowledge bases such as WordNet have gained popularity among researchers because they provide not only definitions of nouns and verbs but also semantic networks between senses, as defined by linguists. However, knowledge bases do not cover the entire vocabulary of human languages, because maintaining and expanding them is an enormous task requiring great effort and time. Rather than expanding the knowledge base, the goal of this paper is an approach that solves WSD using only limited knowledge resources. We propose the low ambiguity first (LAF) algorithm, which disambiguates polysemous words in order of increasing ambiguity, given already disambiguated words, building on the structural semantic interconnections (SSI) approach. LAF rests on two hypotheses: first, adjacent words are more semantically relevant than distant words; second, word ambiguity can be measured by the frequency differences between the synsets of a word in WordNet. The experimental results support these hypotheses and show that the LAF algorithm improves on traditional WSD results.
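
A sketch of the low-ambiguity-first ordering: measure a word's ambiguity from the gap between its most and second most frequent WordNet senses (via SemCor lemma counts), and process words with the largest gap (lowest ambiguity) first. The gap definition and the first-sense fallback are assumptions; the SSI propagation step is omitted. Requires `nltk` with WordNet:

```python
# Order words by ambiguity (sense-frequency gap); low ambiguity first.
from nltk.corpus import wordnet as wn

def sense_counts(word):
    """Corpus frequencies of each sense of `word`, most frequent first."""
    counts = []
    for syn in wn.synsets(word):
        freq = sum(l.count() for l in syn.lemmas() if l.name().lower() == word)
        counts.append(freq)
    return sorted(counts, reverse=True)

def ambiguity(word):
    """Smaller gap between top two sense frequencies = more ambiguous."""
    counts = sense_counts(word)
    if len(counts) < 2:
        return float("inf")          # effectively unambiguous
    return counts[0] - counts[1]     # large gap -> low ambiguity

words = ["bank", "money", "river"]
for w in sorted(words, key=ambiguity, reverse=True):   # low ambiguity first
    print(w, wn.synsets(w)[0].definition())            # assumed first-sense pick
```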

Collaboration


Dive into Byeongkyu Ko's collaborations.

Top Co-Authors

Myunggwon Hwang

Korea Institute of Science and Technology

Hanil Kim

Jeju National University