Kinam Park
Korea University
Publication
Featured research published by Kinam Park.
Multimedia Tools and Applications | 2012
Kinam Park; Hyesung Jee; Taemin Lee; Soonyoung Jung; Heuiseok Lim
Web search users complain of the inaccurate results produced by current search engines. Most of these inaccurate results are due to a failure to understand the user’s search goal. This paper proposes a method to extract users’ intentions and to build an intention map representing the extracted intentions. The proposed method builds intention vectors from pages clicked in previous search logs for a given query; the components of an intention vector are the weights of the keywords in a document. User intentions are extracted by clustering the intention vectors and extracting intention keywords from each cluster, and the extracted intentions for a query are represented in an intention map. To analyze the effectiveness of the intention map, we extracted user intentions from 2,600 search log entries of a current domestic commercial search engine. The experimental results with a search engine using the intention maps show statistically significant improvements in user satisfaction scores.
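The clustering step described above can be illustrated with a short, hedged sketch: TF-IDF keyword weights of the clicked pages serve as intention vectors, k-means groups them, and the top-weighted terms of each cluster become the intention keywords of the map. The function name, parameters, and library choices below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch, assuming clicked pages are available as plain-text documents
# for one query. build_intention_map, n_intentions and n_keywords are
# hypothetical names, not from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def build_intention_map(clicked_pages, n_intentions=5, n_keywords=10):
    """Cluster keyword-weight (intention) vectors of clicked pages and
    return the top keywords of each cluster as intention labels."""
    vectorizer = TfidfVectorizer(max_features=5000)
    intention_vectors = vectorizer.fit_transform(clicked_pages)
    km = KMeans(n_clusters=n_intentions, n_init=10, random_state=0)
    cluster_ids = km.fit_predict(intention_vectors)

    terms = np.array(vectorizer.get_feature_names_out())
    intention_map = {}
    for c in range(n_intentions):
        # keywords with the highest centroid weight represent the intention
        top = np.argsort(km.cluster_centers_[c])[::-1][:n_keywords]
        intention_map[c] = list(terms[top])
    return intention_map, cluster_ids
```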
International Conference on Information Technology | 2010
Kinam Park; Taemin Lee; Soonyoung Jung; Sangyep Nam; Heuiseok Lim
Web search users complain of the inaccurate results of current search engines. Most of these inaccurate results stem from a failure to understand the user’s search goal. This paper proposes a method to mine users’ intentions and to build an intention map representing their information needs. It selects intention features from search logs obtained from previous search sessions on a given query and extracts user intentions by using clustering and labeling algorithms. The mined user intentions for the query are represented in an intention map. To analyze the effectiveness of intention maps, we extracted user intentions from 2,600 search log entries of a current domestic commercial web search engine. The experimental results using a web search engine with the intention maps show statistically significant improvements in user satisfaction scores.
Soft Computing | 2018
Jeong Eun Kim; Kinam Park; Jeong Min Chae; Hong Jun Jang; Byoung Wook Kim; Soon Young Jung
Automatic scoring systems for English descriptive answers have been studied actively, but relatively little research has addressed automatic scoring of Korean descriptive answers. In this paper, we propose a scoring method based on lexico-semantic patterns (LSPs), which are known to be a good fit for the morphologically rich Korean language. In the proposed method, postposition information is used as an important cue for detecting meaning differences in Korean. In addition to LSPs, we also apply a synonym dictionary as a meaning-extension approach to improve recall when scoring students’ answers. Our experimental results show that the proposed system outperforms the existing noun-keyword-based system by 0.137, and that the best performance is obtained when the synonym dictionary is used.
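A simplified sketch of how LSP matching with synonym expansion might look is given below. It assumes answers arrive as (morpheme, POS) pairs from a Korean morphological analyzer; the pattern format, POS tag names, and scoring are illustrative stand-ins, not the paper's actual rubric.

```python
# Hedged sketch of LSP-style matching with synonym expansion. The pattern is
# an ordered list of (lexeme, POS) requirements; postpositions are matched by
# their POS tag because they signal meaning differences in Korean.
from typing import Dict, List, Set, Tuple

def expand(token: str, synonyms: Dict[str, Set[str]]) -> Set[str]:
    """Return the token plus its dictionary synonyms (meaning extension)."""
    return {token} | synonyms.get(token, set())

def match_lsp(answer: List[Tuple[str, str]],
              pattern: List[Tuple[str, str]],
              synonyms: Dict[str, Set[str]]) -> bool:
    """True if every (lexeme, POS) element of the pattern appears in the
    answer, in order, allowing synonyms of the required lexeme."""
    i = 0
    for lex, pos in answer:
        want_lex, want_pos = pattern[i]
        if pos == want_pos and lex in expand(want_lex, synonyms):
            i += 1
            if i == len(pattern):
                return True
    return False

def score(answer, patterns, synonyms) -> float:
    """Fraction of rubric patterns satisfied by the student's answer."""
    hits = sum(match_lsp(answer, p, synonyms) for p in patterns)
    return hits / len(patterns) if patterns else 0.0
```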
Cluster Computing | 2017
YoungHee Jung; Kinam Park; Taemin Lee; Jeongmin Chae; Soonyoung Jung
Recently, social network services have become popular communication tools among internet and mobile users, and the opinions shared on them can contain a wide variety of emotions. Emotion analysis aims to extract emotion information, such as joy, happiness, amusement, fear, sadness, and loneliness, from texts expressed in natural language. Previous studies on emotion analysis of Korean texts have generally focused on basic sentiments such as positive/neutral/negative preferences or 4–10 emotion classes. In this paper, we propose an emotion analysis method based on supervised learning that classifies various emotions in messages written in Korean. We found the feature set optimized for each emotion class by evaluating combinations of various linguistic features and built a model that classifies emotions using the optimized feature sets. To do this, we constructed a corpus manually annotated with 25 emotion classes. We performed a 10-fold cross-validation experiment to evaluate the performance of the proposed method, which obtained F-values ranging from 73.1% to 98.0% across the 25 emotion classes. The optimized feature sets for most emotion classes commonly include word 2-gram, POS 1-gram, and character 1-gram features.
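A hedged sketch of such a classifier is shown below: word 2-gram, POS 1-gram, and character 1-gram features combined in a linear model and evaluated with 10-fold cross-validation. It assumes the data is a DataFrame with a "text" column (the Korean message) and a "pos" column (its space-joined POS tags); the 25-class corpus and the per-class feature optimization are not reproduced.

```python
# Sketch of a supervised emotion classifier over n-gram features; the column
# names "text" and "pos", the LinearSVC choice, and the macro-F1 metric are
# assumptions, not the paper's exact setup.
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def build_emotion_classifier():
    features = ColumnTransformer([
        ("word_2gram", CountVectorizer(ngram_range=(2, 2)), "text"),
        ("pos_1gram", CountVectorizer(ngram_range=(1, 1)), "pos"),
        ("char_1gram", CountVectorizer(analyzer="char", ngram_range=(1, 1)), "text"),
    ])
    return Pipeline([("features", features), ("clf", LinearSVC())])

# df: DataFrame with "text" and "pos" columns; y: one of the 25 emotion labels
# scores = cross_val_score(build_emotion_classifier(), df, y, cv=10, scoring="f1_macro")
```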
International Conference on Information Technology | 2010
Wonhee Yu; Kinam Park; Soonyoung Jung; Heuiseok Lim
This paper proposes a computational lexical entry acquisition model based on a representation model of the mental lexicon. The proposed model acquires lexical entries from a raw corpus by unsupervised learning, much as humans do. The model consists of a full-form acquisition module and a morpheme acquisition module. In the full-form acquisition module, core full-forms are automatically acquired according to frequency and recency thresholds. In the morpheme acquisition module, a substring that occurs repeatedly in different full-forms is chosen as a candidate morpheme, and the candidate is then corroborated as a morpheme using the entropy of the syllables in the string. Experimental results with a Korean corpus of about 16 million full-forms show that the model successfully acquires major full-forms and morphemes with precisions of 100% and 99.04%, respectively.
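The two acquisition steps can be sketched roughly as follows, under assumed thresholds: full-forms are kept by corpus frequency, and a candidate morpheme (a substring shared by different full-forms) is corroborated when the entropy of the syllables that follow it is high. All names and thresholds are illustrative, not the paper's, and the recency criterion is omitted for brevity.

```python
# Rough sketch of frequency-based full-form acquisition and entropy-based
# morpheme corroboration; min_freq and entropy_threshold are assumed values.
import math
from collections import Counter

def acquire_full_forms(corpus_tokens, min_freq=5):
    """Keep full-forms whose corpus frequency reaches a threshold."""
    freq = Counter(corpus_tokens)
    return {w for w, c in freq.items() if c >= min_freq}

def successor_entropy(prefix, full_forms):
    """Entropy of the syllable that follows `prefix` across the lexicon."""
    nexts = Counter(w[len(prefix)] for w in full_forms
                    if w.startswith(prefix) and len(w) > len(prefix))
    total = sum(nexts.values())
    if not total:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in nexts.values())

def is_morpheme(candidate, full_forms, entropy_threshold=1.0):
    """Corroborate a repeated substring as a morpheme: many different
    continuations (high successor entropy) suggest a morpheme boundary."""
    return successor_entropy(candidate, full_forms) >= entropy_threshold
```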
2010 Proceedings of the 5th International Conference on Ubiquitous Information Technologies and Applications | 2010
Doo Soon Park; Wonhee Yu; Kinam Park; Heui Seok Lim
This paper proposes a computational lexical entry acquisition model based on a representation model of the mental lexicon. The proposed model acquires lexical entries from a raw corpus by unsupervised learning, much as humans do, and consists of full-form and morpheme acquisition modules. We evaluated the model on a raw Korean corpus of about 16 million full-forms. The experimental results show that the model successfully acquires major Korean full-forms and morphemes with average precisions of 100% and 99.04%, respectively.
International Conference on Future Generation Communication and Networking | 2008
Kinam Park; Wonhee Yu; Heuiseok Lim; Soonyoung Jung
This study designs and implements an automatic lexical acquisition model grounded in cognitive neuroscience, drawing on the theory that the mental lexicon is represented in both full-listing and morphemic forms and that lexical access during word recognition is of a hybrid type. Through experiments and training, we were able to simulate the lexical acquisition process for linguistic input and to suggest a theoretical foundation for the order in which certain grammatical categories are acquired. The model also provides evidence from which the form of the mental lexicon in the human brain can be inferred, through the full-listing dictionary and decomposition dictionary that it produced automatically.
International Conference on Future Generation Communication and Networking | 2008
Jeongmin Chae; Kinam Park; YoungHee Jung; Soonyoung Jung; Jieun Chae; Heung-Bum Oh
The HLA genes control a variety of functions involved in the immune response and influence susceptibility to over 40 diseases, so it is important to find out how HLA causes a disease or modifies susceptibility to it or its course. In this paper, we developed an automatic HLA–disease information extraction procedure that uses biomedical publications. First, HLA alleles and diseases are recognized in the literature using purpose-built regular expressions and the disease categories of MeSH. Second, we generated parse trees for each sentence in PubMed using the Collins parser. Third, we built our own information extraction algorithm, which searches the parse trees and extracts relation information from sentences. The precision of the extracted relations was 89.6% on 144 randomly selected sentences.
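The first step, entity recognition, might look roughly like the sketch below, which uses an illustrative regular expression for HLA allele mentions and a tiny stand-in for the MeSH-derived disease list; the parse-tree-based relation extraction of the later steps is not shown.

```python
# Hedged sketch of HLA and disease mention recognition. The pattern and the
# small disease list are illustrative, not the rules used in the paper.
import re

# Matches mentions such as "HLA-B27" or "HLA-DRB1*04:01" (simplified pattern).
HLA_PATTERN = re.compile(r"\bHLA-[A-Z]{1,4}\d*(?:\*\d{2}(?::\d{2})*)?\b")

# In the real system this would come from MeSH disease categories.
DISEASE_TERMS = {"ankylosing spondylitis", "rheumatoid arthritis", "type 1 diabetes"}

def recognize_entities(sentence: str):
    """Return (HLA mentions, disease mentions) found in one sentence."""
    hla = HLA_PATTERN.findall(sentence)
    lowered = sentence.lower()
    diseases = [d for d in DISEASE_TERMS if d in lowered]
    return hla, diseases

# Example:
# recognize_entities("HLA-B27 is strongly associated with ankylosing spondylitis.")
# -> (["HLA-B27"], ["ankylosing spondylitis"])
```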
International Conference on Natural Computation | 2007
Kinam Park; Kigon Lyu; Wonhee Yu; Heuiseok Lim
In this paper, we examined the word frequency effect, the word similarity effect, and the semantic priming effect, phenomena that appear during a lexical decision task regardless of language, by applying them to two-syllable Korean words in both human participants and a connectionist model, and we compared and analyzed the results. The experiments show that the word frequency, word similarity, and semantic priming effects were present in both the human participants and the connectionist model, and that the behavioral and connectionist-model results exhibited meaningful similarity.
International Conference on Neural Information Processing | 2006
Youan Kwon; Kinam Park; Heuiseok Lim; Kichun Nam; Soonyoung Jung
In this paper, we investigate whether the word frequency effect and the word similarity effect also apply to the Korean lexical decision task (henceforth, LDT). We also propose a computational model of the Korean LDT and present a comparison between humans and the computational model on the task. We found that the word frequency effect and the similarity effect in the Korean LDT are language-general phenomena, appearing in both the behavioral experiment and the proposed computational simulation.