Jeesoo Bang
Pohang University of Science and Technology
Publications
Featured research published by Jeesoo Bang.
International Conference on Big Data and Smart Computing | 2015
Jeesoo Bang; Hyungjong Noh; Yonghee Kim; Gary Geunbae Lee
This study introduces an example-based chat-oriented dialogue system with a personalization framework that uses long-term memory. Previous representative chat-bots rely on simple keyword and pattern matching, so maintaining system quality requires generating numerous heuristic rules by hand, and linguistic expertise is needed to build those rules and matching patterns. To avoid this high annotation cost, we adopt example-based dialogue management for building the chat-oriented dialogue system. We also propose three features: POS-tagged tokens for sentence matching, named-entity (NE) types and values for retrieving proper responses, and back-off responses for unmatched user utterances. In addition, our system automatically collects user-related facts from user input sentences and stores them in a long-term memory; system responses can then be modified by applying the stored user-related facts. A relevance score for system responses is proposed to select responses that include user-related facts or that are frequently used. In several experiments, we found that the proposed features improve performance and that our system is competitive with the ALICE system trained on the same corpus.
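As an illustration of the response-selection idea described in this abstract, the minimal sketch below ranks candidate responses by a relevance score that rewards matches against stored user facts and frequently used responses. The data structures, weights, and function names are illustrative assumptions; the paper's exact scoring formula is not given in the abstract.

```python
# Hypothetical sketch of relevance scoring over a long-term memory of user facts.
# Weights and structures are illustrative assumptions, not the paper's exact method.
from collections import Counter

long_term_memory = {            # user-related facts collected from past utterances
    "favorite_food": "pizza",
    "hometown": "Pohang",
}
response_frequency = Counter()  # how often each example response has been used

def relevance_score(response: str, fact_weight: float = 1.0, freq_weight: float = 0.1) -> float:
    """Score a candidate response: reward mentions of stored user facts
    and (weakly) reward responses that have been used frequently."""
    fact_hits = sum(1 for value in long_term_memory.values() if value.lower() in response.lower())
    return fact_weight * fact_hits + freq_weight * response_frequency[response]

def select_response(candidates: list[str]) -> str:
    """Pick the highest-scoring candidate and record its use."""
    best = max(candidates, key=relevance_score)
    response_frequency[best] += 1
    return best

print(select_response([
    "What did you eat today?",
    "Do you still like pizza?",   # mentions a stored user fact -> higher score
]))
```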
Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2015
Sangdo Han; Jeesoo Bang; Seonghan Ryu; Gary Geunbae Lee
We developed a natural language dialog listening agent that uses a knowledge base (KB) to generate rich and relevant responses. Our system extracts an important named entity from a user utterance, then scans the KB to extract contents related to this entity. The system can generate diverse and relevant responses by assembling the related KB contents into appropriate sentences. Fifteen students tested our system; they gave it higher approval scores than they gave other systems. These results demonstrate that our system generated various responses and encouraged users to continue talking.
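A minimal sketch of the KB-lookup-and-assembly step described above, assuming a toy in-memory knowledge base of (entity, relation, value) triples and simple sentence templates; the actual system's KB schema and generation templates are not specified in the abstract.

```python
# Toy knowledge base of (entity, relation, value) triples -- an illustrative
# stand-in for the KB the listening agent scans for entity-related content.
KB = [
    ("Inception", "director", "Christopher Nolan"),
    ("Inception", "release_year", "2010"),
    ("Christopher Nolan", "notable_work", "Interstellar"),
]

TEMPLATES = {
    "director": "Oh, {entity} was directed by {value}, right?",
    "release_year": "{entity} came out in {value}, if I remember correctly.",
    "notable_work": "Speaking of {entity}, {value} is also worth talking about.",
}

def respond(user_entity: str) -> list[str]:
    """Scan the KB for triples about the extracted entity and
    assemble them into follow-up responses that keep the user talking."""
    responses = []
    for entity, relation, value in KB:
        if entity == user_entity and relation in TEMPLATES:
            responses.append(TEMPLATES[relation].format(entity=entity, value=value))
    return responses

print(respond("Inception"))
```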
Conference of the International Speech Communication Association | 2014
Yonghee Kim; Jeesoo Bang; Junhwi Choi; Seonghan Ryu; Sangjun Koo; Gary Geunbae Lee
This study introduces a personalization framework for dialog systems. Our system automatically collects user-related facts (i.e., triples) from user input sentences and stores them in one-shot memory. The system also keeps track of changes in user interests: extracted triples and entities (i.e., NP-chunks) are stored in a personal knowledge base (PKB), and a forgetting model manages their retention (i.e., interest). System responses can be modified by applying user-related facts from the one-shot memory. A relevance score for system responses is proposed to select responses that include high-retention triples and entities, or that are frequently used. We used the Movie-Dic corpus to construct a simple dialog system and to train PKBs. Adopting the PKB increased the retention sum of responses, and adopting the relevance score decreased the number of inappropriate responses. The system gave some personalized responses while maintaining its performance (i.e., appropriateness of responses).
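The sketch below illustrates one way a forgetting model could manage retention of PKB entries, using exponential decay refreshed by repeated mentions. The decay form, half-life, and reinforcement rule are assumptions for illustration; the abstract only states that a forgetting model manages retention.

```python
# Illustrative forgetting model for a personal knowledge base (PKB) entry.
# The exponential-decay form and half-life are assumptions, not the paper's model.
import math
import time

class PKBEntry:
    def __init__(self, triple, half_life_days: float = 7.0):
        self.triple = triple                      # e.g. ("user", "likes", "jazz")
        self.half_life = half_life_days * 86400   # seconds
        self.last_mentioned = time.time()
        self.strength = 1.0

    def retention(self, now: float | None = None) -> float:
        """Retention decays exponentially with time since the last mention."""
        now = now or time.time()
        elapsed = now - self.last_mentioned
        return self.strength * math.exp(-math.log(2) * elapsed / self.half_life)

    def reinforce(self):
        """Mentioning the fact again refreshes and strengthens it."""
        self.strength = min(self.strength + 0.5, 2.0)
        self.last_mentioned = time.time()

entry = PKBEntry(("user", "likes", "jazz"))
print(round(entry.retention(), 3))  # close to 1.0 right after the fact is stored
```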
ACM Transactions on Asian Language Information Processing | 2014
Jeesoo Bang; Jonghoon Lee; Gary Geunbae Lee; Minhwa Chung
This article presents an approach to modeling and predicting nonnative pronunciation variants. The prediction method was developed using generalized transformation-based error-driven learning (GTBL), and a modified goodness of pronunciation (GOP) score was applied to mispronunciation detection using logistic regression on top of the predicted pronunciation variants. English read speech uttered by Korean-speaking learners of English was collected, and pronunciation variation knowledge was extracted from the differences between the canonical phonemes and the actual phonemes of the speech data. With this knowledge, an error-driven learning approach was designed that automatically learns phoneme variation rules from phoneme-level transcriptions; the learned rules generate an extended recognition network for detecting mispronunciations. Three mispronunciation detection methods were tested, including our logistic regression method with modified GOP scores and mispronunciation preference features. All three methods yielded significant improvement in predicting pronunciation variants, and the logistic regression method performed best.
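For context, the standard GOP score compares the acoustic log-likelihood of the canonical phone against the best competing phone, normalized by segment duration. The sketch below computes that baseline quantity from precomputed per-phone log-likelihoods; the article's specific modification to GOP and its feature set are not described in the abstract and are not reproduced here.

```python
# Baseline goodness-of-pronunciation (GOP) score from forced-alignment and
# free phone-loop log-likelihoods. The article's *modified* GOP is not shown.
def gop_score(loglik_canonical: float,
              loglik_best_competitor: float,
              num_frames: int) -> float:
    """GOP = |log p(O|canonical phone) - log p(O|best phone)| / duration.
    Lower values suggest the realized phone matches the canonical one."""
    return abs(loglik_canonical - loglik_best_competitor) / max(num_frames, 1)

def is_mispronounced(gop: float, threshold: float = 2.0) -> bool:
    """Simple threshold decision; the article instead feeds GOP-based
    features into a logistic regression classifier."""
    return gop > threshold

print(is_mispronounced(gop_score(-120.0, -95.0, 12)))  # True: large likelihood gap
```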
IEEE Automatic Speech Recognition and Understanding Workshop | 2015
Jeesoo Bang; Sangdo Han; Kyusong Lee; Gary Geunbae Lee
We built a personalized example-based dialog system that constructs its responses by considering entities that the user has uttered and topics in which the user has expressed interest. The system analyzes user input utterances, then uses DBpedia and Freebase to extract relevant entities and topics. The extracted entities and topics are stored in a personal knowledge memory and are used when the system selects responses from the example database and generates responses. We conducted a human experiment in which evaluators rated dialog systems on subjective metrics. The proposed dialog system, which uses topics that interest the user, achieved higher evaluation scores for both personalization and satisfaction than the baseline systems. These results demonstrate that using the user's topics in system responses gives a sense that the system pays attention to the user's utterances; as a consequence, the user has a more satisfying dialog experience.
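A rough sketch of how entity-to-topic lookup against DBpedia could work, using the public SPARQL endpoint and dct:subject categories as a stand-in for topic extraction; the paper's actual entity linking and its use of Freebase are not detailed in the abstract.

```python
# Illustrative topic lookup: map a mentioned entity to DBpedia category "topics"
# via the public SPARQL endpoint. A stand-in for the paper's extraction pipeline.
import requests

DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def topics_for_entity(entity_label: str) -> list[str]:
    """Return DBpedia category labels (dct:subject) for an entity label;
    a personal knowledge memory could store these as user topics."""
    query = f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX dct:  <http://purl.org/dc/terms/>
    SELECT ?topic WHERE {{
      ?e rdfs:label "{entity_label}"@en ;
         dct:subject ?cat .
      ?cat rdfs:label ?topic .
      FILTER (lang(?topic) = "en")
    }} LIMIT 10
    """
    resp = requests.get(DBPEDIA_SPARQL,
                        params={"query": query,
                                "format": "application/sparql-results+json"})
    resp.raise_for_status()
    return [b["topic"]["value"] for b in resp.json()["results"]["bindings"]]

# Topics stored in personal knowledge memory can later bias response selection.
print(topics_for_entity("Inception (film)")[:3])
```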
Conference of the International Speech Communication Association | 2014
Junhwi Choi; Seonghan Ryu; Kyusong Lee; Yonghee Kim; Sangjun Koo; Jeesoo Bang; Seonyeong Park; Gary Geunbae Lee
We propose an automatic speech recognition (ASR) error correction method for dialog system applications that combines word sequence matching with a recurrent neural network. ASR errors are first corrected by word sequence matching, and the remaining out-of-vocabulary (OOV) errors are corrected by a secondary method based on recurrent-neural-network syllable prediction. We evaluated the method on a Korean test parallel corpus containing ASR results and their correct transcriptions; overall, the method effectively decreases the word error rate of the ASR output. Because the proposed method can correct ASR errors using only a text corpus, without accompanying speech recognition results, it is independent of the ASR engine. The method is general and can be applied to any speech-based application, such as spoken dialog systems.
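A toy sketch of the word-sequence-matching half of such a pipeline: erroneous word n-grams observed in a parallel (ASR output, transcript) corpus are mapped to corrections and applied greedily. The RNN syllable-prediction stage for OOV errors is omitted, and the table and matching strategy here are illustrative simplifications rather than the paper's algorithm.

```python
# Toy word-sequence-matching corrector built from a parallel corpus of
# (ASR hypothesis, correct transcript) pairs. Illustrative only; the paper's
# matching algorithm and RNN syllable predictor are not reproduced here.
correction_table = {
    ("wreck", "a", "nice"): ("recognize",),
    ("beach",): ("speech",),
}

def correct(asr_words: list[str], max_n: int = 3) -> list[str]:
    """Greedily replace known erroneous word sequences with their corrections."""
    out, i = [], 0
    while i < len(asr_words):
        for n in range(max_n, 0, -1):                 # prefer longer matches
            ngram = tuple(asr_words[i:i + n])
            if ngram in correction_table:
                out.extend(correction_table[ngram])
                i += len(ngram)
                break
        else:
            out.append(asr_words[i])
            i += 1
    return out

print(correct("wreck a nice beach".split()))  # ['recognize', 'speech']
```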
Symposium on Languages, Applications and Technologies | 2013
Jeesoo Bang; Sechun Kang; Gary Geunbae Lee
International Conference on Acoustics, Speech, and Signal Processing | 2014
Jeesoo Bang; Kyusong Lee; Seonghan Ryu; Gary Geunbae Lee
Symposium on Languages, Applications and Technologies | 2013
Jeesoo Bang; Gary Geunbae Lee
Archive | 2013
Gary Geunbae Lee; Hongsuck Seo; Sechun Kang; Jeesoo Bang; Kyusong Lee