Jin-Woo Chung
KAIST
Publications
Featured research published by Jin-Woo Chung.
Web Intelligence, Mining and Semantics | 2013
Jin-Woo Chung; Hye-Jin Min; Joonyeob Kim; Jong C. Park
Deaf people have particular difficulty in understanding text-based web documents because their first language, sign language, is essentially visually oriented. To enhance the readability of text-based web documents for deaf people, we propose a news display system that converts complex sentences in news articles into simple sentences and presents the relations among them with a graphical representation. In particular, we focus on the tasks of 1) identifying subordinate and embedded clauses in complex sentences, 2) relocating them for better readability, and 3) displaying the relations among the clauses with the graphical representation. The results of our evaluation show that the proposed system effectively simplifies complex sentences in news articles while maintaining their intended meaning, suggesting that it can be used in practice to help deaf people access textual information.
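The clause-handling steps described in this abstract (identify a subordinate clause, relocate it, and expose the relation between the clauses) can be illustrated with a deliberately simplified sketch. This is an assumption-laden toy, not the paper's system: the paper works on Korean news text with full linguistic analysis, while the sketch below only detaches a leading English subordinate clause with a small, hypothetical conjunction table.

```python
import re

# Toy clause splitter (illustrative only, NOT the paper's analyzer):
# detach a leading subordinate clause introduced by one of a few
# subordinating conjunctions, relocate the main clause first, and
# label the relation between the two resulting simple clauses.
SUBORDINATORS = {"although": "concession", "because": "cause", "when": "time"}

def simplify(sentence):
    m = re.match(r"(?i)^(although|because|when)\s+([^,]+),\s*(.+)$", sentence)
    if not m:
        return [sentence], None  # nothing to split
    conj, sub_clause, main_clause = m.groups()
    relation = SUBORDINATORS[conj.lower()]
    # relocation: present the main clause before the subordinate clause
    return [main_clause.rstrip("."), sub_clause.strip()], relation

clauses, rel = simplify("Because the article was complex, the system split it.")
print(clauses, rel)  # ['the system split it', 'the article was complex'] cause
```

A real system would then render the labeled relation graphically rather than printing it.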
International Joint Conference on Artificial Intelligence | 2017
Jin-Woo Chung; Wonsuk Yang; Jinseon You; Jong C. Park
Automatic event location extraction from text plays a crucial role in many applications such as infectious disease surveillance and natural disaster monitoring. The fundamental limitation of previous work such as SpaceEval is its limited scope of extraction, targeting only locations that are explicitly stated in a syntactic structure. This misses a great deal of implicit information inferable from context in a document, which amounts to nearly 40% of the entire location information. To overcome this limitation, we present the first system that infers implicit event locations from a given document. Our system exploits distributional semantics, based on the hypothesis that if two events are described by similar expressions, they are likely to occur in the same location. For example, if “A bomb exploded causing 30 victims” and “many people died from terrorist attack in Boston” are reported in the same document, it is highly likely that the bomb exploded in Boston. Our system achieves an F1-score of 0.58, whereas state-of-the-art classifiers for intra-sentential spatiotemporal relations achieve F1-scores of around 0.60.
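The similarity hypothesis above can be sketched in a few lines. This is a minimal stand-in, not the authors' model: it uses bag-of-words cosine similarity over surface tokens, whereas the paper uses distributional semantics, which would also match semantically related but non-identical expressions (e.g. "victims" and "died"). The function names and the `threshold` parameter are hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity between two bag-of-words Counter vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def infer_location(target_event, located_events, threshold=0.1):
    """Assign the target event the location of the most similar
    event whose location is explicitly stated in the document."""
    tv = Counter(target_event.lower().split())
    best_loc, best_sim = None, threshold
    for desc, loc in located_events:
        sim = cosine(tv, Counter(desc.lower().split()))
        if sim > best_sim:
            best_loc, best_sim = loc, sim
    return best_loc  # None if no located event is similar enough

located = [("many people died from terrorist attack in Boston", "Boston")]
print(infer_location("A bomb exploded and many people died", located))  # Boston
```

With an embedding-based similarity in place of `cosine`, even the original pair from the abstract (no shared words) would be matched.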
Proceedings of BioNLP 15 | 2015
Rize Jin; Jinseon You; Jin-Woo Chung; Hee-Jin Lee; Maria Wolters; Jong C. Park
Clinical depression is a mental disorder involving both genetic and environmental factors. Although much work has studied its genetic causes, and numerous candidate genes have consequently been investigated and reported in the biomedical literature, gene expression changes and mutations related to depression have not yet been adequately collected and analyzed for its full pathophysiology. In this paper, we present a depression-specific annotated corpus for text mining systems that aim at providing a concise review of depression-gene relations, as well as at capturing complex biological events such as gene expression changes. We describe the annotation scheme and the annotation procedure in detail. We discuss issues regarding the proper recognition of depression terms and entity interactions for future approaches to the task. The corpus is available at http://www.biopathway.org/CoMAGD.
Robot and Human Interactive Communication | 2013
Hye-Jin Min; Sang-Chae Kim; Joonyeob Kim; Jin-Woo Chung; Jong C. Park
Robot storytelling has the potential for practical use in various domains such as entertainment, education, and rehabilitation. However, relying on human-recorded voices for natural storytelling is costly, and automation with text-to-speech (TTS) systems is not readily applicable due to the difficulty of reflecting the full nature of stories in TTS systems. In this paper, we address the problem of automating robot storytelling with a particular focus on two issues: speaker identification and speaker-TTS voice mapping. We first conduct text analysis with rich linguistic clues to identify speakers in a given textual story. We then cast the speaker-TTS voice mapping task as a graph coloring problem and propose effective algorithms for assigning voices to speakers given a limited number of TTS voices. Finally, we perform a user experiment to validate the usefulness of our method. The results demonstrate that our system significantly outperforms baseline systems and is also more acceptable to users.
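The graph coloring framing can be made concrete with a small sketch. This is an illustrative assumption, not the paper's algorithm: speakers are nodes, an edge joins two speakers who must be distinguishable (e.g. they converse in the same scene), and TTS voices play the role of colors. The greedy coloring below, and all names in it, are hypothetical.

```python
def assign_voices(speakers, conflicts, voices):
    """Greedy graph coloring: speakers joined by a conflict edge
    (they appear in the same dialogue) must get distinct TTS voices."""
    # build an undirected adjacency map from the conflict pairs
    adj = {s: set() for s in speakers}
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    assignment = {}
    for s in speakers:  # color speakers in the given order
        used = {assignment[n] for n in adj[s] if n in assignment}
        free = [v for v in voices if v not in used]
        if not free:
            raise ValueError(f"not enough voices for speaker {s}")
        assignment[s] = free[0]  # first voice not used by a neighbor
    return assignment

speakers = ["Narrator", "Alice", "Wolf", "Grandma"]
conflicts = [("Narrator", "Alice"), ("Alice", "Wolf"), ("Wolf", "Grandma")]
voices = ["voice_A", "voice_B"]
print(assign_voices(speakers, conflicts, voices))
```

Note that with only two voices the sketch succeeds here because the conflict graph is 2-colorable; a denser dialogue graph would need more voices or a smarter ordering, which is exactly the constraint the paper's algorithms address.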
Web Intelligence, Mining and Semantics | 2011
Jin-Woo Chung; Hojoon Lee; Jong C. Park
In this paper, we describe how to improve web accessibility for the aurally challenged, focusing on utilizing a signing avatar for web pages. Many systems have previously been proposed to make the web more accessible for deaf people by providing signed expressions, i.e., translating written text into sign language animations and presenting them appropriately, based on the observation that deaf users normally have much difficulty understanding text-based information as well as audio content. We analyze the strengths and weaknesses of these systems with respect to the design criteria discussed, and propose a system that presents a signing avatar for web page documents via a mobile device, which is expected to overcome the shortcomings of the previous systems and to improve the access of deaf users to textual content on the web. The proposed system has three main parts based on a client-server architecture: 1) a client that runs a web browser and transmits selected text to the server, 2) a server that takes text as input and translates it into signed expressions through a sign language generation module, and 3) a mobile device that displays the signing animation streamed from the server. We also present some linguistic issues raised by the differences between Korean and Korean Sign Language. To the best of our knowledge, this is the first approach to using a mobile device for web document access by aurally challenged people. We discuss the implications of our study and future directions.
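The three-part flow described above (client selects text, server generates signed expressions, mobile device consumes the stream) can be sketched as a plain function pipeline. Every name here is hypothetical and the "sign generation" step is a placeholder; the actual system translates Korean text into sign language animations and streams them over a network, which this stdlib-only sketch does not attempt.

```python
def client_select_text(page_text, start, end):
    """Client side: the text span the user selected in the browser."""
    return page_text[start:end]

def server_generate_signs(text):
    """Server side: stand-in for the sign language generation module;
    each word becomes a placeholder sign token instead of an animation."""
    return [f"SIGN[{w}]" for w in text.split()]

def device_stream(signs):
    """Mobile device side: consume the sign animation stream token by token."""
    yield from signs

page = "Deaf users can read news with signing support."
selected = client_select_text(page, 0, 15)          # user highlights "Deaf users can "
frames = list(device_stream(server_generate_signs(selected)))
print(frames)  # ['SIGN[Deaf]', 'SIGN[users]', 'SIGN[can]']
```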
The Association for Computational Linguistics | 2015
Rize Jin; Jinseon You; Jin-Woo Chung; Hee-Jin Lee; Maria Wolters; Jong Park
Journal of the HCI Society of Korea | 2010
Jin-Woo Chung
Language and Information | 2009
Jin-Woo Chung; Jong-Cheol Park
International Joint Conference on Natural Language Processing | 2017
Jinseon You; Jin-Woo Chung; Wonsuk Yang; Jong C. Park
Pacific Asia Conference on Language, Information and Computation | 2015
Jin-Woo Chung; Jinseon You; Jong C. Park