
Publication


Featured research publications by Woosung Kim.


ACM Transactions on Asian Language Information Processing | 2004

Lexical triggers and latent semantic analysis for cross-lingual language model adaptation

Woosung Kim; Sanjeev Khudanpur

In-domain texts for estimating statistical language models are not easily found for most languages of the world. We present two techniques to take advantage of in-domain text resources in other languages. First, we extend the notion of lexical triggers, which have been used monolingually for language model adaptation, to the cross-lingual problem, permitting the construction of sharper language models for a target-language document by drawing statistics from related documents in a resource-rich language. Next, we show that cross-lingual latent semantic analysis is similarly capable of extracting useful statistics for language modeling. Neither technique requires explicit translation capabilities between the two languages! We demonstrate significant reductions in both perplexity and word error rate on a Mandarin speech recognition task by using these techniques.
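As a rough illustration of the cross-lingual trigger idea described above (not the paper's implementation), one can count which resource-rich-language words co-occur with which target-language words across aligned document pairs, and then use those trigger statistics to sharpen a static target-language unigram. All corpus data, vocabulary, and the interpolation weight below are toy assumptions for the sketch:

```python
from collections import Counter, defaultdict

# Toy document-aligned corpus: (English doc tokens, romanized Chinese doc tokens).
# All data here is illustrative, not from the paper.
aligned = [
    (["market", "stocks", "rally"], ["gushi", "shangzhang"]),
    (["market", "economy"], ["jingji", "gushi"]),
    (["weather", "rain"], ["tianqi", "xiayu"]),
]

# Cross-lingual co-occurrence counts: English word e "triggers" Chinese word c
# when they appear in aligned documents.
cooc = defaultdict(Counter)
for en_doc, zh_doc in aligned:
    for e in set(en_doc):
        for c in set(zh_doc):
            cooc[e][c] += 1

def trigger_unigram(en_context):
    """Unigram over Chinese words induced by English trigger words."""
    counts = Counter()
    for e in en_context:
        counts.update(cooc.get(e, {}))
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()} if total else {}

def adapted_prob(c, en_context, baseline, lam=0.3):
    """Interpolate the trigger distribution with a static baseline unigram."""
    trig = trigger_unigram(en_context)
    return lam * trig.get(c, 0.0) + (1 - lam) * baseline.get(c, 0.0)

baseline = {"gushi": 0.2, "shangzhang": 0.1, "jingji": 0.3,
            "tianqi": 0.2, "xiayu": 0.2}
# Related English text about markets raises the probability of "gushi".
p = adapted_prob("gushi", ["market", "stocks"], baseline)
```

In this toy run the trigger unigram assigns "gushi" probability 0.5, so the interpolated estimate (0.29) exceeds the static baseline (0.2), which is the qualitative effect the abstract describes.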


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2004

Cross-lingual latent semantic analysis for language modeling

Woosung Kim; Sanjeev Khudanpur

Statistical language model estimation requires large amounts of domain-specific text, which is difficult to obtain in many languages. We propose techniques which exploit domain-specific text in a resource-rich language to adapt a language model in a resource-deficient language. A primary advantage of our technique is that in the process of cross-lingual language model adaptation, we do not rely on the availability of any machine translation capability. Instead, we assume that only a modest-sized collection of story-aligned document-pairs in the two languages is available. We use ideas from cross-lingual latent semantic analysis to develop a single low-dimensional representation shared by words and documents in both languages, which enables us to (i) find documents in the resource-rich language pertaining to a specific story in the resource-deficient language, and (ii) extract statistics from the pertinent documents to adapt a language model to the story of interest. We demonstrate significant reductions in perplexity and error rates in a Mandarin speech recognition task using this technique.
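The shared low-dimensional representation mentioned above can be sketched with a rank-k SVD of a bilingual term-document matrix, where columns are story-aligned document pairs and rows are words of both languages. The vocabulary and counts below are toy assumptions, not data from the paper:

```python
import numpy as np

# Toy bilingual term-document matrix: rows = words (English then romanized
# Chinese), columns = story-aligned document pairs; values are term counts.
words = ["market", "stocks", "weather", "gushi", "tianqi"]
X = np.array([
    [2, 1, 0],   # market
    [1, 2, 0],   # stocks
    [0, 0, 3],   # weather
    [2, 1, 0],   # gushi  (toy Chinese word for the stock market)
    [0, 0, 2],   # tianqi (toy Chinese word for weather)
], dtype=float)

# Rank-k SVD places words and documents of both languages in one
# low-dimensional space -- the core idea of cross-lingual LSA.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vecs = U[:, :k] * s[:k]   # word coordinates in the shared space
doc_vecs = Vt[:k, :].T         # document coordinates in the shared space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Words with matching cross-lingual usage end up close together, so a
# Chinese story can be matched against English documents for adaptation.
sim = cosine(word_vecs[words.index("gushi")],
             word_vecs[words.index("market")])
```

Because "gushi" and "market" have identical co-occurrence rows in this toy matrix, their vectors coincide in the shared space; in practice the same machinery is what lets step (i) of the abstract retrieve pertinent resource-rich documents by similarity.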


Empirical Methods in Natural Language Processing (EMNLP) | 2003

Cross-lingual lexical triggers in statistical language modeling

Woosung Kim; Sanjeev Khudanpur

We propose new methods to take advantage of text in resource-rich languages to sharpen statistical language models in resource-deficient languages. We achieve this through an extension of the method of lexical triggers to the cross-language problem, and by developing a likelihood-based adaptation scheme for combining a trigger model with an N-gram model. We describe the application of such language models for automatic speech recognition. By exploiting a side-corpus of contemporaneous English news articles for adapting a static Chinese language model to transcribe Mandarin news stories, we demonstrate significant reductions in both perplexity and recognition errors. We also compare our cross-lingual adaptation scheme to monolingual language model adaptation, and to an alternate method for exploiting cross-lingual cues, via cross-lingual information retrieval and machine translation, proposed elsewhere.
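A likelihood-based scheme for combining a trigger model with an N-gram model, as mentioned above, is commonly realized as an EM estimate of a mixture weight on held-out data. This is a generic sketch of that standard recipe, not the paper's exact procedure, and the per-token probabilities are invented for illustration:

```python
def em_lambda(trig_probs, ngram_probs, iters=50, lam=0.5):
    """Estimate the mixture weight of p = lam*trigger + (1-lam)*ngram by EM.

    trig_probs / ngram_probs: per-token probabilities each component model
    assigns to a held-out text.
    """
    for _ in range(iters):
        # E-step: posterior that each token was generated by the trigger model.
        post = [lam * t / (lam * t + (1 - lam) * n)
                for t, n in zip(trig_probs, ngram_probs)]
        # M-step: the new weight is the average posterior.
        lam = sum(post) / len(post)
    return lam

# Toy per-token probabilities: the trigger model is sharper on most tokens,
# so EM pushes the weight above its 0.5 starting point.
trig = [0.4, 0.5, 0.1, 0.3]
ngram = [0.2, 0.1, 0.2, 0.3]
lam = em_lambda(trig, ngram)
```

Each iteration provably increases the held-out likelihood of the mixture, which is why such weights are tuned on adaptation data rather than fixed by hand.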


Computer Speech & Language | 2004

Contemporaneous text as side-information in statistical language modeling

Sanjeev Khudanpur; Woosung Kim

We propose new methods to exploit contemporaneous text, such as on-line news articles, to improve language models for automatic speech recognition and other natural language processing applications. In particular, we investigate the use of text from a resource-rich language to sharpen language models for processing a news story or article in a language with scarce linguistic resources. We demonstrate that even with fairly crude cross-language information retrieval and simple machine translation, one can construct story-specific Chinese language models which exploit cues from a side-corpus of English newswire to significantly improve the performance of language models estimated from a static Chinese corpus. Our investigations cover cases when the amount of available Chinese text is small, and a case when a large Chinese text corpus is available. We examine the effectiveness of our techniques both when the side-corpus contains English documents that are near-translations of the Chinese documents being processed, and when the English side-corpus is merely from contemporaneous and independent news sources. We present experimental results for automatic transcription of speech from the Mandarin Broadcast News corpus.


Archive | 2005

Language model adaptation for automatic speech recognition and statistical machine translation

Sanjeev Khudanpur; Woosung Kim


Conference of the International Speech Communication Association | 2002

Using cross-language cues for story-specific language modeling.

Sanjeev Khudanpur; Woosung Kim


Archive | 2003

The Johns Hopkins University 2003 Chinese-English Machine Translation System

William Byrne; Sanjeev Khudanpur; Woosung Kim; Shankar Kumar; Pavel Pecina; P. Virga; P. Xu; D. Yarowsky


Conference of the International Speech Communication Association | 2003

Language Model Adaptation Using Cross-Lingual Information

Woosung Kim; Sanjeev Khudanpur


Conference of the International Speech Communication Association | 2001

Smoothing Issues in the Structured Language Model

Woosung Kim; Sanjeev Khudanpur; Jun Wu; N. Charles


Conference of the International Speech Communication Association | 1995

KT-STS: a speech translation system for hotel reservation and a continuous speech recognition system for speech translation.

Myoung-Wan Koo; Il-Hyun Sohn; Woosung Kim; Du-Seong Chang

Collaboration

Woosung Kim's top co-authors.

Jun Wu
Johns Hopkins University

Pavel Pecina
Charles University in Prague