
Publications


Featured research published by Minwoo Jeong.


Computer Speech & Language | 2009

Data-driven user simulation for automated evaluation of spoken dialog systems

Sangkeun Jung; Cheongjae Lee; Kyungduk Kim; Minwoo Jeong; Gary Geunbae Lee

This paper proposes a novel integrated dialog simulation technique for evaluating spoken dialog systems. A data-driven user simulation technique for simulating user intention and utterance is introduced. A novel user intention modeling and generation method based on a linear-chain conditional random field is proposed, along with a two-phase data-driven domain-specific user utterance simulation method and a linguistic knowledge-based ASR channel simulation method. Evaluation metrics are introduced to measure the quality of user simulation at the intention and utterance levels. Experiments using these techniques were carried out to evaluate the performance and behavior of dialog systems designed for car navigation dialogs and a building guide robot; the results showed that our approach was easy to set up and exhibited tendencies similar to those of real human users.
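The intention-modeling step can be approximated with off-the-shelf tooling. Below is a minimal sketch, assuming the third-party sklearn-crfsuite package and an invented inventory of system acts and user intentions; it illustrates labeling user intentions with a linear-chain CRF and is not the authors' implementation.

```python
# Minimal sketch, not the authors' code: label user intentions over a dialog
# with a linear-chain CRF via the third-party sklearn-crfsuite package.
# The dialog acts, intentions, and features below are invented.
import sklearn_crfsuite

def turn_features(dialog, t):
    return {
        "sys_act": dialog[t]["sys_act"],                        # current system act
        "prev_sys_act": dialog[t - 1]["sys_act"] if t else "<start>",
        "turn_index": str(t),
    }

# Toy training data: sequences of system acts paired with the user intentions
# that followed them in the corpus.
dialogs = [
    [{"sys_act": "greet"}, {"sys_act": "ask_destination"}, {"sys_act": "confirm"}],
    [{"sys_act": "greet"}, {"sys_act": "ask_destination"}, {"sys_act": "ask_route"}],
]
intentions = [
    ["inform_task", "inform_destination", "affirm"],
    ["inform_task", "inform_destination", "inform_route"],
]

X = [[turn_features(d, t) for t in range(len(d))] for d in dialogs]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, intentions)

# Simulate user intentions for a new skeleton of system acts.
new_dialog = [{"sys_act": "greet"}, {"sys_act": "ask_destination"}]
print(crf.predict([[turn_features(new_dialog, t) for t in range(len(new_dialog))]]))
```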


International Joint Conference on Natural Language Processing | 2015

New Transfer Learning Techniques for Disparate Label Sets

Young-Bum Kim; Karl Stratos; Ruhi Sarikaya; Minwoo Jeong

In natural language understanding (NLU), a user utterance can be labeled differently depending on the domain or application (e.g., weather vs. calendar). Standard domain adaptation techniques are not directly applicable to take advantage of the existing annotations because they assume that the label set is invariant. We propose a solution based on label embeddings induced from canonical correlation analysis (CCA) that reduces the problem to a standard domain adaptation task and allows use of a number of transfer learning techniques. We also introduce a new transfer learning technique based on pretraining of hidden-unit CRFs (HUCRFs). We perform extensive experiments on slot tagging on eight personal digital assistant domains and demonstrate that the proposed methods are superior to strong baselines.
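As a rough illustration of the CCA-based label embedding idea (not the paper's code), the sketch below applies scikit-learn's CCA to a one-hot label view and a synthetic context-feature view, then links labels across label sets by cosine similarity; all data, dimensions, and the domain split are made up.

```python
# Illustrative sketch with invented data: induce label embeddings with CCA
# between a label view and a context view, then map labels across label sets.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Toy data: 200 tagged tokens, 6 labels (from two domains), 20 context features.
n_tokens, n_labels, n_context = 200, 6, 20
labels = rng.integers(0, n_labels, size=n_tokens)
label_view = np.eye(n_labels)[labels]                  # one-hot label per token
context_view = rng.normal(size=(n_tokens, n_context))  # stand-in context features

cca = CCA(n_components=3)
cca.fit(label_view, context_view)
label_proj, _ = cca.transform(label_view, context_view)

# Average the projected label view per label to get one embedding per label.
embeddings = np.vstack([label_proj[labels == k].mean(axis=0) for k in range(n_labels)])

def nearest_label(query, candidates):
    """Cosine-similarity nearest neighbour between label embeddings."""
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

# Map a source-domain label (index 0) onto the closest target-domain label (indices 3..5).
print(nearest_label(embeddings[0], embeddings[3:]))
```

Once disparate labels share an embedding space, the problem reduces to standard domain adaptation, as the abstract notes.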


North American Chapter of the Association for Computational Linguistics | 2015

Weakly Supervised Slot Tagging with Partially Labeled Sequences from Web Search Click Logs

Young-Bum Kim; Minwoo Jeong; Karl Stratos; Ruhi Sarikaya

In this paper, we apply a weakly supervised learning approach to slot tagging with conditional random fields by exploiting web search click logs. We extend the constrained lattice training of Täckström et al. (2013) to non-linear conditional random fields in which latent variables mediate between observations and labels. Combined with a novel initialization scheme that leverages unlabeled data, our method gives significant improvements over strong supervised and weakly supervised baselines.
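The constrained-lattice idea can be pictured as restricting the set of admissible tags per token. The sketch below is illustrative only: the tag inventory and the click-derived partial labels are assumptions, and the CRF training itself is not shown.

```python
# Illustrative sketch: turn a partially labeled query from search click logs
# into a constrained lattice, i.e. a per-token set of admissible tags. Tokens
# with a click-derived label are pinned to it; the rest may take any tag.
ALL_TAGS = {"O", "B-movie", "I-movie", "B-theater", "I-theater"}

def constrained_lattice(tokens, partial_labels):
    """partial_labels maps token index -> tag observed from the click log."""
    return [
        {partial_labels[i]} if i in partial_labels else set(ALL_TAGS)
        for i in range(len(tokens))
    ]

query = ["tickets", "for", "star", "wars", "tonight"]
# Suppose the clicked result tells us "star wars" is a movie title.
partial = {2: "B-movie", 3: "I-movie"}

for token, allowed in zip(query, constrained_lattice(query, partial)):
    print(f"{token:>8}: {sorted(allowed)}")
```

Training would then maximize the marginal probability of all tag sequences that stay inside this lattice, rather than of a single fully labeled path.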


Spoken Language Technology Workshop | 2016

An overview of end-to-end language understanding and dialog management for personal digital assistants

Ruhi Sarikaya; Paul A. Crook; Alex Marin; Minwoo Jeong; Jean-Philippe Robichaud; Asli Celikyilmaz; Young-Bum Kim; Alexandre Rochette; Omar Zia Khan; Xiaohu Liu; Daniel Boies; Tasos Anastasakos; Zhaleh Feizollahi; Nikhil Ramesh; Hisami Suzuki; Roman Holenstein; Elizabeth Krawczyk; Vasiliy Radostev

Spoken language understanding and dialog management have emerged as key technologies in interacting with personal digital assistants (PDAs). The coverage, complexity, and scale of PDAs are much larger than those of previous conversational understanding systems, and new problems arise as a result. In this paper, we provide an overview of the language understanding and dialog management capabilities of PDAs, focusing particularly on Cortana, Microsoft's PDA. We explain the system architecture for language understanding and dialog management for our PDA, indicate how it differs from prior state-of-the-art systems, and describe its key components. We also report a set of experiments detailing system performance on a variety of scenarios and tasks. We describe how the quality of the user experience is measured end-to-end and discuss open issues.
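The kind of pipeline the paper surveys can be caricatured as a small loop of language-understanding and dialog-management stages. The sketch below is a deliberately simplified stand-in with invented component names and rules; it is not Cortana's architecture or API.

```python
# Deliberately simplified stand-in (invented components and rules): an
# end-to-end loop of domain classification, intent detection, slot tagging,
# state update with slot carry-over, and a policy that asks or executes.
from dataclasses import dataclass, field

@dataclass
class DialogState:
    domain: str = ""
    intent: str = ""
    slots: dict = field(default_factory=dict)

def classify_domain(utt):
    return "calendar" if "meeting" in utt else "weather"        # stand-in model

def detect_intent(utt, domain):
    return f"{domain}.create" if "schedule" in utt else f"{domain}.query"

def tag_slots(utt):
    return {"time": "3pm"} if "3pm" in utt else {}               # stand-in tagger

def update_state(state, domain, intent, slots):
    state.domain, state.intent = domain, intent
    state.slots.update(slots)                                    # slot carry-over
    return state

def policy(state):
    missing = {"time"} - set(state.slots)
    return f"ask({missing.pop()})" if missing else f"execute({state.intent})"

state = DialogState()
for turn in ["schedule a meeting", "schedule the meeting for 3pm"]:
    domain = classify_domain(turn)
    state = update_state(state, domain, detect_intent(turn, domain), tag_slots(turn))
    print(turn, "->", policy(state))
```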


Speech Communication | 2009

Multi-domain spoken language understanding with transfer learning

Minwoo Jeong; Gary Geunbae Lee

This paper addresses the problem of multi-domain spoken language understanding (SLU) where domain detection and domain-dependent semantic tagging problems are combined. We present a transfer learning approach to the multi-domain SLU problem in which multiple domain-specific data sources can be incorporated. To implement multi-domain SLU with transfer learning, we introduce a triangular-chain structured model. This model effectively learns multiple domains in parallel, and allows use of domain-independent patterns among domains to create a better model for the target domain. We demonstrate that the proposed method outperforms baseline models on dialog data for multi-domain SLU problems.
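As a rough stand-in for the triangular-chain idea, in which a sequence-level domain variable and the tag chain are reasoned over jointly, the sketch below scores each candidate domain together with a tag assignment, mixing domain-independent (shared) and domain-specific weights. All features, weights, and label sets are invented; this is not the paper's model.

```python
# Rough stand-in, not the triangular-chain CRF itself: score each candidate
# domain jointly with a per-token tag assignment, combining shared
# (domain-independent) and domain-specific weights.
DOMAINS = ["navigation", "weather"]
TAGS = ["O", "B-city", "B-date"]

w_shared = {("looks_like_city", "B-city"): 2.0, ("looks_like_date", "B-date"): 2.0}
w_domain = {("navigation", "word=route"): 1.5, ("weather", "word=forecast"): 1.5}

def token_feats(tok):
    feats = {f"word={tok}"}
    if tok.istitle():
        feats.add("looks_like_city")
    if tok.isdigit():
        feats.add("looks_like_date")
    return feats

def decode(tokens):
    best = None
    for dom in DOMAINS:
        score, tags = 0.0, []
        for tok in tokens:
            feats = token_feats(tok)
            score += sum(w_domain.get((dom, f), 0.0) for f in feats)
            tag = max(TAGS, key=lambda t: sum(w_shared.get((f, t), 0.0) for f in feats))
            tags.append(tag)
            score += sum(w_shared.get((f, tag), 0.0) for f in feats)
        if best is None or score > best[0]:
            best = (score, dom, tags)
    return best  # (joint score, predicted domain, predicted tag sequence)

print(decode(["route", "to", "Seoul"]))
```

The shared weights play the role of the domain-independent patterns the abstract mentions: they are reused by every domain, while each domain only adds its own specific weights on top.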


Computer Speech & Language | 2008

Practical use of non-local features for statistical spoken language understanding

Minwoo Jeong; Gary Geunbae Lee

Spoken language understanding (SLU) addresses the problem of mapping natural language speech to a frame-structure encoding of its meaning. Statistical sequential labeling methods have been successfully applied to SLU tasks; however, most sequential labeling approaches lack a mechanism for handling long-distance dependencies. In this paper, we exploit non-local features as an estimate of long-distance dependencies to improve performance on the statistical SLU problem. The method we propose uses trigger pairs automatically extracted by a feature induction algorithm. We describe a lightweight, practical version of the feature inducer in which a simple modification proves efficient and successful. We evaluate our method on three SLU tasks and show an improvement in performance over the baseline local model.
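Trigger pairs can be illustrated with a simple pointwise-mutual-information ranking over word co-occurrences. The sketch below is a crude stand-in for the paper's feature-induction algorithm, using a toy corpus; the selected pairs would then be added as extra non-local features to the sequence model.

```python
# Crude stand-in for feature induction: rank word pairs that co-occur within
# an utterance by pointwise mutual information (PMI) and keep the top ones as
# candidate trigger pairs. The corpus below is a toy example.
import math
from collections import Counter
from itertools import combinations

utterances = [
    "show me flights from seoul to tokyo".split(),
    "flights from busan to tokyo tomorrow".split(),
    "what is the cheapest fare to tokyo".split(),
]

word_count, pair_count = Counter(), Counter()
for words in utterances:
    vocab = set(words)
    word_count.update(vocab)
    pair_count.update(combinations(sorted(vocab), 2))

n = len(utterances)

def pmi(a, b):
    p_ab = pair_count[(a, b)] / n
    return math.log(p_ab / ((word_count[a] / n) * (word_count[b] / n)))

triggers = sorted(pair_count, key=lambda p: pmi(*p), reverse=True)[:5]
print(triggers)  # long-distance pairs usable as extra (non-local) features
```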


IEEE Transactions on Audio, Speech, and Language Processing | 2013

Unsupervised Spoken Language Understanding for a Multi-Domain Dialog System

Donghyeon Lee; Minwoo Jeong; Kyungduk Kim; Seonghan Ryu; Gary Geunbae Lee

This paper proposes an unsupervised spoken language understanding (SLU) framework for a multi-domain dialog system. Our unsupervised SLU framework applies a non-parametric Bayesian approach to dialog acts, intents, and slot entities, which are the components of a semantic frame. The proposed approach reduces the human effort needed to obtain a semantically annotated corpus for dialog system development. In this study, we analyze clustering results using various evaluation metrics for four dialog corpora. We also introduce a multi-domain dialog system that uses the unsupervised SLU framework. We argue that our unsupervised approach can help overcome the annotation acquisition bottleneck in developing dialog systems. To verify this claim, we report a dialog system evaluation in which our method achieves competitive results compared with a system that uses a manually annotated corpus. In addition, we conducted several experiments to explore the effect of our approach on reducing development costs. The results show that our approach can be helpful for rapidly developing a prototype system and for reducing overall development costs.
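As a loose illustration of the non-parametric Bayesian idea, where the number of clusters is not fixed in advance, the sketch below runs a single greedy Chinese-restaurant-process-style sweep over utterances. The word-overlap scoring and all constants are stand-ins, not the paper's model.

```python
# Loose illustration, not the paper's model: a single greedy CRP-style sweep
# that assigns utterances to intent clusters without fixing the number of
# clusters in advance. The word-overlap "likelihood" and constants are stand-ins.
ALPHA = 1.0  # concentration: higher makes opening a new cluster more attractive

def overlap(words, cluster_words):
    return len(words & cluster_words) / (len(words) + 1)

def crp_assign(utterances):
    clusters, assignments = [], []        # each cluster: (member_count, word_set)
    for utt in utterances:
        words = set(utt.split())
        scores = [count * overlap(words, cw) for count, cw in clusters]
        scores.append(ALPHA * 0.05)       # crude base score for a new cluster
        k = max(range(len(scores)), key=scores.__getitem__)
        if k == len(clusters):
            clusters.append((1, words))
        else:
            count, cw = clusters[k]
            clusters[k] = (count + 1, cw | words)
        assignments.append(k)
    return assignments

corpus = ["book a table for two", "reserve a table tonight",
          "play some jazz music", "play music by miles davis"]
print(crp_assign(corpus))  # expected grouping: [0, 0, 1, 1]
```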


International Conference on Acoustics, Speech, and Signal Processing | 2006

A Situation-Based Dialogue Management using Dialogue Examples

Cheongjae Lee; Sangkeun Jung; Jihyun Eun; Minwoo Jeong; Gary Geunbae Lee

In this paper, we present the POSTECH Situation-Based Dialogue Manager (POSSDM) for a spoken dialogue system, which uses both example- and rule-based dialogue management techniques to generate appropriate system responses. A spoken dialogue system should generate cooperative responses to smoothly control the dialogue flow with users. We introduce a new dialogue management technique that incorporates dialogue examples and situation-based rules for the electronic program guide (EPG) domain. For system response generation, we automatically construct and index a dialogue example database from the dialogue corpus, and the proper system response is determined by retrieving the best dialogue example for the current dialogue situation, which includes the current user utterance, dialogue act, semantic frame, and discourse history. When the dialogue corpus does not sufficiently cover the domain, we also apply manually constructed situation-based rules, mainly for meta-level dialogue management. Experiments show that our example-based dialogue modeling is very useful and effective in domain-oriented dialogue processing.
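The example-based selection step can be sketched as a nearest-situation lookup with a rule-based fallback. Everything below (the example database, the scoring, the threshold) is invented for illustration and is not the POSSDM implementation.

```python
# Illustrative sketch of example-based response selection with a rule fallback:
# index dialogue examples by their situation, retrieve the closest one for the
# current turn, and fall back to a hand-written rule when nothing is close.
EXAMPLE_DB = [
    {"dialog_act": "request", "frame": {"genre": "news"},
     "response": "inform(channel=KBS, time=9pm)"},
    {"dialog_act": "request", "frame": {"genre": "drama"},
     "response": "inform(channel=SBS, time=10pm)"},
]

FALLBACK_RULES = {"out_of_domain": "ask(rephrase)"}

def situation_score(example, dialog_act, frame):
    act_match = 1.0 if example["dialog_act"] == dialog_act else 0.0
    slot_match = len(set(example["frame"].items()) & set(frame.items()))
    return act_match + slot_match

def select_response(dialog_act, frame, threshold=1.5):
    best = max(EXAMPLE_DB, key=lambda ex: situation_score(ex, dialog_act, frame))
    if situation_score(best, dialog_act, frame) >= threshold:
        return best["response"]
    return FALLBACK_RULES["out_of_domain"]  # rule-based meta-level fallback

print(select_response("request", {"genre": "drama"}))   # example-based hit
print(select_response("chat", {"topic": "weather"}))     # falls back to a rule
```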


Spoken Language Technology Workshop | 2006

Chat and Goal-Oriented Dialog Together: A Unified Example-Based Architecture for Multi-Domain Dialog Management

Cheongjae Lee; Sangkeun Jung; Minwoo Jeong; Gary Geunbae Lee

This paper discusses the development of a multi-domain conversational dialog system that simultaneously manages chats and goal-oriented dialogs. We present a UMDM (unified multi-domain dialog manager) that uses a novel example-based dialog management technique. We have developed an effective utterance classifier with linguistic, semantic, and keyword features for domain switching, and an example-based dialog modeling technique for domain-portable dialog models. Our experiments show that our approach is useful and effective in multi-domain dialog systems.
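The domain-switching role of the utterance classifier can be illustrated with a trivial keyword-scoring stand-in. The domains and keyword lists below are assumptions; the real classifier uses richer linguistic and semantic features.

```python
# Trivial stand-in for the domain-switching classifier: route each utterance
# to a goal-oriented domain when task keywords appear, otherwise to chat.
DOMAIN_KEYWORDS = {
    "epg": {"channel", "program", "record", "drama"},
    "weather": {"weather", "rain", "temperature"},
}

def classify_domain(utterance: str) -> str:
    words = set(utterance.lower().split())
    scores = {d: len(words & kw) for d, kw in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "chat"  # no task keywords: chit-chat

for utt in ["record the drama tonight", "how are you doing"]:
    print(utt, "->", classify_domain(utt))
```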


North American Chapter of the Association for Computational Linguistics | 2016

Task Completion Platform: A self-serve multi-domain goal oriented dialogue platform

Paul A. Crook; Alex Marin; Vipul Agarwal; Khushboo Aggarwal; Tasos Anastasakos; Ravi Bikkula; Daniel Boies; Asli Celikyilmaz; Senthilkumar Chandramohan; Zhaleh Feizollahi; Roman Holenstein; Minwoo Jeong; Omar Zia Khan; Young-Bum Kim; Elizabeth Krawczyk; Xiaohu Liu; Danko Panic; Vasiliy Radostev; Nikhil Ramesh; Jean-Philippe Robichaud; Alexandre Rochette; Logan Stromberg; Ruhi Sarikaya

We demonstrate the Task Completion Platform (TCP), a multi-domain dialogue platform that can host and execute large numbers of goal-oriented dialogue tasks. The platform features a task configuration language, TaskForm, that allows the definition of each individual task to be decoupled from the overarching dialogue policy used by the platform to complete those tasks. This separation allows simple and rapid authoring of new tasks, while dialogue policy and platform functionality evolve independently of the tasks. The current platform includes machine-learnt models that provide contextual slot carry-over, flexible item selection, and task selection/switching. Any new task immediately gains the benefit of this built-in platform functionality. The platform is used to power many of the multi-turn dialogues supported by the Cortana personal assistant.
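The abstract does not show TaskForm syntax, so the sketch below only illustrates the separation it describes: a declarative task definition (here, an invented Python dict, not TaskForm) consumed by a generic, task-independent policy.

```python
# Hypothetical sketch only: the real TaskForm language is not shown here.
# This illustrates declaring a task (its slots and completion action)
# separately from a generic dialogue policy that fills the slots.
TASK = {
    "name": "book_taxi",
    "slots": [
        {"name": "pickup", "prompt": "Where should the taxi pick you up?"},
        {"name": "time", "prompt": "What time do you need it?"},
    ],
    "on_complete": "call_taxi_service",
}

def generic_policy(task, filled_slots):
    """Task-independent policy: prompt for the first missing slot, else finish."""
    for slot in task["slots"]:
        if slot["name"] not in filled_slots:
            return slot["prompt"]
    return f"invoke:{task['on_complete']}"

print(generic_policy(TASK, {"pickup": "the office"}))               # asks for the time
print(generic_policy(TASK, {"pickup": "the office", "time": "6pm"}))  # completes the task
```

New tasks only need a new definition like TASK; the policy and the platform's built-in models are reused unchanged, which is the decoupling the paper emphasizes.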

Collaboration


Top co-authors of Minwoo Jeong and their affiliations:

Gary Geunbae Lee (Pohang University of Science and Technology)
Seokhwan Kim (Pohang University of Science and Technology)
Sangkeun Jung (Pohang University of Science and Technology)
Young-Bum Kim (University of Wisconsin-Madison)
Cheongjae Lee (Pohang University of Science and Technology)
Donghyeon Lee (Pohang University of Science and Technology)
Kyungduk Kim (Pohang University of Science and Technology)
Jihyun Eun (Pohang University of Science and Technology)