
Publications


Featured research published by Keisuke Sakaguchi.


International Joint Conference on Natural Language Processing | 2015

Ground Truth for Grammatical Error Correction Metrics

Courtney Napoles; Keisuke Sakaguchi; Matt Post; Joel R. Tetreault

How do we know which grammatical error correction (GEC) system is best? A number of metrics have been proposed over the years, each motivated by weaknesses of previous metrics; however, the metrics themselves have not been compared to an empirical gold standard grounded in human judgments. We conducted the first human evaluation of GEC system outputs, and show that the rankings produced by metrics such as MaxMatch and I-measure do not correlate well with this ground truth. As a step towards better metrics, we also propose GLEU, a simple variant of BLEU, modified to account for both the source and the reference, and show that it hews much more closely to human judgments.
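
The exact GLEU formula is given in the paper; the toy sketch below only illustrates the underlying intuition, rewarding candidate n-grams that match the reference and penalizing those that merely copy the source. The function name, penalty weight, and example sentences are all illustrative assumptions, not the published definition.

```python
# Loose sketch of the GLEU intuition (not the exact published formula):
# BLEU-style n-gram precision over the candidate, where n-grams shared with
# the reference count positively and n-grams that only survive from the
# source (uncorrected material) are penalized.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def gleu_like_score(source, candidate, reference, max_n=4, penalty=1.0):
    score = 0.0
    for n in range(1, max_n + 1):
        cand, ref, src = ngrams(candidate, n), ngrams(reference, n), ngrams(source, n)
        reward = sum((cand & ref).values())            # candidate n-grams shared with the reference
        leftover = sum((cand & (src - ref)).values())  # candidate n-grams found only in the source
        total = max(sum(cand.values()), 1)
        score += max(reward - penalty * leftover, 0) / total
    return score / max_n

src = "she go to school".split()
hyp = "she goes to school".split()
ref = "she goes to school".split()
print(gleu_like_score(src, src, ref))  # leaving the error uncorrected scores low
print(gleu_like_score(src, hyp, ref))  # the corrected hypothesis scores high
```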


Workshop on Statistical Machine Translation | 2014

Efficient Elicitation of Annotations for Human Evaluation of Machine Translation

Keisuke Sakaguchi; Matt Post; Benjamin Van Durme

A main output of the annual Workshop on Statistical Machine Translation (WMT) is a ranking of the systems that participated in its shared translation tasks, produced by aggregating pairwise sentence-level comparisons collected from human judges. Over the past few years, there have been a number of tweaks to the aggregation formula in attempts to address issues arising from the inherent ambiguity and subjectivity of the task, as well as weaknesses in the proposed models and the manner of model selection. We continue this line of work by adapting the TrueSkill™ algorithm, an online approach for modeling the relative skills of players in ongoing competitions such as Microsoft's Xbox Live, to the human evaluation of machine translation output. Our experimental results show that TrueSkill outperforms other recently proposed models on accuracy, and also can significantly reduce the number of pairwise annotations that need to be collected by sampling non-uniformly from the space of system competitions.
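
As a minimal sketch only, the snippet below shows a two-system, no-draw TrueSkill-style update of the general flavor described above: each system's skill is a Gaussian that shifts after every pairwise judgment. The constants are conventional defaults rather than the paper's configuration, and the non-uniform sampling of system pairs is not shown.

```python
# Minimal TrueSkill-style update for one pairwise judgment (winner vs. loser),
# ignoring ties. Each system's skill is modeled as N(mu, sigma^2); constants
# and names are illustrative.
import math

BETA = 25.0 / 6.0  # performance noise (conventional default scale)

def _pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def _cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def update(winner, loser):
    """winner/loser are dicts with 'mu' and 'sigma'; updated in place."""
    c = math.sqrt(2 * BETA ** 2 + winner["sigma"] ** 2 + loser["sigma"] ** 2)
    t = (winner["mu"] - loser["mu"]) / c
    v = _pdf(t) / _cdf(t)   # mean shift factor
    w = v * (v + t)         # variance shrinkage factor
    winner["mu"] += winner["sigma"] ** 2 / c * v
    loser["mu"] -= loser["sigma"] ** 2 / c * v
    winner["sigma"] *= math.sqrt(max(1 - winner["sigma"] ** 2 / c ** 2 * w, 1e-6))
    loser["sigma"] *= math.sqrt(max(1 - loser["sigma"] ** 2 / c ** 2 * w, 1e-6))

systems = {name: {"mu": 25.0, "sigma": 25.0 / 3.0} for name in ["sysA", "sysB"]}
update(systems["sysA"], systems["sysB"])   # a judge preferred sysA's output
print(systems)
```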


Empirical Methods in Natural Language Processing | 2016

Universal Decompositional Semantics on Universal Dependencies

Aaron Steven White; Drew Reisinger; Keisuke Sakaguchi; Tim Vieira; Sheng Zhang; Rachel Rudinger; Kyle Rawlins; Benjamin Van Durme

We present a framework for augmenting data sets from the Universal Dependencies project with Universal Decompositional Semantics. Where the Universal Dependencies project aims to provide a syntactic annotation standard that can be used consistently across many languages as well as a collection of corpora that use that standard, our extension has similar aims for semantic annotation. We describe results from annotating the English Universal Dependencies treebank, dealing with word senses, semantic roles, and event properties.
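
Purely as a hypothetical illustration of what such a layer could look like (the attribute names, sense label, and numeric values below are invented, not the project's actual schema), decompositional annotations can be pictured as scalar properties attached on top of UD predicate-argument structure:

```python
# Hypothetical sketch: a UD parse plus a decompositional semantic layer that
# records word senses and scalar proto-role-style properties on
# predicate-argument pairs. All labels and values are illustrative only.
ud_sentence = {
    "tokens": ["Mary", "ate", "the", "apple"],
    # (head, dependent, relation), 1-indexed as in CoNLL-U
    "deps": [(0, 2, "root"), (2, 1, "nsubj"), (4, 3, "det"), (2, 4, "obj")],
}

decomp_semantics = {
    "predicates": [{"token": 2, "sense": "eat (consumption)"}],
    "arguments": [
        {"pred": 2, "arg": 1, "properties": {"volition": 0.9, "awareness": 0.9}},
        {"pred": 2, "arg": 4, "properties": {"change_of_state": 0.8, "volition": 0.1}},
    ],
}
```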


North American Chapter of the Association for Computational Linguistics | 2015

Effective Feature Integration for Automated Short Answer Scoring

Keisuke Sakaguchi; Michael Heilman; Nitin Madnani

A major opportunity for NLP to have a real-world impact is in helping educators score student writing, particularly content-based writing (i.e., the task of automated short answer scoring). A major challenge in this enterprise is that scored responses to a particular question (i.e., labeled data) are valuable for modeling but limited in quantity. Additional information from the scoring guidelines for humans, such as exemplars for each score level and descriptions of key concepts, can also be used. Here, we explore methods for integrating scoring guidelines and labeled responses, and we find that stacked generalization (Wolpert, 1992) improves performance, especially for small training sets.
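
The sketch below shows stacked generalization only in the generic scikit-learn sense: base learners over different (toy) feature views whose cross-validated predictions feed a meta-classifier. The data, features, and models are placeholders and do not reproduce the paper's actual setup.

```python
# Generic stacked generalization (Wolpert, 1992): level-0 models produce
# cross-validated predictions that become features for a level-1 model.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy data: short answers with 0-2 rubric scores (two examples per score level).
responses = [
    "photosynthesis converts sunlight into chemical energy",
    "plants use light to make their own food",
    "plants need light to grow",
    "the leaves take in sunlight",
    "the sun is very hot",
    "i like plants",
]
scores = [2, 2, 1, 1, 0, 0]

# Level-0 models; in practice these would use different feature views, e.g.
# similarity to scored responses vs. similarity to the scoring guidelines.
base_models = [
    make_pipeline(CountVectorizer(), LinearSVC()),
    make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
]

# Cross-validated predictions become meta-features, so the level-1 model never
# sees predictions made on a base model's own training data.
meta_features = np.column_stack(
    [cross_val_predict(m, responses, scores, cv=2) for m in base_models]
)
meta_clf = LogisticRegression(max_iter=1000).fit(meta_features, scores)

# Refit the base models on all data for use at prediction time.
for m in base_models:
    m.fit(responses, scores)

test = ["plants turn light into food"]
test_meta = np.column_stack([m.predict(test) for m in base_models])
print(meta_clf.predict(test_meta))
```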


Empirical Methods in Natural Language Processing | 2016

There's No Comparison: Reference-less Evaluation Metrics in Grammatical Error Correction

Courtney Napoles; Keisuke Sakaguchi; Joel R. Tetreault

Current methods for automatically evaluating grammatical error correction (GEC) systems rely on gold-standard references. However, these methods suffer from penalizing grammatical edits that are correct but not in the gold standard. We show that reference-less grammaticality metrics correlate very strongly with human judgments and are competitive with the leading reference-based evaluation metrics. By interpolating both methods, we achieve state-of-the-art correlation with human judgments. Finally, we show that GEC metrics are much more reliable when they are calculated at the sentence level instead of the corpus level. We have set up a CodaLab site for benchmarking GEC output using a common dataset and different evaluation metrics.
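
As a toy illustration of the interpolation and sentence-level scoring described above (the paper's actual grammaticality model, reference-based metric, and tuned weight are not reproduced; all numbers below are made up):

```python
def interpolate(grammaticality, reference_based, lam=0.5):
    """Weighted combination; lam weights the reference-less grammaticality score."""
    return lam * grammaticality + (1 - lam) * reference_based

# Score each sentence individually, then average over the corpus, reflecting
# the observation that sentence-level scoring is more reliable than
# corpus-level scoring.
sentence_scores = [
    interpolate(0.92, 0.40),  # fluent correction that happens to differ from the reference
    interpolate(0.35, 0.80),  # close to the reference wording but still ungrammatical
]
print(sum(sentence_scores) / len(sentence_scores))
```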


Meeting of the Association for Computational Linguistics | 2016

Phrase Structure Annotation and Parsing for Learner English

Ryo Nagata; Keisuke Sakaguchi

There has been almost no work on phrase structure annotation and parsing specifically designed for learner English, despite the fact that they are useful for representing the structural characteristics of learner English. To address this problem, in this paper we first propose a phrase structure annotation scheme for learner English and annotate two different learner corpora using it. Second, we show their usefulness, reporting on (a) inter-annotator agreement rate, (b) characteristic CFG rules in the corpora, and (c) parsing performance on them. In addition, we explore methods to improve phrase structure parsing for learner English (achieving an F-measure of 0.878). Finally, we release the full annotation guidelines, the annotated data, and the improved parser model for learner English to the public.


Meeting of the Association for Computational Linguistics | 2017

Error-repair Dependency Parsing for Ungrammatical Texts

Keisuke Sakaguchi; Matt Post; Benjamin Van Durme

We propose a new dependency parsing scheme which jointly parses a sentence and repairs grammatical errors by extending the non-directional transition-based formalism of Goldberg and Elhadad (2010) with three additional actions: SUBSTITUTE, DELETE, and INSERT. Because these actions may cause an infinite loop in derivation, we also introduce simple constraints that ensure that the parser terminates. We evaluate our model with respect to dependency accuracy and grammaticality improvements for ungrammatical sentences, demonstrating the robustness and applicability of our scheme.
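
A structural skeleton of such an action set is sketched below. It is not the authors' parser: it only illustrates how edit actions can sit alongside attachment actions, with a per-position edit budget standing in for the kind of constraint that guarantees termination. Scoring, features, and the action classifier are omitted, and all names are illustrative.

```python
# Skeleton of a joint parse-and-repair transition state: easy-first style
# ATTACH plus SUBSTITUTE / DELETE / INSERT edit actions, bounded by an edit
# budget so derivations cannot loop forever.
MAX_EDITS_PER_POSITION = 1  # each position may be rewritten at most once

class ParseState:
    def __init__(self, tokens):
        self.tokens = list(tokens)               # working sentence (may be repaired)
        self.pending = list(range(len(tokens)))  # indices not yet attached to a head
        self.arcs = []                           # (head_index, dependent_index)
        self.edit_count = [0] * len(tokens)

    def attach(self, head, dep):
        # ATTACH (non-directional, easy-first style): dep receives its head
        self.arcs.append((head, dep))
        self.pending.remove(dep)

    def substitute(self, i, new_form):
        # SUBSTITUTE: replace a misused word, respecting the edit budget
        if self.edit_count[i] < MAX_EDITS_PER_POSITION:
            self.tokens[i] = new_form
            self.edit_count[i] += 1

    def delete(self, i):
        # DELETE: drop an extraneous word (kept as None so indices stay stable)
        if self.edit_count[i] < MAX_EDITS_PER_POSITION and i in self.pending:
            self.tokens[i] = None
            self.edit_count[i] += 1
            self.pending.remove(i)

    def insert(self, i, form):
        # INSERT: add a missing word before position i; the new token starts at
        # the edit cap so it cannot itself be rewritten, keeping derivations finite
        self.tokens.insert(i, form)
        self.edit_count.insert(i, MAX_EDITS_PER_POSITION)
        self.pending = [p if p < i else p + 1 for p in self.pending] + [i]
        self.arcs = [(h if h < i else h + 1, d if d < i else d + 1) for h, d in self.arcs]

    def finished(self):
        # terminate once only the root-attached word remains unattached
        return len(self.pending) <= 1
```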


Meeting of the Association for Computational Linguistics | 2013

Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners

Keisuke Sakaguchi; Yuki Arase; Mamoru Komachi


Conference on Computational Natural Language Learning | 2013

NAIST at 2013 CoNLL Grammatical Error Correction Shared Task

Ippei Yoshimoto; Tomoya Kose; Kensuke Mitsuzawa; Keisuke Sakaguchi; Tomoya Mizumoto; Yuta Hayashibe; Mamoru Komachi; Yuji Matsumoto


Transactions of the Association for Computational Linguistics | 2016

Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality

Keisuke Sakaguchi; Courtney Napoles; Matt Post; Joel R. Tetreault

Collaboration


Dive into Keisuke Sakaguchi's collaborations.

Top Co-Authors

Matt Post, Johns Hopkins University
Mamoru Komachi, Nara Institute of Science and Technology
Yuji Matsumoto, Nara Institute of Science and Technology
Tomoya Mizumoto, Nara Institute of Science and Technology
Yuta Hayashibe, Nara Institute of Science and Technology
Ai Azuma, Nara Institute of Science and Technology