Publication


Featured research published by Katsunori Kotani.


International Symposium on Emerging Technologies for Education | 2016

Learner Feature Variation in Measuring the Listenability for Learners of English as a Foreign Language

Katsunori Kotani; Takehiko Yoshimi

Previous research on the ease of listening comprehension (henceforth, listenability) has measured listenability on the basis of sentence properties such as the length of words/sentences and speech rate. Recent research has included features of listeners, which are required for the measurement of listenability for English learners because their listening proficiencies vary greatly from the beginner to the advanced level. Given the importance of listening proficiency as a listener feature, this study developed listenability measurement methods based on the costs of compiling listener features: expensive features extracted from test scores and inexpensive features extracted from learners’ experiences. The experimental results showed that inexpensive features made substantial contributions to the measurement of middle-range listenability.
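As a hedged illustration of the feature-cost contrast described in the abstract, the sketch below fits the same simple listenability regression once with hypothetical test-score ("expensive") listener features and once with hypothetical experience ("inexpensive") features; all feature names, values, and the least-squares model are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

# Hypothetical listener features (the paper's actual variables differ):
# "expensive" features come from proficiency tests, "inexpensive" ones
# from a short questionnaire about the learner's experience.
expensive = {"listening_test_score": 320, "vocabulary_test_score": 41}
inexpensive = {"years_of_study": 6, "weekly_listening_hours": 2,
               "self_rated_proficiency": 3}

# Toy data: three sentences rated for listenability by the same learner.
sentences = [{"words_per_sentence": 12, "speech_rate_wpm": 140},
             {"words_per_sentence": 22, "speech_rate_wpm": 170},
             {"words_per_sentence": 8,  "speech_rate_wpm": 120}]
ratings = np.array([4.0, 2.0, 5.0])

def design_matrix(listener, sentence_rows):
    """One row per sentence: intercept, listener features, sentence features."""
    return np.array([[1.0] + list(listener.values()) + list(s.values())
                     for s in sentence_rows])

for name, listener in (("expensive", expensive), ("inexpensive", inexpensive)):
    X = design_matrix(listener, sentences)
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)  # ordinary least squares
    print(name, "residual:", round(float(np.linalg.norm(X @ coef - ratings)), 3))
```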


International Conference on the Computer Processing of Oriental Languages | 2009

Validity of an Automatic Evaluation of Machine Translation Using a Word-Alignment-Based Classifier

Katsunori Kotani; Takehiko Yoshimi; Takeshi Kutsumi; Ichiko Sata

Human evaluation of machine translation is thorough but expensive, so automatic evaluation is often used when developing a machine translation system. From the viewpoint of evaluation cost, there are two types of evaluation methods: one uses (multiple) reference translations, e.g., METEOR, and the other classifies a machine translation as either machine-like or human-like based on translation properties, i.e., a classification-based method. Previous studies showed that classification-based methods can evaluate translations properly. These studies constructed classifiers that learned linguistic properties of translations, such as sentence length, syntactic complexity, and literalness of translation, and their classifiers achieved high classification accuracy. These previous studies, however, did not examine whether classification accuracy reflects translation quality. Hence, we investigated whether classification accuracy depends on translation quality. The experimental results showed that our method could correctly distinguish degrees of translation quality.
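A rough sketch of the validity check described above, under the assumption that machine translations have already been grouped into human-judged quality bands; the bands, labels, and counts are invented for illustration and are not the paper's data.

```python
from collections import defaultdict

# Each item: (human-judged quality band, label assigned by the
# machine-like/human-like classifier). Data is invented for illustration.
items = [
    ("low",    "machine-like"), ("low",    "machine-like"),
    ("medium", "machine-like"), ("medium", "human-like"),
    ("high",   "human-like"),   ("high",   "human-like"),
]

# For machine translations, a "machine-like" decision counts as a correct
# classification; tracking that rate per band shows whether classification
# accuracy falls as translation quality rises.
per_band = defaultdict(lambda: [0, 0])  # band -> [correct, total]
for band, predicted in items:
    per_band[band][0] += predicted == "machine-like"
    per_band[band][1] += 1

for band in ("low", "medium", "high"):
    correct, total = per_band[band]
    print(f"{band:>6}: {correct}/{total} classified as machine-like")
```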


Hellenic Conference on Artificial Intelligence | 2010

A machine learning-based evaluation method for machine translation

Katsunori Kotani; Takehiko Yoshimi

Constructing a classifier that distinguishes machine translations from human translations is a promising approach to the automatic evaluation of machine-translated sentences. Using this approach, we constructed a classifier with Support Vector Machines based on word-alignment distributions between source sentences and human or machine translations. This paper investigates the validity of the classification-based method by comparing it with well-known evaluation methods. The experimental results show that our classification-based method can accurately evaluate the fluency of machine translations.
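A minimal sketch of the classification step, assuming the word-alignment distributions have already been reduced to fixed-length feature vectors; the two features and their values here are placeholders, not the paper's actual feature set.

```python
from sklearn.svm import SVC

# Each row is one translation, described by hypothetical alignment-derived
# features, e.g. the fraction of source words aligned one-to-one and the
# fraction of crossing alignment links.
X_train = [[0.82, 0.05],   # human translation
           [0.79, 0.07],   # human translation
           [0.95, 0.21],   # machine translation
           [0.97, 0.18]]   # machine translation
y_train = ["human", "human", "machine", "machine"]

# Support Vector Machine classifier, as in the classification-based method.
classifier = SVC(kernel="rbf", gamma="scale")
classifier.fit(X_train, y_train)

# A held-out translation is then judged human-like or machine-like;
# those decisions are what the evaluation is based on.
print(classifier.predict([[0.85, 0.06]]))
```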


International Conference of the Pacific Association for Computational Linguistics | 2017

Listenability Measurement Based on Learners’ Transcription Performance

Katsunori Kotani; Takehiko Yoshimi

Language teachers using listening materials on the Internet need to examine the ease of the listening materials (hereafter, listenability) they choose in order to maintain learners’ motivation for listening practice since the listenability of such materials is not controlled, unlike commercially available teaching materials. This study proposes to use a listenability index based on learners’ transcription performance. Transcription performance was determined using the normalized edit distance (hereafter, NED) from a learner’s transcription to a reference sentence. We examined the reliability and validity of NED as a dependent variable for listenability measurement using multiple regression analysis in an experiment comprising 50 learners of English as a foreign language. The results supported the reliability and validity of NED.
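A small sketch of such an index: a word-level edit distance between the transcription and the reference, normalized to the range 0 to 1. Dividing by the length of the longer token sequence is an assumption here, since the exact normalization is not spelled out in this summary.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, start=1):
        curr = [i]
        for j, tok_b in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                      # deletion
                            curr[j - 1] + 1,                  # insertion
                            prev[j - 1] + (tok_a != tok_b)))  # substitution
        prev = curr
    return prev[-1]

def normalized_edit_distance(transcription, reference):
    """NED between a learner's transcription and the reference sentence."""
    trans, ref = transcription.split(), reference.split()
    return edit_distance(trans, ref) / max(len(trans), len(ref), 1)

# 0.0 means a perfect transcription; values near 1.0 mean it is far off.
print(normalized_edit_distance("the cat sat on mat", "the cat sat on the mat"))
```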


Meeting of the Association for Computational Linguistics | 2015

Application of a Corpus to Identify Gaps between English Learners and Native Speakers

Katsunori Kotani; Takehiko Yoshimi

In order to develop effective computer-assisted language teaching systems for learners of English as a foreign language, it is first necessary to identify gaps between learners and native speakers in the four basic linguistic skills (reading, writing, pronunciation, and listening). To identify these gaps, the accuracy and fluency in language use between learners and native speakers should be compared using a learner corpus. However, previous corpora have not included all necessary types of linguistic data. Therefore, in this study, we aimed to design and build a new corpus comprising all types of linguistic data necessary for comparing accuracy and fluency in basic linguistic skills between learners and native speakers.


International Conference of the Pacific Association for Computational Linguistics | 2015

Measuring Readability for Learners of English as a Foreign Language by Linguistic and Learner Features

Katsunori Kotani; Takehiko Yoshimi

The Internet serves as a source of authentic reading material, enabling learners to practice English in real contexts when learning English as a foreign language. An adaptive computer-assisted language learning and teaching system can assist in obtaining authentic materials such as news articles from the Internet. However, to match material level to a learner’s reading proficiency, the system must be equipped with a method to measure proficiency-based readability. Therefore, we developed a method for doing so. With our method, readability is measured through regression analysis using both learner and linguistic features as independent variables. Learner features account for learner reading proficiency, and linguistic features explain lexical, syntactic, and semantic difficulties of sentences. A cross-validation test showed that readability measured with our method exhibited higher correlation (r = 0.60) than readability measured only with linguistic features (r = 0.46). A comparison of our method with the method without learner features showed a statistically significant difference. These results suggest the effectiveness of combined learner and linguistic features for measuring reading proficiency-based readability.
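A minimal sketch of that comparison, assuming the feature matrices are already built; the synthetic features, the plain linear model, and the simulated readability ratings are all assumptions, and the point is only the cross-validated correlation with and without the learner columns.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 100

# Hypothetical feature matrices: linguistic features of each sentence and a
# learner feature (e.g., a reading proficiency score) of the rater.
linguistic = rng.normal(size=(n, 3))  # e.g., length, parse depth, word rarity
learner = rng.normal(size=(n, 1))
readability = (linguistic @ [0.5, 0.3, 0.2] + 0.8 * learner[:, 0]
               + rng.normal(scale=0.5, size=n))

def cross_validated_r(features, target):
    """Pearson correlation between held-out predictions and observed ratings."""
    predictions = cross_val_predict(LinearRegression(), features, target, cv=5)
    return np.corrcoef(target, predictions)[0, 1]

print("linguistic only:      r =", round(cross_validated_r(linguistic, readability), 2))
print("learner + linguistic: r =", round(cross_validated_r(
    np.hstack([linguistic, learner]), readability), 2))
```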


Pacific Asia Conference on Language, Information and Computation | 2014

A Listenability Measuring Method for an Adaptive Computer-assisted Language Learning and Teaching System

Katsunori Kotani; Shota Ueda; Takehiko Yoshimi; Hiroaki Nanjo


US-China Education Review | 2010

A prediction model of foreign language reading proficiency based on reading time and text complexity

Katsunori Kotani; Takehiko Yoshimi; Hitoshi Isahara


ICERI2012 Proceedings | 2012

Applicability of Readability Formulae to the Measurement of Sentence-Level Readability

Katsunori Kotani; Takehiko Yoshimi; Hiroaki Nanjo; Hitoshi Isahara


International Joint Conference on Natural Language Processing | 2011

Compiling Learner Corpus Data of Linguistic Output and Language Processing in Speaking, Listening, Writing, and Reading

Katsunori Kotani; Takehiko Yoshimi; Hiroaki Nanjo; Hitoshi Isahara

Collaboration


Katsunori Kotani's top co-authors and their affiliations.

Hitoshi Isahara

National Institute of Information and Communications Technology


Ichiko Sata

National Archives and Records Administration


Takeshi Kutsumi

National Archives and Records Administration


Nobutoshi Hatanaka

Tokyo University of Information Sciences


Shin Oshima

Kansai Gaidai University


Toshiyuki Kanamaru

National Institute of Information and Communications Technology
