Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Akinori Ito is active.

Publication


Featured research published by Akinori Ito.


International Conference on Spoken Language Processing | 1996

Language modeling by string pattern N-gram for Japanese speech recognition

Akinori Ito; Masaki Kohda

This paper describes a new, powerful statistical language model for Japanese speech recognition based on the N-gram model. In English, a sentence is written word by word; a Japanese sentence, on the other hand, has no word boundary characters. Therefore, a Japanese sentence requires word segmentation by morphological analysis before a word N-gram can be constructed. We propose an N-gram based language model that requires no word segmentation. This model uses character string patterns as the units of the N-gram. The string patterns are chosen from the training text according to a statistical criterion. We carried out several experiments comparing the perplexities of the proposed and conventional models, which showed the advantage of our model. For the interest of many readers, we also applied this method to English text. In a preliminary experiment, the proposed method performed better than the conventional word trigram.
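As a rough illustration of the idea, the sketch below builds an N-gram over character string patterns instead of words. It is a minimal sketch under stated assumptions: the pattern-selection rule (keep the most frequent substrings up to length 4) and the add-alpha smoothed bigram are stand-ins for illustration, not the statistical criterion or smoothing used in the paper, and all function names here are hypothetical.

from collections import Counter
from math import log2

def select_patterns(text, max_len=4, top_k=200):
    # Collect frequent character substrings to serve as N-gram units
    # (stand-in criterion; the paper uses its own statistical criterion).
    counts = Counter()
    for n in range(2, max_len + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return {p for p, _ in counts.most_common(top_k)}

def segment(text, patterns, max_len=4):
    # Greedy longest-match segmentation into pattern units;
    # single characters are always allowed as a fallback.
    units, i = [], 0
    while i < len(text):
        for n in range(max_len, 0, -1):
            if n == 1 or text[i:i + n] in patterns:
                units.append(text[i:i + n])
                i += n
                break
    return units

def bigram_perplexity(train_units, test_units, alpha=1.0):
    # Per-unit perplexity of an add-alpha smoothed bigram over the units.
    vocab = set(train_units) | set(test_units)
    uni = Counter(train_units)
    bi = Counter(zip(train_units, train_units[1:]))
    logp = 0.0
    for prev, cur in zip(test_units, test_units[1:]):
        p = (bi[(prev, cur)] + alpha) / (uni[prev] + alpha * len(vocab))
        logp += log2(p)
    return 2 ** (-logp / max(len(test_units) - 1, 1))

For example, calling select_patterns(train_text) and then bigram_perplexity(segment(train_text, patterns), segment(test_text, patterns)) lets one compare the pattern-unit model against a plain character bigram (patterns = set()), which mirrors the kind of perplexity comparison described in the abstract.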


Systems and Computers in Japan | 2002

Construction and evaluation of language models based on stochastic context-free grammar for speech recognition

Chiori Hori; Masaharu Katoh; Akinori Ito; Masaki Kohda

This paper deals with the use of a stochastic context-free grammar (SCFG) for large vocabulary continuous speech recognition; in particular, an SCFG with phrase-level dependency rules is built. Unlike n-gram models, the SCFG can describe not only local constraints but also global constraints pertaining to the sentence as a whole, making language models with great expressive power possible. However, the inside-outside algorithm must be used to estimate the SCFG parameters, which involves a great amount of computation, proportional to the third power of both the number of nonterminal symbols and the input string length. Hence, due to the difficulty of handling extensive text corpora, the SCFG has hardly been applied as a language model for very large vocabulary continuous speech recognition. The proposed phrase-level dependency SCFG allows a significant reduction of the computational load. In experiments with the EDR corpus, the proposed method proved effective. In experiments with the Mainichi corpus, a large-scale phrase-level dependency SCFG was built for a very large vocabulary continuous speech recognition system. Speech recognition tests with a vocabulary of about 5000 words showed that the proposed method on its own did not match the trigram model in performance; however, when it was used in combination with a trigram model, the error rate was reduced by 14% compared to the trigram model alone.
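To make the complexity claim concrete, here is a minimal sketch of the inside pass for an SCFG in Chomsky normal form; it is an illustrative assumption, not the paper's implementation, and the dictionary-based grammar format and function name are hypothetical. The three nested loops over span length, start position, and split point give the cubic dependence on sentence length, while iterating over all binary rules A -> B C gives the cubic dependence on the number of nonterminals.

from collections import defaultdict

def inside_probabilities(words, binary_rules, lexical_rules, nonterminals):
    # binary_rules: {(A, B, C): P(A -> B C)}; lexical_rules: {(A, w): P(A -> w)}.
    # beta[(i, j, A)] = probability that A derives words[i..j] (inclusive).
    n = len(words)
    beta = defaultdict(float)
    for i, w in enumerate(words):
        for A in nonterminals:
            beta[(i, i, A)] = lexical_rules.get((A, w), 0.0)
    for span in range(2, n + 1):                           # O(n) span lengths
        for i in range(n - span + 1):                      # O(n) start positions
            j = i + span - 1
            for k in range(i, j):                          # O(n) split points
                for (A, B, C), p in binary_rules.items():  # up to O(|N|^3) rules
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k + 1, j, C)]
    return beta

The sentence probability is beta[(0, n - 1, S)] for the start symbol S, and inside-outside re-estimation repeats passes like this over the whole corpus. Restricting the grammar to phrase-level dependency rules shrinks the effective rule set, which is where the reduction in computational load described above comes from.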


Conference of the International Speech Communication Association | 2000

Free software toolkit for Japanese large vocabulary continuous speech recognition

Tatsuya Kawahara; Akinobu Lee; Tetsunori Kobayashi; Kazuya Takeda; Nobuaki Minematsu; Shigeki Sagayama; Katsunobu Itou; Akinori Ito; Mikio Yamamoto; Atsushi Yamada; Takehito Utsuro; Kiyohiro Shikano


The Journal of The Acoustical Society of Japan (e) | 1999

Japanese Dictation Toolkit-1997 version-

Tatsuya Kawahara; Akinobu Lee; Tetsunori Kobayashi; Kazuya Takeda; Nobuaki Minematsu; Katsunobu Itou; Akinori Ito; Mikio Yamamoto; Atsushi Yamada; Takehito Utsuro; Kiyohiro Shikano


Language Resources and Evaluation | 2002

Continuous Speech Recognition Consortium: an Open Repository for CSR Tools and Models

Akinobu Lee; Tatsuya Kawahara; Kazuya Takeda; Masato Mimura; Atsushi Yamada; Akinori Ito; Katsunobu Itou; Kiyohiro Shikano


Conference of the International Speech Communication Association | 1999

A new metric for stochastic language model evaluation.

Akinori Ito; Masaki Kohda; Mari Ostendorf


Systems and Computers in Japan | 2001

Erratum: Language modeling by stochastic dependency grammar for Japanese speech recognition

Akinori Ito; Chiori Hori; Masaharu Katoh; Masaki Kohda


Archive | 1998

Common Platform of Japanese Large Vocabulary Continuous Speech Recognizer Assessment -- Proposal and Initial Results --

Tatsuya Kawahara; Akinobu Lee; Tetsunori Kobayashi; Kazuya Takeda; Nobuaki Minematsu; Katsunobu Itou; Akinori Ito; Mikio Yamamoto; Atsushi Yamada; Takehito Utsuro; Kiyohiro Shikano


Transactions of Information Processing Society of Japan | 2002

A Metric Based on Likelihood Difference for n-gram Language Model Evaluation

Akinori Ito; Masaki Kohda


Proceedings of the Meeting of the Acoustical Society of Japan | 2001

Performance improvement of LVCSR using vocal tract length normalization

Daisuke Fujita; Masaharu Katoh; Akinori Ito; Masaki Kohda

Collaboration


Dive into Akinori Ito's collaboration.

Top Co-Authors

Akinobu Lee

Nagoya Institute of Technology

Kiyohiro Shikano

Nara Institute of Science and Technology