Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yutaka Ashikari is active.

Publication


Featured research published by Yutaka Ashikari.


Tsinghua Science & Technology | 2008

NICT/ATR Chinese-Japanese-English Speech-to-Speech Translation System

Tohru Shimizu; Yutaka Ashikari; Eiichiro Sumita; Jin-Song Zhang; Satoshi Nakamura

This paper describes the latest version of the Chinese-Japanese-English handheld speech-to-speech translation system developed by NICT/ATR, which is now ready to be deployed for travelers. With the entire speech-to-speech translation function implemented in one terminal, it realizes real-time, location-free speech-to-speech translation. A new noise-suppression technique notably improves speech recognition performance. Corpus-based approaches to speech recognition, machine translation, and speech synthesis enable coverage of a wide variety of topics and portability to other languages. Test results show that the character accuracy of speech recognition is 82%-94% for Chinese speech, and the bilingual evaluation understudy (BLEU) score of machine translation is 0.55-0.74 for Chinese-Japanese and Chinese-English.
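The BLEU figure quoted above is an n-gram overlap score between system output and reference translations. As a rough illustration only, a minimal sentence-level variant can be sketched as follows; the smoothing and the example sentences are assumptions for demonstration, not the paper's actual corpus-level evaluation setup.

```python
# Minimal BLEU-style score (illustrative sketch; not the paper's exact evaluation).
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth to avoid log(0)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = math.exp(min(0.0, 1.0 - len(ref) / max(len(hyp), 1)))
    return brevity * geo_mean

# Hypothetical example pair; real BLEU is computed over a whole test corpus.
print(bleu("the hotel is near the station", "the hotel is close to the station"))
```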


2009 Oriental COCOSDA International Conference on Speech Database and Assessments | 2009

Toward translating Indonesian spoken utterances to/from other languages

Sakriani Sakti; Michael J. Paul; Ranniery Maia; Shinsuke Sakai; Noriyuki Kimura; Yutaka Ashikari; Eiichiro Sumita; Satoshi Nakamura

This paper outlines the National Institute of Information and Communications Technology / Advanced Telecommunications Research Institute International (NICT/ATR) research activities in developing a spoken language translation system, especially for translating Indonesian spoken utterances into/from Japanese or English. Since the NICT/ATR Japanese-English speech translation system is well established and has been widely known for many years, our focus here is only on the additional components related to Indonesian spoken language technology. These include the development of an Indonesian large-vocabulary continuous speech recognizer, Indonesian-Japanese and Indonesian-English machine translators, and an Indonesian speech synthesizer. Each of these component technologies was developed using corpus-based speech and language processing approaches. Currently, all of these components have been successfully incorporated into the mobile terminal of the NICT/ATR multilingual speech translation system.
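The abstract lists one corpus-based ASR, MT, and TTS component per language or translation direction. A minimal sketch of how such components might be registered and chained for a given language pair is shown below; the data structures and stub functions are illustrative assumptions, not the NICT/ATR terminal's actual interfaces.

```python
# Hypothetical registry of per-language-pair components routing an Indonesian
# utterance through ASR -> MT -> TTS. Names and interfaces are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class TranslationRoute:
    asr: Callable[[bytes], str]   # source-language speech -> source text
    mt: Callable[[str], str]      # source text -> target text
    tts: Callable[[str], bytes]   # target text -> target-language speech

# One corpus-based component set per direction, keyed by (source, target).
routes: Dict[Tuple[str, str], TranslationRoute] = {
    ("id", "ja"): TranslationRoute(asr=lambda a: "teks sumber",
                                   mt=lambda t: "翻訳されたテキスト",
                                   tts=lambda t: b"audio-bytes"),
    ("id", "en"): TranslationRoute(asr=lambda a: "teks sumber",
                                   mt=lambda t: "translated text",
                                   tts=lambda t: b"audio-bytes"),
}

def translate_speech(audio: bytes, source: str, target: str) -> bytes:
    """Chain the registered components for one language pair."""
    route = routes[(source, target)]
    return route.tts(route.mt(route.asr(audio)))
```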


ACM Transactions on Speech and Language Processing | 2012

Distributed speech translation technologies for multiparty multilingual communication

Sakriani Sakti; Michael Paul; Andrew M. Finch; Xinhui Hu; Jinfu Ni; Noriyuki Kimura; Shigeki Matsuda; Chiori Hori; Yutaka Ashikari; Hisashi Kawai; Hideki Kashioka; Eiichiro Sumita; Satoshi Nakamura

Developing a multilingual speech translation system requires efforts in constructing automatic speech recognition (ASR), machine translation (MT), and text-to-speech synthesis (TTS) components for all possible source and target languages. If the numerous ASR, MT, and TTS systems for different language pairs developed independently in different parts of the world could be connected, multilingual speech translation systems for a multitude of language pairs could be achieved. Yet, there is currently no common, flexible framework that can provide an entire speech translation process by bringing together heterogeneous speech translation components. In this article we therefore propose a distributed architecture framework for multilingual speech translation in which all speech translation components are provided on distributed servers and cooperate over a network. This framework can facilitate the connection of different components and functions. To show the overall mechanism, we first present our state-of-the-art technologies for multilingual ASR, MT, and TTS components, and then describe how to combine those systems into the proposed network-based framework. The client applications are implemented on a handheld mobile terminal device, and all data exchanges among client users and spoken language technology servers are managed through a Web protocol. To support multiparty communication, an additional communication server is provided for simultaneously distributing the speech translation results from one user to multiple users. Field testing shows that the system is capable of realizing multiparty multilingual speech translation for real-time and location-independent communication.
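The article's framework places ASR, MT, and TTS on distributed servers that cooperate over a network, with a separate communication server fanning results out to multiple participants. The sketch below shows one way such a chain could be wired over HTTP; the endpoints, JSON fields, and session handling are assumptions for illustration, not the protocol actually used in the system.

```python
# Sketch of network-based speech translation: each stage runs on its own server
# and a communication server distributes the result to all session participants.
# URLs and payload fields below are hypothetical.
import requests

ASR_URL = "http://asr.example.org/recognize"
MT_URL = "http://mt.example.org/translate"
TTS_URL = "http://tts.example.org/synthesize"
COMM_URL = "http://comm.example.org/broadcast"

def speech_to_speech(audio: bytes, source: str, target: str, session: str) -> bytes:
    # 1. Speech recognition on the ASR server.
    text = requests.post(ASR_URL, files={"audio": audio},
                         data={"lang": source}).json()["text"]
    # 2. Machine translation on the MT server.
    translated = requests.post(MT_URL, json={"text": text, "source": source,
                                             "target": target}).json()["text"]
    # 3. Speech synthesis on the TTS server.
    speech = requests.post(TTS_URL, json={"text": translated,
                                          "lang": target}).content
    # 4. Hand the result to the communication server so every participant
    #    in the multiparty session receives it, not only the original speaker.
    requests.post(COMM_URL, data={"session": session, "lang": target},
                  files={"audio": speech})
    return speech
```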


Mobile Data Management | 2006

Developing Client-Server Speech Translation Platform

Tohru Shimizu; Yutaka Ashikari; Toshiyuki Takezawa; Masahide Mizushima; Genichiro Kikui; Yutaka Sasaki; Satoshi Nakamura

This paper describes a client-server speech translation platform designed for use on mobile terminals. Because terminals and servers are connected via a 3G public mobile phone network, speech translation services are available in various places with a thin client. The platform realizes hands-free communication and robustness for real use of speech translation in noisy environments. A microphone array and a new noise suppression technique improve speech recognition performance, and a corpus-based approach enables wide coverage, robustness, and portability to new languages and domains. Experimental results evaluating the communicability of speakers of different languages show that task completion rates of 85% and 75% are achieved with the speech translation system for Japanese-English and Japanese-Chinese, respectively. The system is also able to convey approximately one item of information per two utterances (one turn) on average for both Japanese-English and Japanese-Chinese in a task-oriented dialogue.
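To make the two evaluation measures quoted above concrete, the toy computation below derives a task completion rate and an information-items-per-turn figure from a handful of made-up dialogue sessions; the numbers are invented for illustration and are unrelated to the paper's experimental data.

```python
# Toy illustration of the evaluation measures; all session data is fabricated.
sessions = [
    {"completed": True,  "turns": 6, "info_items": 3},
    {"completed": True,  "turns": 4, "info_items": 2},
    {"completed": False, "turns": 8, "info_items": 3},
    {"completed": True,  "turns": 5, "info_items": 3},
]

task_completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
items_per_turn = sum(s["info_items"] for s in sessions) / sum(s["turns"] for s in sessions)

print(f"task completion rate: {task_completion_rate:.0%}")   # 75% in this toy data
print(f"information items per turn: {items_per_turn:.2f}")   # ~0.5, i.e. one item per two turns
```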


Archive | 2010

Speech translation system, first terminal apparatus, speech recognition server, translation server, and speech synthesis server

Satoshi Nakamura; Eiichiro Sumita; Yutaka Ashikari; Noriyuki Kimura; Chiori Hori


IWSLT | 2006

Development of client-server speech translation system on a multi-lingual speech communication platform.

Tohru Shimizu; Yutaka Ashikari; Eiichiro Sumita; Hideki Kashioka; Satoshi Nakamura


Archive | 2010

Speech translation system, first terminal device, speech recognition server device, translation server device, and speech synthesis server device

Satoshi Nakamura; Eiichiro Sumita; Yutaka Ashikari; Noriyuki Kimura; Chiori Hori


Archive | 2010

Speech translation system, dictionary server device, and program

Satoshi Nakamura; Eiichiro Sumita; Yutaka Ashikari; Noriyuki Kimura; Chiori Hori


Conference of the International Speech Communication Association | 2005

Spoken dialog system and its evaluation of geographic information system for elderly persons' mobility support.

Takatoshi Jitsuhiro; Shigeki Matsuda; Yutaka Ashikari; Satoshi Nakamura; Ikuko Eguchi Yairi; Seiji Igi


Archive | 2010

Speech translation system, control apparatus and control method

Satoshi Nakamura; Eiichiro Sumita; Yutaka Ashikari; Noriyuki Kimura; Chiori Hori

Collaboration


Dive into Yutaka Ashikari's collaborations.

Top Co-Authors

Satoshi Nakamura, Nara Institute of Science and Technology
Noriyuki Kimura, National Institute of Information and Communications Technology
Chiori Hori, National Institute of Information and Communications Technology
Shigeki Matsuda, National Institute of Information and Communications Technology
Sakriani Sakti, Nara Institute of Science and Technology
Tohru Shimizu, National Institute of Information and Communications Technology
Hideki Kashioka, National Institute of Information and Communications Technology
Hisashi Kawai, National Institute of Information and Communications Technology