Yutaka Ashikari
National Institute of Information and Communications Technology
Publication
Featured research published by Yutaka Ashikari.
Tsinghua Science & Technology | 2008
Tohru Shimizu; Yutaka Ashikari; Eiichiro Sumita; Jin-Song Zhang; Satoshi Nakamura
This paper describes the latest version of the Chinese-Japanese-English handheld speech-to-speech translation system developed by NICT/ATR, which is now ready to be deployed for travelers. With the entire speech-to-speech translation function implemented in a single terminal, it realizes real-time, location-free speech-to-speech translation. A new noise-suppression technique notably improves speech recognition performance. Corpus-based approaches to speech recognition, machine translation, and speech synthesis enable coverage of a wide variety of topics and portability to other languages. Test results show a character accuracy of 82%–94% for Chinese speech recognition and a bilingual evaluation understudy (BLEU) score of 0.55–0.74 for Chinese-Japanese and Chinese-English machine translation.
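The bilingual evaluation understudy (BLEU) score cited above is the standard automatic metric for machine translation quality: a geometric mean of modified n-gram precisions scaled by a brevity penalty. As a rough illustration only (not the paper's evaluation code), a minimal single-reference, sentence-level variant with add-one smoothing might look like:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of modified
    n-gram precisions, scaled by a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # Modified precision: clip each candidate n-gram count by its
        # count in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        if total == 0:  # sentence shorter than n
            return 0.0
        # Add-one smoothing so one empty n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1))

    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat".split(),
           "the cat sat on the mat".split()))  # → 1.0
```

Corpus-level BLEU, as reported in the paper, aggregates n-gram counts over all sentences before taking the geometric mean, so the sketch above would score individual sentences somewhat differently.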
2009 Oriental COCOSDA International Conference on Speech Database and Assessments | 2009
Sakriani Sakti; Michael J. Paul; Ranniery Maia; Shinsuke Sakai; Noriyuki Kimura; Yutaka Ashikari; Eiichiro Sumita; Satoshi Nakamura
This paper outlines the National Institute of Information and Communications Technology / Advanced Telecommunications Research Institute International (NICT/ATR) research activities in developing a spoken language translation system, specifically for translating Indonesian spoken utterances into and from Japanese or English. Since the NICT/ATR Japanese-English speech translation system is well established and has been widely known for many years, our focus here is only on the additional components related to Indonesian spoken language technology. These include an Indonesian large-vocabulary continuous speech recognizer, Indonesian-Japanese and Indonesian-English machine translators, and an Indonesian speech synthesizer. Each of these component technologies was developed using corpus-based speech and language processing approaches. All of these components have now been successfully incorporated into the mobile terminal of the NICT/ATR multilingual speech translation system.
ACM Transactions on Speech and Language Processing | 2012
Sakriani Sakti; Michael Paul; Andrew M. Finch; Xinhui Hu; Jinfu Ni; Noriyuki Kimura; Shigeki Matsuda; Chiori Hori; Yutaka Ashikari; Hisashi Kawai; Hideki Kashioka; Eiichiro Sumita; Satoshi Nakamura
Developing a multilingual speech translation system requires efforts in constructing automatic speech recognition (ASR), machine translation (MT), and text-to-speech synthesis (TTS) components for all possible source and target languages. If the numerous ASR, MT, and TTS systems for different language pairs developed independently in different parts of the world could be connected, multilingual speech translation systems for a multitude of language pairs could be achieved. Yet, there is currently no common, flexible framework that can provide an entire speech translation process by bringing together heterogeneous speech translation components. In this article we therefore propose a distributed architecture framework for multilingual speech translation in which all speech translation components are provided on distributed servers and cooperate over a network. This framework can facilitate the connection of different components and functions. To show the overall mechanism, we first present our state-of-the-art technologies for multilingual ASR, MT, and TTS components, and then describe how to combine those systems into the proposed network-based framework. The client applications are implemented on a handheld mobile terminal device, and all data exchanges among client users and spoken language technology servers are managed through a Web protocol. To support multiparty communication, an additional communication server is provided for simultaneously distributing the speech translation results from one user to multiple users. Field testing shows that the system is capable of realizing multiparty multilingual speech translation for real-time and location-independent communication.
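The core idea of the distributed framework described above is that ASR, MT, and TTS components live on separate servers and are composed per request. The following sketch illustrates that routing pattern only; the broker class and the stub components are hypothetical stand-ins, and a real deployment would dispatch Web-protocol requests to remote servers rather than call local functions:

```python
class SpeechTranslationBroker:
    """Routes each stage of speech translation to a registered component.

    Hypothetical illustration of the paper's distributed architecture:
    components for different languages can be registered independently,
    then chained ASR -> MT -> TTS for any available pair.
    """

    def __init__(self):
        self._components = {}  # (service, language key) -> callable

    def register(self, service, lang, component):
        self._components[(service, lang)] = component

    def translate_speech(self, audio, src, tgt):
        # ASR in the source language, MT for the pair, TTS in the target.
        text = self._components[("asr", src)](audio)
        translated = self._components[("mt", (src, tgt))](text)
        return self._components[("tts", tgt)](translated)

# Stub components standing in for remote ASR/MT/TTS servers.
broker = SpeechTranslationBroker()
broker.register("asr", "ja", lambda audio: "konnichiwa")
broker.register("mt", ("ja", "en"), lambda text: {"konnichiwa": "hello"}[text])
broker.register("tts", "en", lambda text: f"<waveform:{text}>")

print(broker.translate_speech(b"...", "ja", "en"))  # → <waveform:hello>
```

Because components are looked up by language key rather than hard-wired, adding a new language pair only requires registering its servers, which is the portability argument the article makes for the network-based design.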
Mobile Data Management | 2006
Tohru Shimizu; Yutaka Ashikari; Toshiyuki Takezawa; Masahide Mizushima; Genichiro Kikui; Yutaka Sasaki; Satoshi Nakamura
This paper describes a client-server speech translation platform designed for use on mobile terminals. Because terminals and servers are connected via 3G public mobile phone networks, speech translation services are available in many places through a thin client. The platform realizes hands-free communication and the robustness needed for real use of speech translation in noisy environments. A microphone array and a new noise suppression technique improve speech recognition performance, and a corpus-based approach enables wide coverage, robustness, and portability to new languages and domains. An experiment evaluating the communicability of speakers of different languages shows that task completion rates of 85% and 75% are achieved with the speech translation system for Japanese-English and Japanese-Chinese, respectively. The system also conveys approximately one item of information per two utterances (one turn) on average for both Japanese-English and Japanese-Chinese in a task-oriented dialogue.
Archive | 2010
Satoshi Nakamura; Eiichiro Sumita; Yutaka Ashikari; Noriyuki Kimura; Chiori Hori
IWSLT | 2006
Tohru Shimizu; Yutaka Ashikari; Eiichiro Sumita; Hideki Kashioka; Satoshi Nakamura
Conference of the International Speech Communication Association | 2005
Takatoshi Jitsuhiro; Shigeki Matsuda; Yutaka Ashikari; Satoshi Nakamura; Ikuko Eguchi Yairi; Seiji Igi
Collaboration
Dive into Yutaka Ashikari's collaboration.
National Institute of Information and Communications Technology