Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tomoki Hayashi is active.

Publication


Featured research published by Tomoki Hayashi.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2015

Exploring multi-channel features for denoising-autoencoder-based speech enhancement

Shoko Araki; Tomoki Hayashi; Marc Delcroix; Masakiyo Fujimoto; Kazuya Takeda; Tomohiro Nakatani

This paper investigates a multi-channel denoising autoencoder (DAE)-based speech enhancement approach. In recent years, deep neural network (DNN)-based monaural speech enhancement and robust automatic speech recognition (ASR) approaches have attracted much attention due to their high performance. Although multi-channel speech enhancement usually outperforms single channel approaches, there has been little research on the use of multi-channel processing in the context of DAE. In this paper, we explore the use of several multi-channel features as DAE input to confirm whether multi-channel information can improve performance. Experimental results show that certain multi-channel features outperform both a monaural DAE and a conventional time-frequency-mask-based speech enhancement method.
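
As a rough, hypothetical illustration of the setup described above (not the paper's exact feature set or network configuration), the sketch below maps stacked multi-channel noisy log-magnitude spectra through a feed-forward denoising autoencoder to an estimate of the clean spectrum:

# Illustrative sketch (assumed configuration, not the paper's): a feed-forward
# denoising autoencoder that maps concatenated multi-channel noisy log-magnitude
# spectra to an estimate of the clean single-channel log-magnitude spectrum.
import torch
import torch.nn as nn

class MultiChannelDAE(nn.Module):
    def __init__(self, n_bins=257, n_channels=2, hidden=1024):
        super().__init__()
        in_dim = n_bins * n_channels  # concatenated per-channel features
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins),  # clean-spectrum estimate
        )

    def forward(self, noisy_multichannel):
        # noisy_multichannel: (batch, n_channels * n_bins)
        return self.net(noisy_multichannel)

model = MultiChannelDAE()
noisy = torch.randn(8, 2 * 257)                 # hypothetical 2-channel features
clean_estimate = model(noisy)                   # (8, 257)
loss = nn.functional.mse_loss(clean_estimate, torch.randn(8, 257))
loss.backward()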


IEEE Journal of Selected Topics in Signal Processing | 2017

Hybrid CTC/Attention Architecture for End-to-End Speech Recognition

Shinji Watanabe; Takaaki Hori; Suyoun Kim; John R. Hershey; Tomoki Hayashi

Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.
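
The joint-decoding idea can be summarized as a weighted combination of the two log-probabilities assigned to each partial hypothesis. The minimal sketch below (with a placeholder weight and placeholder scores, not values from the paper) ranks beam hypotheses by that combined score:

# Minimal sketch of joint scoring in hybrid CTC/attention decoding: a beam-search
# hypothesis is ranked by a weighted sum of its CTC and attention log-probabilities.
def joint_score(logp_ctc: float, logp_att: float, ctc_weight: float = 0.3) -> float:
    """Combine CTC and attention scores for one partial hypothesis."""
    return ctc_weight * logp_ctc + (1.0 - ctc_weight) * logp_att

# Rank a few hypothetical partial hypotheses in a beam.
beam = [("hello", -4.2, -3.9), ("hallow", -6.1, -4.0), ("hollow", -5.0, -5.5)]
ranked = sorted(beam, key=lambda h: joint_score(h[1], h[2]), reverse=True)
print(ranked[0][0])  # best-scoring hypothesis under the combined score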


IEEE Transactions on Audio, Speech, and Language Processing | 2017

Duration-Controlled LSTM for Polyphonic Sound Event Detection

Tomoki Hayashi; Shinji Watanabe; Tomoki Toda; Takaaki Hori; Jonathan Le Roux; Kazuya Takeda

This paper presents a new hybrid approach called duration-controlled long short-term memory (LSTM) for polyphonic sound event detection (SED). It builds upon a state-of-the-art SED method that performs frame-by-frame detection using a bidirectional LSTM recurrent neural network (BLSTM), and incorporates a duration-controlled modeling technique based on a hidden semi-Markov model. The proposed approach makes it possible to model the duration of each sound event precisely and to perform sequence-by-sequence detection without having to resort to thresholding, as in conventional frame-by-frame methods. Furthermore, to effectively reduce sound event insertion errors, which often occur under noisy conditions, we also introduce a binary-mask-based postprocessing that relies on a sound activity detection network to identify segments with any sound event activity, an approach inspired by the well-known benefits of voice activity detection in speech recognition systems. We conduct an experiment using the DCASE2016 task 2 dataset to compare our proposed method with typical conventional methods, such as nonnegative matrix factorization and standard BLSTM. Our proposed method outperforms the conventional methods both in an event-based evaluation, achieving a 75.3% F1 score and a 44.2% error rate, and in a segment-based evaluation, achieving an 81.1% F1 score, and a 32.9% error rate, outperforming the best results reported in the DCASE2016 task 2 Challenge.
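
As a toy stand-in for the duration modeling described above (the paper uses a hidden semi-Markov model, not this heuristic), the sketch below discards detected event segments shorter than a minimum duration, so that brief spurious activations do not become sound-event insertions:

# Toy stand-in for the duration-modeling idea (not the paper's HSMM decoding):
# smooth a frame-wise binary activity sequence by removing detected event
# segments shorter than a minimum duration.
from typing import List

def enforce_min_duration(activity: List[int], min_frames: int) -> List[int]:
    out = activity[:]
    start = None
    for i, a in enumerate(activity + [0]):     # sentinel closes a trailing run
        if a == 1 and start is None:
            start = i
        elif a == 0 and start is not None:
            if i - start < min_frames:         # run too short: remove it
                for j in range(start, i):
                    out[j] = 0
            start = None
    return out

print(enforce_min_duration([0, 1, 0, 1, 1, 1, 1, 0, 1], min_frames=3))
# -> [0, 0, 0, 1, 1, 1, 1, 0, 0]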


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2017

BLSTM-HMM hybrid system combined with sound activity detection network for polyphonic Sound Event Detection

Tomoki Hayashi; Shinji Watanabe; Tomoki Toda; Takaaki Hori; Jonathan Le Roux; Kazuya Takeda

This paper presents a new hybrid approach for polyphonic Sound Event Detection (SED) which combines a temporal structure modeling technique based on a hidden Markov model (HMM) with a frame-by-frame detection method based on a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN). The proposed BLSTM-HMM hybrid system makes it possible to model sound event-dependent temporal structures and also to perform sequence-by-sequence detection without having to resort to thresholding as in the conventional frame-by-frame methods. Furthermore, to effectively reduce insertion errors of sound events, which often occur under noisy conditions, we additionally implement a binary mask post-processing using a sound activity detection (SAD) network to identify segments with any sound event activity. We conduct an experiment using the DCASE 2016 task 2 dataset to compare our proposed method with typical conventional methods, such as non-negative matrix factorization (NMF) and a standard BLSTM-RNN. Our proposed method outperforms the conventional methods and achieves an F1-score of 74.9% (error rate of 44.7%) on the event-based evaluation and an F1-score of 80.5% (error rate of 33.8%) on the segment-based evaluation, most of which also outperform the best results reported in the DCASE 2016 task 2 challenge.
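
The binary-mask post-processing can be sketched as follows, assuming a hypothetical frame-wise "any event active" posterior from the SAD network; the array shapes and the 0.5 threshold are illustrative, not the paper's settings:

# Hedged sketch of the binary-mask post-processing idea: per-event frame
# activations are suppressed wherever the sound activity detection output
# indicates that no event is active at all.
import numpy as np

def apply_sad_mask(event_posteriors: np.ndarray, sad_posteriors: np.ndarray,
                   threshold: float = 0.5) -> np.ndarray:
    """event_posteriors: (n_events, n_frames); sad_posteriors: (n_frames,)."""
    sad_mask = (sad_posteriors >= threshold).astype(event_posteriors.dtype)
    return event_posteriors * sad_mask  # broadcast mask over all event classes

events = np.random.rand(11, 500)   # e.g. 11 event classes, 500 frames
sad = np.random.rand(500)          # frame-wise "any event active" posterior
masked = apply_sad_mask(events, sad)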


Journal of the Acoustical Society of America | 2016

Convolutional bidirectional long short-term memory hidden Markov model hybrid system for polyphonic sound event detection

Tomoki Hayashi; Shinji Watanabe; Tomoki Toda; Takaaki Hori; Jonathan Le Roux; Kazuya Takeda

In this study, we propose a polyphonic sound event detection method based on a hybrid system of Convolutional Bidirectional Long Short-Term Memory Recurrent Neural Network and Hidden Markov Model (CBLSTM-HMM). Inspired by the state-of-the-art approach of integrating neural networks with HMMs in speech recognition, the proposed method develops the hybrid system using a CBLSTM to estimate the HMM state output probability, making it possible to model sequential data while handling its duration change. The proposed hybrid system is capable of detecting a segment of each sound event without post-processing, such as smoothing of detection results over multiple frames, which is usually required in frame-wise detection methods. Moreover, we can easily apply it to a multi-label classification problem to achieve polyphonic sound event detection. We conduct experimental evaluations using the DCASE2016 task 2 dataset to compare the performance of the proposed method to that of conventional methods, such as non-negative matrix factorization (NMF).
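
A rough sketch of such a convolutional BLSTM front end, with assumed (not the paper's) layer sizes and with the HMM decoding itself omitted, might look like this:

# Rough sketch (assumed architecture): a convolutional BLSTM that outputs
# per-frame HMM-state posteriors for each sound-event class.
import torch
import torch.nn as nn

class CBLSTM(nn.Module):
    def __init__(self, n_mels=40, n_events=11, n_states=3, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                     # pool over frequency only
        )
        self.blstm = nn.LSTM(16 * (n_mels // 2), hidden,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_events * n_states)
        self.n_events, self.n_states = n_events, n_states

    def forward(self, spec):                          # spec: (batch, frames, mels)
        x = self.conv(spec.unsqueeze(1))              # (batch, 16, frames, mels/2)
        x = x.permute(0, 2, 1, 3).flatten(2)          # (batch, frames, 16*mels/2)
        x, _ = self.blstm(x)
        logits = self.out(x)                          # (batch, frames, events*states)
        return logits.view(x.size(0), x.size(1), self.n_events, self.n_states)

posteriors = CBLSTM()(torch.randn(2, 100, 40)).softmax(dim=-1)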


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | 2014

Noisy speech recognition using blind spatial subtraction array technique and deep bottleneck features

Norihide Kitaoka; Tomoki Hayashi; Kazuya Takeda

In this study, we investigate the effect of blind spatial subtraction arrays (BSSA) on speech recognition systems by comparing the performance of a method using Mel-Frequency Cepstral Coefficients (MFCCs) with a method using Deep Bottleneck Features (DBNF) based on Deep Neural Networks (DNN). Performance is evaluated under various conditions, including noisy, in-vehicle conditions. Although the performance of the DBNF-based system was much more degraded by noise than that of the MFCC-based system, BSSA greatly improved the performance of both methods, especially when matched-condition training of the acoustic models was employed. These results show the effectiveness of BSSA for speech recognition.
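
The deep bottleneck feature idea can be sketched as a DNN classifier with one narrow hidden layer whose activations are extracted as features for the recognizer; the layer sizes below are illustrative assumptions, not the configuration used in the paper:

# Hedged sketch of deep bottleneck features (DBNF): train a DNN classifier with
# a narrow hidden layer, then use that layer's activations as features.
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    def __init__(self, in_dim=440, bottleneck=42, n_targets=2000):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1024), nn.Sigmoid(),
            nn.Linear(1024, bottleneck),              # narrow bottleneck layer
        )
        self.classifier = nn.Sequential(
            nn.Sigmoid(), nn.Linear(bottleneck, n_targets),
        )

    def forward(self, x):
        return self.classifier(self.encoder(x))      # used only during training

    def extract_dbnf(self, x):
        return self.encoder(x)                        # bottleneck activations

model = BottleneckDNN()
frames = torch.randn(16, 440)                         # spliced feature context window
dbnf = model.extract_dbnf(frames)                     # (16, 42) bottleneck features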


Conference of the International Speech Communication Association (Interspeech) | 2017

Speaker-Dependent WaveNet Vocoder.

Akira Tamamori; Tomoki Hayashi; Kazuhiro Kobayashi; Kazuya Takeda; Tomoki Toda


European Signal Processing Conference (EUSIPCO) | 2015

Daily activity recognition based on DNN using environmental sound and acceleration signals

Tomoki Hayashi; Masafumi Nishida; Norihide Kitaoka; Kazuya Takeda


Conference of the International Speech Communication Association (Interspeech) | 2017

Statistical Voice Conversion with WaveNet-Based Waveform Generation.

Kazuhiro Kobayashi; Tomoki Hayashi; Akira Tamamori; Tomoki Toda


IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) | 2017

An investigation of multi-speaker training for WaveNet vocoder

Tomoki Hayashi; Akira Tamamori; Kazuhiro Kobayashi; Kazuya Takeda; Tomoki Toda

Collaboration


Dive into Tomoki Hayashi's collaborations.

Top Co-Authors

Shinji Watanabe, Mitsubishi Electric Research Laboratories
Kazuhiro Kobayashi, Nara Institute of Science and Technology
Takaaki Hori, Mitsubishi Electric Research Laboratories
Jonathan Le Roux, Mitsubishi Electric Research Laboratories