Neural Networks: the official journal of the International Neural Network Society | 2021

A dual-stream deep attractor network with multi-domain learning for speech dereverberation and separation


Abstract


Deep attractor networks (DANs) perform speech separation using discriminative embeddings and speaker attractors. Compared with methods based on permutation invariant training (PIT), DANs define a deep embedding space and deliver a more detailed representation of each time-frequency (T-F) bin. However, DANs have been observed to yield only limited improvement in signal quality when deployed directly in reverberant environments. Following the success of time-domain separation networks on clean mixture speech, we propose a dual-stream DAN with multi-domain learning that efficiently performs both dereverberation and separation with variable numbers of speakers. The speaker encoding stream (SES) of the dual-stream DAN is trained to model speaker information in an embedding space defined by Fourier transform kernels. The speech decoding stream (SDS) accepts speaker attractors from the SES and learns to estimate the early component of the sound in the time domain. In addition, clustering losses are used to bridge the gap between the oracle and the estimated attractors. Experiments were conducted on the Spatialized Multi-Speaker Wall Street Journal (SMS-WSJ) dataset. After comparison with the anechoic and reverberant signals, the early component was chosen as the learning target. The results show that the dual-stream DAN achieves a scale-invariant source-to-distortion ratio (SI-SDR) improvement of 9.8/7.5 dB on the reverberant 2-/3-speaker evaluation sets, exceeding the baseline DAN and the convolutional time-domain audio separation network (Conv-TasNet) by 2.0/0.7 dB and 1.0/0.5 dB, respectively.
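Two quantities in the abstract are concrete enough to sketch: the speaker attractors, which in the standard deep attractor formulation are mask-weighted means of the T-F embeddings, and the SI-SDR metric used to report the results. The NumPy sketch below is illustrative only; it follows the standard published definitions rather than the authors' implementation, and the function names (estimate_attractors, si_sdr) are chosen here for clarity.

```python
import numpy as np

def estimate_attractors(embeddings, membership):
    """Speaker attractors as mask-weighted means of T-F embeddings.

    Standard deep attractor formulation: each attractor is the average
    of the embeddings of the T-F bins assigned to that speaker.

    embeddings: (T*F, D) embedding vectors, one per T-F bin.
    membership: (T*F, C) soft or binary source-membership weights.
    Returns:    (C, D) attractor vectors, one per speaker.
    """
    num = membership.T @ embeddings        # (C, D) weighted embedding sums
    den = membership.sum(axis=0)[:, None]  # (C, 1) total weight per speaker
    return num / np.maximum(den, 1e-8)

def si_sdr(estimate, reference):
    """Scale-invariant source-to-distortion ratio (dB).

    The reference is rescaled by its projection onto the estimate, so
    the score ignores any global gain applied to the estimate.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference      # scaled target component
    residual = estimate - target    # distortion component
    return 10.0 * np.log10(np.dot(target, target) / np.dot(residual, residual))
```

An SI-SDR improvement such as the 9.8 dB figure above is then si_sdr(estimate, reference) - si_sdr(mixture, reference), averaged over the evaluation set.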

Volume 141
Pages 238-248
DOI 10.1016/j.neunet.2021.04.023
Language English
Journal Neural Networks: the official journal of the International Neural Network Society
