2021 International Joint Conference on Neural Networks (IJCNN)

Audio-Visual Speech Separation with Visual Features Enhanced by Adversarial Training


Abstract


Audio-visual speech separation (AVSS) refers to separating an individual voice from an audio mixture of multiple simultaneous talkers by conditioning on visual features. Visual features play an important role in the AVSS task, which motivates us to extract more effective visual features to improve separation performance. In this paper, we propose a novel AVSS model that uses speech-related visual features to isolate the target speaker. Specifically, speech-related visual features are extracted in two steps. First, we extract visual features that contain speech-related information by learning a joint audio-visual representation. Second, we use adversarial training to further enhance the speech-related information in the visual features. We adopt a time-domain approach and build the audio-visual speech separation network with temporal convolutional network (TCN) blocks. Experiments on four audio-visual datasets, including GRID, TCD-TIMIT, AVSpeech, and LRS2, show that our model significantly outperforms previous state-of-the-art AVSS models. We also demonstrate that our model achieves excellent speech separation performance in noisy real-world scenarios. Moreover, to alleviate the performance degradation of AVSS models caused by missing video frames, we propose a training strategy that makes our model robust when video frames are partially missing. The demo, code, and supplementary materials are available at https://github.com/aispeech-lab/advr-avss.

Pages 1-8
DOI 10.1109/IJCNN52387.2021.9533660
Language English
Journal 2021 International Joint Conference on Neural Networks (IJCNN)
