
Supervised Attention for Speaker Recognition


Abstract


The recently proposed self-attentive pooling (SAP) has shown good performance in several speaker recognition systems. In SAP systems, the context vector is trained end-to-end together with the feature extractor, where the role of the context vector is to select the most discriminative frames for speaker recognition. However, SAP underperforms the temporal average pooling (TAP) baseline in some settings, which implies that the attention is not learnt effectively in end-to-end training. To tackle this problem, we introduce strategies for training the attention mechanism in a supervised manner, which learn the context vector using classified samples. With our proposed methods, the context vector is better able to select the most informative frames. We show that our method outperforms existing methods in various experimental settings, including short-utterance speaker recognition, and achieves competitive performance against existing baselines on the VoxCeleb datasets.
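To make the contrast in the abstract concrete, below is a minimal PyTorch sketch of a standard SAP layer next to the TAP baseline. The class and variable names are illustrative, and the supervised training of the context vector proposed in the paper is not reproduced here; only the conventional end-to-end SAP formulation is shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentivePooling(nn.Module):
    """Self-attentive pooling (SAP) over frame-level features.

    Each frame is scored against a learnable context vector; the
    softmax-normalised scores weight the temporal average, so the
    layer can emphasise the most discriminative frames.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        self.projection = nn.Linear(feat_dim, feat_dim)
        # Context vector, trained end-to-end with the feature extractor.
        self.context = nn.Parameter(torch.randn(feat_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, frames, feat_dim) frame-level features
        keys = torch.tanh(self.projection(h))            # (B, T, D)
        scores = keys @ self.context                     # (B, T)
        weights = F.softmax(scores, dim=1)               # attention over frames
        return (weights.unsqueeze(-1) * h).sum(dim=1)    # (B, D) utterance embedding


def temporal_average_pooling(h: torch.Tensor) -> torch.Tensor:
    """TAP baseline: uniform weights over all frames."""
    return h.mean(dim=1)
```

In this formulation the only difference between SAP and TAP is the frame weighting: TAP fixes the weights to 1/T, while SAP learns them through the context vector, which is exactly the component the paper proposes to train with supervision.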

Pages 286-293
DOI 10.1109/SLT48900.2021.9383579
Language English
Journal 2021 IEEE Spoken Language Technology Workshop (SLT)
