Comput. Speech Lang. | 2021

Identification of related languages from spoken data: Moving from off-line to on-line scenario

Abstract

The accelerating flow of information encountered around the world today is leading many companies to deploy speech recognition systems that, to an ever-growing extent, process data on-line rather than off-line. These systems, e.g., for real-time 24/7 broadcast transcription, often work with input-stream data containing utterances in more than one language. Such multilingual data can be transcribed correctly in real time only if the language used is identified with small latency for each input frame. For this purpose, a novel approach to on-line spoken language identification is proposed in this work. Its development is documented through a series of consecutive experiments, starting in the off-line mode for 11 Slavic languages, continuing with artificially prepared multilingual data for the on-line scenario, and ending with real bilingual TV programs containing utterances in the mutually similar Czech and Slovak languages. The proposed scheme operates frame by frame: it takes in a multilingual stream of speech frames and outputs a stream of the corresponding language labels. It utilizes a weighted finite-state transducer as a decoder, which smooths the output of a language classifier fed by multilingual and augmented bottleneck features. An essential factor for accuracy is that these features, as well as the classifier itself, are based on deep neural network architectures that allow the modeling of long-term time dependencies. The obtained results show that our scheme determines the language spoken in real-world bilingual TV shows with an average latency of around 2.5 seconds and with an increase in word error rate of a mere 2.9% over the reference value of 18.1% obtained with manually prepared language labels.
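The smoothing stage described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the paper uses a weighted finite-state transducer as the decoder, whereas the sketch below substitutes an equivalent Viterbi search over per-frame language log-posteriors with a fixed language-switch penalty. The function name, array shapes, and penalty value are illustrative assumptions.

import numpy as np

def smooth_language_labels(frame_log_posteriors, switch_penalty=8.0):
    """Smooth per-frame language posteriors into a stable label stream.

    frame_log_posteriors: (T, L) array of log P(language | frame) from a
    neural classifier over bottleneck features; switch_penalty is a fixed
    cost (in log probability) for changing language between frames.
    Returns a length-T sequence of smoothed language indices.
    """
    T, L = frame_log_posteriors.shape
    best = np.full((T, L), -np.inf)      # best[t, l]: score of the best path ending in language l at frame t
    back = np.zeros((T, L), dtype=int)   # backpointers for traceback
    best[0] = frame_log_posteriors[0]
    for t in range(1, T):
        prev_best_lang = int(best[t - 1].argmax())
        for l in range(L):
            stay = best[t - 1, l]
            switch = best[t - 1, prev_best_lang] - switch_penalty
            if stay >= switch:
                best[t, l] = stay + frame_log_posteriors[t, l]
                back[t, l] = l
            else:
                best[t, l] = switch + frame_log_posteriors[t, l]
                back[t, l] = prev_best_lang
    labels = np.zeros(T, dtype=int)
    labels[-1] = int(best[-1].argmax())
    for t in range(T - 1, 0, -1):        # trace back the best label path
        labels[t - 1] = back[t, labels[t]]
    return labels

# Example use with synthetic data: 300 frames, 2 languages (e.g., Czech vs. Slovak)
logp = np.log(np.random.dirichlet([1, 1], size=300))
print(smooth_language_labels(logp)[:20])

The switch penalty plays the role of the transition weights in the WFST: a larger value yields fewer spurious language changes at the cost of a longer reaction time after a genuine switch.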

Volume 68
Pages 101180
DOI 10.1016/j.csl.2020.101180
Language English
Journal Comput. Speech Lang.
