ArXiv | 2021

AVLnet: Learning Audio-Visual Language Representations from Instructional Videos


Abstract


Current methods for learning visually grounded language from videos often rely on time-consuming and expensive data collection, such as human-annotated textual summaries or machine-generated automatic speech recognition transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. We circumvent the need for annotation and instead learn audio-visual language representations directly from randomly segmented video clips and their raw audio waveforms. We train AVLnet on publicly available instructional videos and evaluate our model on video clip and language retrieval tasks on three video datasets. Our proposed model outperforms several state-of-the-art text-video baselines by up to 11.8% on a video clip retrieval task, despite operating on raw audio instead of manually annotated text captions. Further, we show AVLnet is capable of integrating textual information, increasing its modularity and improving performance by up to 20.3% on the video clip retrieval task. Finally, we analyze AVLnet's learned representations, showing our model has learned to relate visual objects with salient words and natural sounds.
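To make the idea of a shared audio-visual embedding space concrete, below is a minimal, illustrative sketch (not the authors' implementation) in which pooled audio and video features are projected into a common space and trained with a generic symmetric InfoNCE-style contrastive loss; the feature dimensions, projection heads, and loss are assumptions standing in for the paper's actual objective.

```python
# Minimal sketch: shared audio-visual embedding with a contrastive objective.
# All dimensions and the loss choice are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbeddingModel(nn.Module):
    def __init__(self, audio_dim=128, video_dim=2048, embed_dim=512):
        super().__init__()
        # Project each modality into the same embedding space.
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, embed_dim), nn.ReLU(),
                                        nn.Linear(embed_dim, embed_dim))
        self.video_proj = nn.Sequential(nn.Linear(video_dim, embed_dim), nn.ReLU(),
                                        nn.Linear(embed_dim, embed_dim))

    def forward(self, audio_feats, video_feats):
        # L2-normalize so the dot product is a cosine similarity.
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        return a, v

def contrastive_loss(a, v, temperature=0.07):
    # Audio and video from the same clip are positives; other clips in the
    # batch serve as negatives. This is a stand-in for the paper's loss.
    logits = a @ v.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for pooled audio/video encodings.
model = SharedEmbeddingModel()
audio = torch.randn(8, 128)    # e.g., pooled audio-CNN features
video = torch.randn(8, 2048)   # e.g., pooled 2D/3D visual-CNN features
a, v = model(audio, video)
loss = contrastive_loss(a, v)
```

At retrieval time, clips and queries are ranked by cosine similarity in this shared space, which is what the video clip and language retrieval evaluations above measure.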

Volume abs/2006.09199
DOI 10.21437/interspeech.2021-1312
Language English
Journal ArXiv
