2020 25th International Conference on Pattern Recognition (ICPR) | 2021

A Neural Lip-Sync Framework for Synthesizing Photorealistic Virtual News Anchors

 
 
 
 

Abstract


Lip sync has emerged as a promising technique for generating mouth movements from audio signals. However, synthesizing a high-resolution, photorealistic virtual news anchor remains challenging: existing methods suffer from unnatural appearance, poor visual consistency, and low processing efficiency. In this paper, we present a novel lip-sync framework specifically designed to produce high-fidelity virtual news anchors. A pair of Temporal Convolutional Networks learns the cross-modal sequential mapping from audio signals to mouth movements, followed by a neural rendering network that translates the synthetic facial map into a high-resolution, photorealistic appearance. This fully trainable framework provides end-to-end processing that outperforms traditional graphics-based methods in many low-delay applications. Experiments also show that the framework has advantages over modern neural-based methods in both visual appearance and efficiency.
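The core building block of a Temporal Convolutional Network is the causal dilated 1D convolution, in which the output at each time step depends only on current and past inputs — a natural fit for mapping streaming audio features to mouth movements in low-delay settings. The sketch below is a minimal, illustrative NumPy implementation of that operation, not the authors' actual network; the function name and single-channel setup are assumptions for clarity.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1D convolution (illustrative, single channel).

    The output at time t mixes x[t], x[t - d], ..., x[t - (K-1)*d],
    so no future frames are used -- the causality property a TCN
    relies on for low-delay sequence processing.
    x: (T,) input sequence; w: (K,) kernel; returns (T,) output.
    """
    T, K = len(x), len(w)
    pad = (K - 1) * dilation
    # left-pad with zeros so early outputs stay causal
    xp = np.concatenate([np.zeros(pad), x])
    y = np.empty(T)
    for t in range(T):
        # taps at times t, t - dilation, ..., t - (K-1)*dilation
        taps = xp[t + pad - np.arange(K) * dilation]
        y[t] = taps @ w
    return y

# stacking such layers with dilations 1, 2, 4, ... grows the receptive
# field exponentially, letting a TCN capture long audio context cheaply
x = np.arange(8, dtype=float)
y = causal_dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
# here each output is x[t] + x[t-2] (zero-padded at the start)
```

In a full TCN, each layer would also apply channel mixing, a nonlinearity, and a residual connection; the snippet isolates only the causal dilation mechanism.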

Pages 5286-5293
DOI 10.1109/ICPR48806.2021.9412187
Language English
Conference 2020 25th International Conference on Pattern Recognition (ICPR)
