Proceedings of the Brazilian Symposium on Multimedia and the Web | 2021

Quality Enhancement of Highly Degraded Music Using Deep Learning-Based Prediction Models for Lost Frequencies


Abstract


Audio quality degradation can have many causes; in musical applications, the resulting signal fragmentation may lead to a highly unpleasant listening experience. Restoration algorithms may be employed to reconstruct missing parts of the audio in a manner analogous to image reconstruction --- an approach called audio inpainting. Current state-of-the-art methods for audio inpainting cover limited scenarios, with well-defined gap windows and little variety of musical genres. In this work, we propose a Deep-Learning-based (DL-based) method for audio inpainting, accompanied by a dataset with random fragmentation conditions that approximate real impairment situations. The dataset was collected using tracks from different music genres to provide good signal variability. Our best model improved the quality of all musical genres, achieving an average PSNR of 12.9 dB, although it performed better for musical genres in which acoustic instruments are predominant.
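The abstract reports reconstruction quality as PSNR in dB. As a point of reference, the sketch below shows how PSNR is conventionally computed between a reference audio signal and its reconstruction; the function name and the peak value of 1.0 (for audio normalized to [-1, 1]) are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def psnr(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals.

    `peak` is the maximum possible signal amplitude (1.0 for audio
    normalized to [-1, 1]); this convention is an assumption, since the
    paper does not specify its PSNR formulation.
    """
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)  # mean squared reconstruction error
    if mse == 0.0:
        return float("inf")  # identical signals
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a 440 Hz tone vs. a lightly noise-corrupted copy of it
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440.0 * t)
degraded = clean + 0.01 * np.random.default_rng(0).standard_normal(t.shape)
print(f"PSNR: {psnr(clean, degraded):.1f} dB")
```

Higher PSNR means a closer match to the original signal, so the reported 12.9 dB average is a measure of how faithfully the model fills the fragmented regions.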

DOI 10.1145/3470482.3479635
Language English
Journal Proceedings of the Brazilian Symposium on Multimedia and the Web
