Journal of Ambient Intelligence and Humanized Computing | 2021

Music singing video teaching and timbre automation analysis based on dynamic programming and surveillance video summary algorithm

 

Abstract


Music is an indispensable art form for cultivating people's sentiments in everyday life, and recognizing the timbre of music can be automated with intelligent recognition technology. In musical instrument recognition, the timbres of different instruments must be distinguished. With modern technology, dynamic programming algorithms are used to organize the data in the system; when the time efficiency of a plain algorithm is insufficient, dynamic programming offers an optimization, although its own time efficiency remains limited. Solving a problem with dynamic programming requires designing different models for experimental analysis. In this paper, the time–frequency function, cepstrum function, sparse attributes, and probability are analyzed experimentally, confirming that the time–frequency-domain representation of an instrument's timbre can correctly identify the instrument. A deep learning model applied to time–frequency information can extract high-frequency content from the spectrum and use it to identify musical instruments. The input to the CNN has a significant effect on the abstract features it learns, and the auditory spectrogram captures the time–frequency information of the music. Under the same network structure, the accuracy obtained with MFCC and spectrogram inputs is significantly improved. Because the CNN shares weights, increasing the input size of the convolutional network does not significantly increase the training time, which greatly reduces the complexity of training.
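As a minimal, hedged illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below extracts MFCC features with librosa and classifies them with a small weight-sharing CNN in PyTorch. The names mfcc_features and InstrumentCNN, the number of MFCC coefficients, and the number of instrument classes are illustrative assumptions.

import librosa
import torch
import torch.nn as nn

def mfcc_features(path, sr=22050, n_mfcc=40):
    """Load an audio clip and return an (n_mfcc, frames) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class InstrumentCNN(nn.Module):
    """Small CNN over a time-frequency input; the shared convolution weights
    keep the parameter count independent of the input size, so larger inputs
    add computation but no extra trainable parameters."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),   # tolerates variable-length clips
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                   # x: (batch, 1, freq, time)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # Toy forward pass on a random MFCC-shaped tensor (batch, 1, 40, 130).
    model = InstrumentCNN(n_classes=10)
    print(model(torch.randn(2, 1, 40, 130)).shape)   # torch.Size([2, 10])

The adaptive pooling layer is one simple way to let clips of different durations share the same classifier head; the paper's actual network structure is not specified in the abstract.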

Pages 1-13
DOI 10.1007/S12652-021-03213-W
Language English
Journal Journal of Ambient Intelligence and Humanized Computing
