2020 25th International Conference on Pattern Recognition (ICPR) | 2021

Residual Learning of Video Frame Interpolation Using Convolutional LSTM

 
 

Abstract


Video frame interpolation aims to generate intermediate frames between the original frames. This produces videos with a higher frame rate and creates smoother motion. Many video frame interpolation methods first estimate the motion vectors between the input frames and then synthesize the intermediate frame based on that motion. However, these methods rely on the accuracy of the motion estimation step and fail to generate the interpolated frame correctly when the estimated motion vectors are inaccurate. Therefore, to avoid the uncertainties caused by motion estimation, this paper proposes a method that directly generates the intermediate frame. Since two consecutive frames are relatively similar, our method takes the average of these two frames and utilizes residual learning to learn the difference between the average of these frames and the ground-truth middle frame. In addition, our method uses Convolutional LSTMs and four input frames to better incorporate spatiotemporal information. This neural network can be easily trained end-to-end without difficult-to-obtain data such as optical flow. Our experimental results show that the proposed method performs favorably against other state-of-the-art frame interpolation methods.
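The residual formulation described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): the two center frames are averaged, and a learned network, a ConvLSTM over four input frames in the paper, predicts only the residual added on top of that average. Here a stub callable stands in for the network, and small NumPy arrays stand in for frames.

```python
import numpy as np

def interpolate_middle(frames, residual_net):
    """Sketch of the residual interpolation idea (hypothetical API).

    frames: four consecutive frames [I1, I2, I3, I4] as float arrays.
    residual_net: callable mapping the four frames to a residual image;
                  in the paper this role is played by a ConvLSTM network.
    Returns the predicted intermediate frame: avg(I2, I3) + residual.
    """
    i1, i2, i3, i4 = frames
    base = (i2 + i3) / 2.0        # average of the two center frames
    residual = residual_net(frames)  # learned difference from the ground truth
    return base + residual

# Toy usage with a zero-residual stub network
frames = [np.full((2, 2), v, dtype=float) for v in (0.0, 1.0, 3.0, 4.0)]
pred = interpolate_middle(frames, lambda f: np.zeros_like(f[0]))
```

With a zero residual the prediction is simply the average of the two center frames; training drives the network to output the residual that corrects this average toward the true middle frame.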

Pages 1499-1504
DOI 10.1109/ICPR48806.2021.9412470
Language English
