Multi-Frame Video Enhancement Using Virtual Frame Synthesized in Time Domain
Abstract
Convolutional neural network based video enhancement can effectively reduce compression artifacts, improving both video coding efficiency and subjective quality. State-of-the-art methods usually adopt single-frame enhancement strategies. However, video frames are also highly correlated in the temporal domain, indicating that reconstructed frames at neighboring time instants can provide useful information for enhancing the quality of the current frame. To sufficiently utilize this temporal information, this paper proposes a spatial-temporal video enhancement method that introduces a virtual frame in the time domain. We first employ an adaptive network to predict a virtual frame of the current frame from its neighboring reconstructed frames; this virtual frame carries abundant temporal information. On the other hand, the current frame is also highly correlated in the spatial domain. Hence, we can combine spatial and temporal information for more extensive enhancement. To this end, we develop an enhancing network, structured in a progressive fusion manner, that combines the virtual frame and the current frame for frame fusion. Experimental results show that under the random access configuration, the proposed method obtains average PSNR gains of 0.38 dB and 0.06 dB over the H.265/HEVC anchor and the single-frame-based strategy, respectively. Moreover, it outperforms the state-of-the-art multi-frame quality enhancement network (MFQE) by 0.26 dB PSNR, while using only 12.2% of MFQE's parameters. The proposed method also significantly improves the subjective quality of compressed videos.
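The two-stage pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. The module names (VirtualFrameNet, ProgressiveFusionNet), layer counts, and channel widths below are illustrative assumptions, not the architecture from the paper; the sketch only shows the data flow of synthesizing a virtual frame from temporal neighbors and progressively fusing it with the current compressed frame.

import torch
import torch.nn as nn

class VirtualFrameNet(nn.Module):
    """Hypothetical synthesis network: predicts a virtual frame for the
    current time step from two neighboring reconstructed frames."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, prev_frame, next_frame):
        # Concatenate the two temporal neighbors along the channel axis.
        return self.body(torch.cat([prev_frame, next_frame], dim=1))

class ProgressiveFusionNet(nn.Module):
    """Hypothetical enhancement network: fuses the virtual frame with the
    current compressed frame in two progressive stages and predicts a residual."""
    def __init__(self, channels=32):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.stage2 = nn.Sequential(
            nn.Conv2d(channels + 3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, current, virtual):
        # Stage 1: joint features from the current frame and the virtual frame.
        f = self.stage1(torch.cat([current, virtual], dim=1))
        # Stage 2: re-inject the current frame so spatial detail is preserved.
        f = self.stage2(torch.cat([f, current], dim=1))
        # Residual learning: the network predicts a correction to its input.
        return current + self.out(f)

# Toy usage on random tensors shaped (batch, channels, height, width).
prev_f, cur_f, next_f = (torch.rand(1, 3, 64, 64) for _ in range(3))
virtual = VirtualFrameNet()(prev_f, next_f)
enhanced = ProgressiveFusionNet()(cur_f, virtual)
print(enhanced.shape)  # torch.Size([1, 3, 64, 64])

The residual connection in the fusion stage reflects a common design choice in enhancement networks, where predicting the correction rather than the full frame is typically easier to learn.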