Pu Zedong, Ma Wei, Mi Qing. Efficient Spatio-Temporal Feature Extraction Recurrent Neural Network for Video Deblurring[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(11): 1720-1730. DOI: 10.3724/SP.J.1089.2023.19685

Efficient Spatio-Temporal Feature Extraction Recurrent Neural Network for Video Deblurring

  • Existing recurrent neural network-based video deblurring methods are limited in cross-frame feature aggregation and computational efficiency, so an efficient spatio-temporal feature extraction recurrent neural network is proposed. First, we combine a residual dense module with a channel attention mechanism to efficiently extract discriminative features from each frame of a given sequence. Then, a spatio-temporal feature enhancement and fusion module is proposed to select useful features from the highly redundant and interference-prone sequential features and integrate them into the features of the current frame. Finally, the enhanced features of the current frame are converted into the deblurred image by a reconstruction module. Quantitative and qualitative experiments on three public datasets, covering both synthetic and real blurred videos, show that the proposed network achieves excellent deblurring quality at a lower computational cost; in particular, it reaches a PSNR of 31.43 dB and an SSIM of 0.9201 on the GOPRO dataset. (An illustrative sketch of this pipeline is given below.)
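The abstract describes a recurrent pipeline: per-frame feature extraction with a residual dense module and channel attention, a spatio-temporal feature enhancement and fusion module that merges propagated sequence features into the current frame's features, and a reconstruction module that outputs the deblurred frame. The PyTorch sketch below only illustrates that overall structure; the module names, channel widths, the concatenation-plus-attention fusion, and the residual reconstruction are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a recurrent video-deblurring pipeline of the kind described
# in the abstract. All design details below are assumptions, not the paper's network.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions with a local residual and channel attention."""
    def __init__(self, channels, growth=32, num_layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(num_layers)
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)
        self.attn = ChannelAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + self.attn(self.fuse(torch.cat(feats, dim=1)))


class RecurrentDeblurNet(nn.Module):
    """Extract per-frame features, fuse them with the propagated state, reconstruct."""
    def __init__(self, channels=64):
        super().__init__()
        self.channels = channels
        self.extract = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            ResidualDenseBlock(channels),
        )
        # Stand-in for the spatio-temporal feature enhancement and fusion module:
        # concatenate current-frame features with the recurrent state, then re-weight.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )
        self.reconstruct = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W), processed recurrently one frame at a time.
        b, t, _, h, w = frames.shape
        state = frames.new_zeros(b, self.channels, h, w)
        outputs = []
        for i in range(t):
            feat = self.extract(frames[:, i])
            state = self.fuse(torch.cat([feat, state], dim=1))
            # Residual reconstruction: sharp frame = blurry frame + predicted correction.
            outputs.append(frames[:, i] + self.reconstruct(state))
        return torch.stack(outputs, dim=1)


if __name__ == "__main__":
    net = RecurrentDeblurNet()
    clip = torch.randn(1, 5, 3, 64, 64)  # toy 5-frame clip
    print(net(clip).shape)               # torch.Size([1, 5, 3, 64, 64])
```

The recurrent state here simply carries the previous frame's fused features forward; the paper's fusion module is described as selecting features from the sequence before integrating them, which the concatenation-plus-attention stand-in only approximates.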
