
Atmospheric Turbulence Image Restoration Based on Lucky Imaging and Generative Adversarial Networks

Lyu Pin, Deng Dongping, Shi Tiezhu, Wang Mengdi, Liu Qian, Tian Yu, Zhang Zihong, Zeng Yun, Wu Guofeng

Citation: Lyu Pin, Deng Dongping, Shi Tiezhu, Wang Mengdi, Liu Qian, Tian Yu, Zhang Zihong, Zeng Yun, Wu Guofeng. Atmospheric Turbulence Image Restoration Based on Lucky Imaging and Generative Adversarial Networks[J]. Journal of Computer-Aided Design & Computer Graphics, 2025, 37(1): 157-166. DOI: 10.3724/SP.J.1089.2023-00035

Funding:

Shenzhen Science and Technology Program (ZDSYS20210623101800001)

Guangdong Provincial Special Fund for Science and Technology Innovation Strategy (Guangdong-Hong Kong-Macao Joint Laboratory) (2020B1212030009).

Article information
    About the authors:

    Lyu Pin (1998—), male, master's student; research interests: computer vision and digital image processing. Deng Dongping (1999—), male, master's student; research interests: semantic segmentation and digital image processing. Shi Tiezhu (1987—), male, Ph.D., associate professor, master's supervisor, corresponding author; research interest: intelligent remote sensing image processing. Wang Mengdi (1998—), female, master's student; research interest: urban ecological remote sensing. Liu Qian (1998—), male, master's student; research interest: soil remote sensing. Tian Yu (1999—), male, master's student; research interest: polarization properties of light. Zhang Zihong (2000—), female, master's student; research interest: quantitative remote sensing inversion. Zeng Yun (2001—), male, master's student; research interest: geographically weighted regression based on machine learning. Wu Guofeng (1969—), male, Ph.D., professor, doctoral supervisor; research interests: quantitative remote sensing inversion and remote sensing image processing.

  • CLC number: TP391.41


  • Abstract: When capturing distant targets, video sequence images are distorted and blurred by atmospheric turbulence. To restore video frames degraded by atmospheric turbulence, we propose a method that combines lucky imaging with a generative adversarial network. Spatial-domain lucky imaging is used to select lucky regions from a limited video sequence; these regions are sorted, stitched, and superimposed to remove the geometric distortion induced by atmospheric turbulence. On this basis, the DeblurGAN-v2 model is introduced to further improve image quality. Real turbulence-degraded images captured by a high-speed camera are used as the experimental data, and the proposed method is compared with image resampling, grayscale transformation, Butterworth high-pass filtering, the MPRNet model, and the DeblurGAN model; the results of the different methods are assessed with objective evaluation metrics. The experimental results show that, compared with the other methods, the proposed method improves the Brenner gradient function, Laplacian gradient function, gray-level difference function (SMD), entropy function (Entropy), energy gradient function (Energy), PIQE, and Brisque indicators by 194%, 58%, 84%, 7%, 55%, 74%, and 163%, respectively. Subjectively, the combination of lucky imaging and a generative adversarial network significantly improves the visual quality of the images and effectively reduces blurring and geometric distortion.
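To make the spatial-domain lucky-imaging step in the abstract more concrete, the Python sketch below shows one minimal way such region selection and stacking could be implemented: co-located regions across the frame sequence are scored with the Brenner gradient, the sharpest fraction is kept, and the surviving regions are averaged into a fused image. The tile size, the 10% selection ratio, and the helper names brenner_sharpness and lucky_fusion are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of spatial-domain lucky imaging (illustrative, not the
# authors' implementation). Frames are assumed to be aligned 8-bit grayscale
# NumPy arrays of identical shape.
import numpy as np

def brenner_sharpness(patch: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels two columns apart."""
    diff = patch[:, 2:].astype(np.float64) - patch[:, :-2].astype(np.float64)
    return float(np.sum(diff ** 2))

def lucky_fusion(frames, tile=64, keep_ratio=0.1):
    """For each tile position, rank the co-located regions of all frames by
    sharpness, keep the sharpest fraction (the "lucky" regions), and average them."""
    h, w = frames[0].shape
    keep = max(1, int(len(frames) * keep_ratio))
    fused = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            regions = [f[y:y + tile, x:x + tile] for f in frames]
            scores = [brenner_sharpness(r) for r in regions]
            best = np.argsort(scores)[-keep:]  # indices of the sharpest regions
            fused[y:y + tile, x:x + tile] = np.mean(
                [regions[i].astype(np.float64) for i in best], axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

The fused image produced this way would then be passed to a pretrained deblurring generator such as DeblurGAN-v2, mirroring the second stage described in the abstract; loading and running that model is outside the scope of this sketch.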

Publication history
  • Received: 2023-04-19
  • Revised: 2023-12-03
