Abstract:
When imaging distant targets, the frames of a video sequence are degraded by atmospheric turbulence, which causes geometric distortion and blurring. To restore turbulence-degraded images in video sequences, we propose an algorithm that combines lucky imaging with generative adversarial networks. The algorithm applies spatial lucky imaging to select lucky regions from a limited set of video frames, which are then sorted and stitched to remove the geometric distortion induced by atmospheric turbulence. The DeblurGAN-v2 model is then applied to further enhance image quality. The proposed method is evaluated on real turbulence-degraded images captured by a high-speed camera, and is compared against image resampling, grayscale transformation, Butterworth high-pass filtering, the MPRNet model, and the DeblurGAN model. Objective evaluation metrics are used to assess the results of the different algorithms. Experimental results show that, relative to the compared methods, the proposed method improves the Brenner gradient function, Laplacian gradient function, SMD, entropy function, energy gradient function, PIQE, and BRISQUE metrics by 194%, 58%, 84%, 7%, 55%, 74%, and 163%, respectively. Subjectively, the combination of lucky imaging and generative adversarial networks markedly improves the visual quality of the images and effectively reduces blurring and geometric distortion.
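The lucky-region selection step scores candidate patches with a sharpness criterion and keeps the sharpest. Below is a minimal NumPy sketch of two of the focus measures named above (the Brenner gradient and a Laplacian energy) driving such a selection; the function names and this particular implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def brenner(img):
    # Brenner gradient: sum of squared intensity differences
    # between pixels two rows apart (a classic focus measure).
    d = img[2:, :] - img[:-2, :]
    return float(np.sum(d * d))

def laplacian_energy(img):
    # Sum of absolute 4-neighbour Laplacian responses over the
    # interior of the image; high values indicate sharp detail.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.sum(np.abs(lap)))

def select_lucky_patch(patches, metric=brenner):
    # Return the index of the sharpest patch under the chosen metric,
    # as a stand-in for per-region lucky-frame selection.
    return int(np.argmax([metric(p.astype(np.float64)) for p in patches]))
```

In a full pipeline, this scoring would run per spatial region across the frame stack, and the winning patches would then be sorted and stitched into a single geometrically stable image before deblurring.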