Two Stages End-to-End Generative Network for Single Image Defogging
Graphical Abstract
Abstract
Single image defogging is a fundamental problem in computer vision. Existing methods fall mainly into two categories: prior-based methods and learning-based methods. In practice, however, prior-based methods may fail because of the strong assumptions they impose, and learning-based methods are hard to train because paired training data are extremely difficult to obtain. To avoid these problems, this paper proposes an end-to-end learning framework that removes fog from a single foggy image using an unpaired fog/fog-free dataset, adversarial discriminators, and a cycle-consistency loss function. Our method is built on the cycle-consistent generative adversarial network (CycleGAN) framework. Unlike the one-stage mapping strategy in CycleGAN, we use a two-stage mapping strategy in each module to strengthen the mapping function and recover a cleaner image. To preserve texture information, we introduce prior knowledge to constrain the generators. Synthetic and real-world foggy images serve as our test dataset. On these images, we use full-reference and no-reference image quality assessment metrics to compare the defogging methods. Experimental results demonstrate that the proposed method handles a wider variety of foggy scenes, and the generated results achieve better peak signal-to-noise ratio and structural similarity than traditional methods. Moreover, our results retain more vivid color information and more detailed edge textures.
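The cycle-consistency constraint mentioned above requires that mapping a foggy image to the fog-free domain and back reproduces the original image. As a minimal sketch (not the paper's implementation), the loss can be written as the L1 distance between an image and its round-trip reconstruction; the toy generators `G` and `F` below are hypothetical stand-ins for the learned defogging and fog-synthesis networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle-consistency loss: mean |F(G(x)) - x| over all pixels.

    x : input image array
    G : forward generator (e.g. foggy -> fog-free)
    F : backward generator (e.g. fog-free -> foggy)
    """
    return np.mean(np.abs(F(G(x)) - x))

# Hypothetical toy "generators": simple invertible pixel maps standing in
# for the real convolutional networks, just to exercise the loss.
G = lambda img: img * 0.9 + 0.05      # stand-in fog-removal map
F = lambda img: (img - 0.05) / 0.9    # its exact inverse

x = np.random.rand(8, 8, 3)           # a toy 8x8 RGB image in [0, 1]
loss = cycle_consistency_loss(x, G, F)
```

Because `F` exactly inverts `G` here, the loss is near zero; in training, this term is added to the adversarial losses of both discriminators so each generator pair is pushed toward mutually consistent mappings despite the lack of paired data.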