Self-Calibrated Knowledge Distillation for Low-Light Image Enhancement
Abstract
Low-light image enhancement is an important research area in computer vision. To substantially reduce model parameters while effectively suppressing noise and enhancing image detail, this paper proposes a Self-Calibrated Knowledge Distillation method for low-light image enhancement (SCKD). First, heterogeneous knowledge distillation is combined with Retinex theory to form the Retinex-KD framework, which standardizes the teacher network's distillation conditions and the student model's enhancement steps, guiding the teacher model to transfer brightness, color, and texture details to the student model and thereby improving the detail of the enhanced images. Second, a lightweight low-light enhancement student network, LDFC-Net, is proposed, comprising a Light-Guided Calibration (LGC) module and a Light Interference Suppression Calibration (LISC) module. The LGC module recovers the illumination of the low-light image and enhances detail to produce an illumination-estimation feature map, while the LISC module suppresses noise in this feature map, yielding more realistic results. Finally, a dedicated distillation loss function is designed and combined with Retinex-KD to distill LDFC-Net, effectively reducing the student model's parameter count while preserving enhancement performance. Experimental results on the LOL-v1 and LOL-v2-real datasets show that, compared with mainstream lightweight methods, SCKD improves PSNR and SSIM by an average of 1.862 dB and 4.15%, respectively, and requires only 50 K parameters and 2.98 G of computation to achieve performance comparable to non-lightweight mainstream methods, significantly improving efficiency.
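As a minimal sketch of the pipeline summarized above: the abstract does not specify the internals of the LGC and LISC modules or the exact form of the distillation loss, so the layer choices, channel widths, and loss weighting below are illustrative assumptions rather than the authors' implementation. The student follows the Retinex view that a low-light image is the product of reflectance and illumination, so the enhanced output is obtained by dividing the input by a calibrated illumination estimate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LGC(nn.Module):
    """Light-Guided Calibration (illustrative): estimates an illumination
    feature map from the low-light input."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)  # illumination estimate in (0, 1)


class LISC(nn.Module):
    """Light Interference Suppression Calibration (illustrative): refines the
    illumination estimate to suppress noise amplified during enhancement."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, illum):
        # Residual correction, clamped away from zero to keep division stable.
        return torch.clamp(illum + self.net(illum), 1e-3, 1.0)


class LDFCNet(nn.Module):
    """Student network sketch: Retinex-style enhancement I_enh = I_low / L."""
    def __init__(self):
        super().__init__()
        self.lgc, self.lisc = LGC(), LISC()

    def forward(self, low):
        illum = self.lisc(self.lgc(low))           # calibrated illumination L
        return torch.clamp(low / illum, 0.0, 1.0)  # brightened output


def distillation_loss(student_out, teacher_out, gt, alpha=0.5):
    """Illustrative combination of a supervised term and a distillation term;
    the paper's actual loss design is not given in the abstract."""
    return F.l1_loss(student_out, gt) + alpha * F.l1_loss(student_out, teacher_out)
```

The division by the illumination estimate is what ties the sketch to Retinex theory; in the paper's framework, the teacher's outputs additionally supervise the student through the distillation term so that brightness, color, and texture knowledge is transferred despite the student's much smaller parameter budget.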