Abstract:
In color constancy research, the white patch algorithm assumes that the brightest pixel in an image is gray and uses it to estimate the illuminant color. However, dead pixels and noise can cause the color of the brightest pixel to deviate from gray, degrading the illuminant estimates of the white patch algorithm. To address this issue, we propose a non-learning-based color constancy method built on the extension of gray regions. First, gray pixels are identified under the white patch assumption and taken as initial seed points. Next, an illuminant-invariant seed-point determination criterion is constructed in the logarithmic RGB color space, which guides the growth of gray regions from the initial seed points. A set of growth termination thresholds is then used to segment the image, producing multiple gray regions. The illuminant estimate for each region is computed as a weighted sum of all pixel values within that region, mitigating the adverse impact of dead pixels and noise that arises when a single pixel is relied on for illuminant estimation. Finally, the illuminant estimates from the individual gray regions are fused with a Nearest2 weighted fusion scheme to yield the final illuminant estimate. Experimental results on three public color constancy datasets, namely ColorChecker, Cube+, and SimpleCube++, demonstrate that our method achieves the lowest median angular error among existing non-learning-based color constancy algorithms, reducing the median angular error by 70% on average compared to the original white patch algorithm. Furthermore, owing to its insensitivity to camera sensor parameters, our method also exhibits the lowest median and trimean angular errors in cross-dataset color constancy experiments when compared with learning-based algorithms.
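The pipeline summarized above (seed selection, illuminant-invariant region growing in log-RGB space, brightness-weighted illuminant estimation, angular-error evaluation) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the 4-neighbour growth rule, the single-seed growth, and the brightness weighting are all simplifying assumptions.

```python
import numpy as np
from collections import deque

def log_chroma(pixel, eps=1e-6):
    """Log-RGB chromaticity of one pixel. Differences of log channels cancel a
    per-channel illuminant scaling, so comparing them is illuminant-invariant."""
    r, g, b = np.log(pixel + eps)
    return np.array([r - g, b - g])

def grow_gray_region(img, seed, thresh):
    """Grow a gray region from `seed` by BFS: a 4-neighbour joins the region
    when its log-chromaticity distance to the seed is below `thresh`
    (a stand-in for the paper's growth termination threshold)."""
    h, w, _ = img.shape
    ref = log_chroma(img[seed])
    visited = np.zeros((h, w), bool)
    visited[seed] = True
    queue, region = deque([seed]), [seed]
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                visited[ny, nx] = True
                if np.linalg.norm(log_chroma(img[ny, nx]) - ref) < thresh:
                    queue.append((ny, nx))
                    region.append((ny, nx))
    return region

def estimate_illuminant(img, region):
    """Weighted sum of region pixels (brightness used as the weight here),
    normalised to a unit-length illuminant estimate."""
    pix = np.array([img[p] for p in region])
    w = pix.sum(axis=1)                      # per-pixel brightness weight
    est = (pix * w[:, None]).sum(axis=0)
    return est / np.linalg.norm(est)

def angular_error(e1, e2):
    """Angular error in degrees between two illuminant vectors."""
    cos = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

On a synthetic image of gray surfaces under a colored illuminant, growing from the brightest pixel recovers the illuminant almost exactly while excluding chromatic patches, because every gray pixel shares the same log chromaticity regardless of its reflectance.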