Mengda Xie, Peng Sun, Yubo Lang. Gray Region Extension via White Patch Assumption for Color Constancy[J]. Journal of Computer-Aided Design & Computer Graphics.

Gray Region Extension via White Patch Assumption for Color Constancy

Abstract: In color constancy research, the white patch algorithm estimates the illuminant color by assuming that the brightest pixel in an image is gray. However, due to dead pixels and noise, the color of the brightest pixel can deviate from gray, degrading the illuminant estimation performance of the white patch algorithm. To address this issue, we propose a non-learning-based color constancy method built on the extension of gray regions. First, gray pixels are identified according to the white patch assumption and taken as initial seed points. Next, an illuminant-invariant seed point determination criterion is constructed in the logarithmic RGB color space to guide the segmentation of gray regions around the initial seed points. The image is then segmented with a set of growth termination thresholds, yielding multiple gray regions; the illuminant estimate of each region is computed as a weighted combination of all pixel values within that region, which mitigates the influence of dead pixels and noise that can arise when a single pixel is used for illuminant estimation. Finally, the illuminant estimates from the multiple gray regions are combined by Nearest2 weighted fusion to produce the final illuminant estimate. Experimental results on three public color constancy datasets, ColorChecker, Cube+, and SimpleCube++, show that the proposed method achieves the lowest median angular error among existing non-learning-based color constancy algorithms, reducing the median angular error by 70% on average relative to the original white patch algorithm. Furthermore, owing to its insensitivity to camera sensor parameters, the proposed method also attains the lowest median and trimean angular errors in cross-dataset color constancy experiments, even when compared with learning-based algorithms.
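
The following is a minimal Python sketch meant only to make the pipeline in the abstract concrete, not to reproduce the paper's method. The function name estimate_illuminant, the threshold values, the log-chromaticity distance used as the seed/growth criterion, the brightness weighting inside each region, and the nearest-pair averaging that stands in for the Nearest2 fusion are all illustrative assumptions; the paper's actual definitions are not given in the abstract.

import numpy as np
from scipy import ndimage  # used here only for connected-component labeling

def estimate_illuminant(img, thresholds=(0.05, 0.10, 0.15)):
    """Sketch of gray-region extension for illuminant estimation.

    img: linear RGB image in [0, 1], shape (H, W, 3).
    thresholds: growth termination thresholds (illustrative values).
    Returns a unit-norm RGB illuminant estimate.
    """
    eps = 1e-6
    log_rgb = np.log(img + eps)
    # Log-chromaticity coordinates (log(R/G), log(B/G)); the paper's
    # illuminant-invariant seed criterion is not specified in the abstract,
    # so distance in this plane is used as a stand-in.
    chroma = np.stack([log_rgb[..., 0] - log_rgb[..., 1],
                       log_rgb[..., 2] - log_rgb[..., 1]], axis=-1)

    # White patch assumption: take the brightest pixel as the initial seed.
    brightness = img.sum(axis=-1)
    seed = np.unravel_index(np.argmax(brightness), brightness.shape)

    estimates = []
    for t in thresholds:
        # Grow a gray region: pixels whose log-chromaticity lies within t of the seed's.
        dist = np.linalg.norm(chroma - chroma[seed], axis=-1)
        mask = dist <= t
        # Keep only the connected component that contains the seed.
        labels, _ = ndimage.label(mask)
        region = labels == labels[seed]
        # Brightness-weighted average of the region's pixels as this region's estimate.
        w = brightness[region]
        e = (img[region] * w[:, None]).sum(axis=0) / (w.sum() + eps)
        estimates.append(e / (np.linalg.norm(e) + eps))

    # Stand-in for the Nearest2 fusion: average the two per-region estimates
    # that are closest to each other in angle.
    best, pair = None, None
    for i in range(len(estimates)):
        for j in range(i + 1, len(estimates)):
            ang = np.arccos(np.clip(np.dot(estimates[i], estimates[j]), -1.0, 1.0))
            if best is None or ang < best:
                best, pair = ang, (i, j)
    fused = estimates[pair[0]] + estimates[pair[1]] if pair else estimates[0]
    return fused / np.linalg.norm(fused)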

     
