Citation: Pan Chengwei, Zhang Jianguo, Chen Yisong, Wang Guoping. Automatic Segmentation of Foreground Objects from Multiple Images Based on Consistency Analysis[J]. Journal of Computer-Aided Design & Computer Graphics, 2017, 29(6): 1028-1036.


Automatic Segmentation of Foreground Objects from Multiple Images Based on Consistency Analysis

Abstract: Segmentation of foreground objects in images is useful in many applications. Traditional methods obtain the initial region of the foreground object through manual interaction, which becomes tedious for datasets containing many images. To overcome this problem, an automatic method based on consistency analysis is proposed to segment foreground objects from multiple images. First, views are transferred between images with the help of prior knowledge of the three-dimensional scene. Then, initial foreground and background labels are obtained by a per-pixel consistency analysis. Finally, an energy function is constructed from these initial labels and optimized iteratively to obtain an accurate contour of the foreground object. Experimental results show that the proposed method can accurately locate foreground objects in the images, extract their contours, and produce segmentation results suitable for accurate 3D reconstruction.
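
Below is a minimal, illustrative sketch of this kind of pipeline in Python with OpenCV, not the paper's actual implementation: it assumes the background is roughly planar so that a single homography H stands in for the view transfer driven by 3D-scene priors, seeds probable foreground/background labels from per-pixel color consistency between a reference view and a warped neighboring view, and refines the labels with OpenCV's grabCut as a stand-in for the paper's energy formulation. The file names, the consistency threshold, and the identity-homography placeholder are hypothetical.

import cv2
import numpy as np

def consistency_seed_mask(img_ref, img_src, H, thresh=30.0):
    """Warp a neighboring view into the reference view and mark pixels
    whose colors disagree as probable foreground (consistency analysis
    under a planar-background simplification)."""
    h, w = img_ref.shape[:2]
    warped = cv2.warpPerspective(img_src, H, (w, h))

    # Per-pixel color distance between the reference view and the warped view.
    diff = np.linalg.norm(img_ref.astype(np.float32) - warped.astype(np.float32), axis=2)

    # Consistent pixels -> probable background; inconsistent pixels -> probable foreground.
    mask = np.full((h, w), cv2.GC_PR_BGD, dtype=np.uint8)
    mask[diff > thresh] = cv2.GC_PR_FGD
    return mask

def refine_with_grabcut(img_ref, mask, n_iter=5):
    """Iteratively refine the seed labels by energy minimization (GrabCut)."""
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_ref, mask, None, bgd_model, fgd_model, n_iter,
                cv2.GC_INIT_WITH_MASK)
    # Collapse the four GrabCut labels into a binary foreground mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return fg.astype(np.uint8)

if __name__ == "__main__":
    ref = cv2.imread("view_ref.jpg")   # hypothetical file names
    src = cv2.imread("view_other.jpg")
    H = np.eye(3)                      # placeholder; estimate from matched features in practice
    seed = consistency_seed_mask(ref, src, H)
    fg_mask = refine_with_grabcut(ref, seed)
    cv2.imwrite("foreground_mask.png", fg_mask)

In practice the homography (or a richer per-pixel view transfer derived from the reconstructed scene geometry) would be estimated from feature matches across the image set, and the consistency test would aggregate evidence from several neighboring views rather than a single pair before the iterative refinement step.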

     
