
Unified Multi-Category 3D Object Generation from Sketches

Abstract: Existing deep neural networks for sketch-based 3D modeling are typically trained separately for each object category and therefore generalize poorly, while jointly training a single network on multiple categories tends to cause category confusion and loss of shape detail. Building on an unsupervised sketch-to-model generation framework, we propose MC-SketchModNet, a 3D model generation network that supports unified training over multiple object categories. First, a category embedding of the input sketch is fused with the sketch features and fed into the generation network, establishing an association between the generated 3D model and its object category. Second, additional supervision views are introduced during training, providing more constraints on the 3D shape recovered from a coarse sketch and eliminating category ambiguity in the generated models. Experimental results on synthetic sketches from ShapeNet and on freehand sketches show that MC-SketchModNet effectively removes category confusion and produces higher-quality 3D models with richer details, improving voxel IoU by 4.91% over the multi-category-trained baseline SoftRas. Moreover, the introduced category embedding supports interactive control of the target category, enabling category-guided model generation when the input sketch is semantically ambiguous.
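The abstract describes fusing a category embedding with sketch features before they enter the generation network. The mechanism can be sketched as follows; note that the embedding width, feature width, class count, and fusion by concatenation are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CATEGORIES = 13    # hypothetical number of object classes
EMBED_DIM = 32         # hypothetical category-embedding width
SKETCH_FEAT_DIM = 512  # hypothetical sketch-feature width

# Learnable category-embedding table; random weights stand in for
# trained parameters in this sketch.
category_table = rng.standard_normal((NUM_CATEGORIES, EMBED_DIM))

def fuse(sketch_feature, category_id):
    """Concatenate a sketch feature with its category embedding, so a
    generator conditioned on the result can associate shape with class."""
    return np.concatenate([sketch_feature, category_table[category_id]])

fused = fuse(rng.standard_normal(SKETCH_FEAT_DIM), category_id=3)
print(fused.shape)  # (544,)
```

Conditioning the generator on an explicit class vector is what later allows the interactive category control mentioned in the abstract: the same sketch feature with a different `category_id` steers generation toward a different class.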

     
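Voxel IoU, the metric behind the reported 4.91% improvement, is the ratio of the intersection to the union of two binary occupancy grids. A minimal implementation (the 0.5 occupancy threshold is an assumption):

```python
import numpy as np

def voxel_iou(pred, gt, thresh=0.5):
    """Voxel intersection-over-union between a predicted occupancy grid
    and a ground-truth grid, both binarized at `thresh`."""
    p = pred > thresh
    g = gt > thresh
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union > 0 else 1.0

# Toy 2x2x2 grids: pred occupies one z-slice, gt occupies one y-slice.
pred = np.zeros((2, 2, 2)); pred[0] = 1.0
gt = np.zeros((2, 2, 2)); gt[:, 0] = 1.0
print(voxel_iou(pred, gt))  # intersection 2 / union 6 ≈ 0.333
```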

