Abstract:
To address the limitations of existing retrieval algorithms, which fail to accurately integrate the geometric and semantic features of CAD models expressed through model-based definition (MBD) and therefore yield inferior retrieval accuracy, this paper proposes a retrieval algorithm based on multi-dimensional views. First, at the semantic level, engineering semantic features are mapped to textures based on state bits to construct surface manufacturing attribute views, overcoming the limitation of traditional shape descriptors that rely solely on geometric features. At the geometric level, surface depth views and wall thickness views are combined to jointly express geometric features, and the three types of views are fused into multi-dimensional views to represent the composite features of MBD models. Then, a view attention mechanism driven by engineering semantic rules is developed to achieve pose-consistent projection, improving the geometric invariance of the multi-dimensional views. Finally, a multi-level, multi-granularity similarity evaluation strategy is constructed based on ResNet and Transformer networks, and dual-branch training is employed to achieve feature learning that progresses from category discrimination to instance identification, enhancing cross-dimensional feature representation under different retrieval intents. Experimental results on pure-geometry CAD and MBD datasets show that the proposed algorithm significantly outperforms comparison methods on evaluation metrics such as recall and precision, retrieves MBD models that better match user intent, and provides strong support for part serialization and standardized design.