
Location-Sensitive Embedding for Knowledge Graph Embedding

  • Abstract: Knowledge graph embedding maps discrete symbolic entities and relations into a continuous low-dimensional space, so that downstream tasks (e.g., inference and knowledge graph completion) can be carried out algebraically. Broadly speaking, knowledge graph embedding methods fall into two mainstream families: translational distance models and semantic matching models. Starting from the graph structure of knowledge graphs, the learning bottleneck of translational distance models is analyzed, leading to the conclusion that their weakness lies in not distinguishing the different semantics an entity should carry in the head position versus the tail position. Accordingly, a location-sensitive embedding (LSE) model is proposed. Unlike existing models, LSE applies a relation-specific mapping only to the head entity, and it models a relation as a general linear transformation rather than merely a translation. Theoretical analysis guarantees the representation capacity of LSE and reveals its connections to existing models. To improve training and inference efficiency, a simplified variant, LSEd, is further proposed, which restricts the linear transformation to a diagonal matrix. The proposed models are evaluated on the link prediction task over four large-scale general-purpose knowledge graph datasets. Experimental results show that they achieve performance that is the best or on par with the current state-of-the-art models.
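The abstract's core mechanism (a relation-specific mapping applied only to the head entity, generalized from a translation to a linear transformation, with LSEd restricting that transformation to a diagonal matrix) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the distance-style score -||M_r h - t||, the function names, and the negative-norm convention are assumptions made only to make the idea concrete.

```python
# Minimal sketch (not the paper's code) of the scoring idea described in the
# abstract: LSE applies a relation-specific linear map ONLY to the head entity,
# and LSEd restricts that map to a diagonal matrix for efficiency.
# The exact score form (distance vs. similarity, norms, margins) is assumed here.
import numpy as np

def score_lse(h, t, M_r):
    """LSE reading: the relation acts as a general linear transformation on the
    head embedding; higher score means the triple (h, r, t) is more plausible."""
    return -np.linalg.norm(M_r @ h - t)

def score_lsed(h, t, d_r):
    """LSEd reading: M_r is restricted to a diagonal matrix, stored as a vector
    d_r, so the relation mapping becomes an element-wise product."""
    return -np.linalg.norm(d_r * h - t)

# Toy usage with k-dimensional embeddings.
k = 8
rng = np.random.default_rng(0)
h, t = rng.normal(size=k), rng.normal(size=k)
M_r = rng.normal(size=(k, k))   # full relation matrix (LSE)
d_r = rng.normal(size=k)        # diagonal relation vector (LSEd)
print(score_lse(h, t, M_r), score_lsed(h, t, d_r))
```

Under this reading, the diagonal restriction in LSEd reduces the per-triple cost of the relation mapping from O(k^2) to O(k), which matches the efficiency motivation stated in the abstract.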
