A Commonsense Question Answering Model Based on Context Awareness and Knowledge Enhancement


  • Abstract: Commonsense question answering aims to have machines predict correct answers by simulating the way humans reason, but because the required background knowledge is not stated in the question itself, the task poses a major challenge to traditional machine learning methods. To address the problem of fusing multiple sources of background knowledge, a random walk is used to retrieve two-hop related entities of the answer entities from the commonsense knowledge graph ConceptNet. These entities are fused with the given question-answer text, and the enhanced question is fed into the pre-trained model RoBERTa, where context-aware attention strengthens the semantic representation between questions and answers. Experimental results show that, with external knowledge introduced effectively, the CAARK model performs well on the CommonsenseQA dataset, providing a new paradigm for solving commonsense question answering problems.


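The retrieval step described in the abstract, sampling entities within two hops of an answer entity from ConceptNet via random walks, can be sketched as follows. This is a minimal illustration: the toy adjacency map, walk count, and function name are hypothetical stand-ins, not the authors' implementation, which would query the full ConceptNet graph.

```python
import random

# Hypothetical toy stand-in for ConceptNet: concept -> related concepts.
# The actual model would query the full ConceptNet knowledge graph.
GRAPH = {
    "river": ["water", "bank", "bridge"],
    "water": ["drink", "rain"],
    "bank": ["money", "river"],
    "bridge": ["road"],
    "drink": ["thirst"],
    "rain": ["cloud"],
    "money": ["wallet"],
    "road": ["car"],
}

def two_hop_walk(start, num_walks=20, seed=0):
    """Collect entities reachable within two hops of `start`
    using short random walks, as the abstract describes."""
    rng = random.Random(seed)
    related = set()
    for _ in range(num_walks):
        node = start
        for _ in range(2):  # each walk is at most two hops long
            neighbors = GRAPH.get(node, [])
            if not neighbors:
                break
            node = rng.choice(neighbors)
            related.add(node)
    related.discard(start)  # the answer entity itself is not "related"
    return sorted(related)
```

The sampled entity set would then be concatenated with the question-answer text before being fed to RoBERTa.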
