Generating knowledge aware explanation for natural language inference
Institution:1. School of Information Management, Central China Normal University, Wuhan, 430079, China;3. Center for Studies of Information Resources, Wuhan University, Wuhan, 430072, China;4. Institute of Medical Information, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 100020, China;5. School of Information Management, Wuhan University, Wuhan, 430072, China
Abstract: Natural language inference (NLI) is an increasingly important task in natural language processing, and explainable NLI generates natural language explanations (NLEs) in addition to label predictions, making NLI explainable and acceptable. However, NLEs generated by current models often violate commonsense or lack informativeness. In this paper, we propose a knowledge-enhanced explainable NLI framework (KxNLI) that leverages a Knowledge Graph (KG) to address these problems. Subgraphs of the KG are constructed from the concept set of the input sequence. The contextual embedding of the input and the graph embedding of the subgraphs are used to guide NLE generation through a copy mechanism. Furthermore, the generated NLEs are used to augment the original data. Experimental results show that KxNLI achieves state-of-the-art (SOTA) results on the SNLI dataset when the pretrained model is fine-tuned on the augmented data. Moreover, the proposed mechanisms of knowledge enhancement and rationale utilization perform well on a vanilla seq2seq model and transfer better to the MultiNLI dataset. To evaluate the generated NLEs comprehensively, we design two metrics that measure their quality from the perspectives of accuracy and informativeness, respectively. The results show that KxNLI provides high-quality NLEs while making accurate predictions.
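The abstract states that KG subgraphs are constructed from the concept set of the input sequence. The sketch below illustrates one plausible reading of that step on a toy ConceptNet-style triple list; the triple format, token-matching concept extraction, and one-hop expansion are illustrative assumptions, not the paper's actual implementation.

from typing import List, Set, Tuple

Triple = Tuple[str, str, str]  # (head_concept, relation, tail_concept)

# Toy knowledge graph; a real system would load ConceptNet or a domain KG.
KG: List[Triple] = [
    ("dog", "IsA", "animal"),
    ("animal", "CapableOf", "move"),
    ("dog", "CapableOf", "bark"),
    ("cat", "IsA", "animal"),
]

def concept_set(sentence: str) -> Set[str]:
    """Collect concepts by simple token matching against KG node names."""
    nodes = {h for h, _, _ in KG} | {t for _, _, t in KG}
    return {tok for tok in sentence.lower().split() if tok in nodes}

def build_subgraph(premise: str, hypothesis: str, hops: int = 1) -> List[Triple]:
    """Return KG triples reachable within `hops` of the premise/hypothesis concepts."""
    covered = concept_set(premise) | concept_set(hypothesis)
    subgraph: List[Triple] = []
    for _ in range(hops):
        new_nodes: Set[str] = set()
        for h, r, t in KG:
            if h in covered or t in covered:
                if (h, r, t) not in subgraph:
                    subgraph.append((h, r, t))
                new_nodes.update({h, t})
        covered |= new_nodes
    return subgraph

if __name__ == "__main__":
    # e.g. returns the triples touching "dog" and "animal"
    print(build_subgraph("A dog runs in the park", "An animal is outside"))

In the full framework, the graph embedding of such a subgraph would then be combined with the contextual embedding of the input to guide explanation generation via the copy mechanism described above.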
Keywords: Natural language inference; Knowledge graph; Explainability of model
This article is indexed in ScienceDirect and other databases.