

Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks

June 6, 2023
Authors: Kanishka Misra, Cicero Nogueira dos Santos, Siamak Shakeri
cs.AI

Abstract

Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve upon this limitation by relying on random walks over structured knowledge graphs. Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random walk paths that lead to the answer. Applying our methods on two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require 2-hop reasoning.
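The core training signal described above comes from sampling random walks over a knowledge graph and linearizing them into paths that end at the answer entity. A minimal sketch of that sampling step, using a hypothetical toy graph (entity and relation names here are illustrative, not from the paper's data):

```python
import random

# Toy knowledge graph as (head, relation, tail) triples -- illustrative only;
# the paper works over large structured knowledge graphs.
TRIPLES = [
    ("Alan Turing", "born_in", "London"),
    ("London", "capital_of", "United Kingdom"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

# Adjacency list: entity -> list of (relation, neighbor) edges.
GRAPH = {}
for h, r, t in TRIPLES:
    GRAPH.setdefault(h, []).append((r, t))

def random_walk(start, hops, rng=random):
    """Sample a walk of `hops` edges from `start`, returned as a flat
    sequence: entity, relation, entity, ... (stops early at a dead end)."""
    path = [start]
    node = start
    for _ in range(hops):
        edges = GRAPH.get(node)
        if not edges:
            break
        rel, node = rng.choice(edges)
        path += [rel, node]
    return path

# A 2-hop walk, linearized as text, could serve as the target sequence a
# soft-prompted LM learns to generate for a multi-hop question such as
# "What is the capital of the country where Alan Turing was born?" --
# the answer is the final entity on the path.
print(" -> ".join(random_walk("Alan Turing", hops=2)))
```

In the paper's setup, soft prompts (learned continuous prompt embeddings, with the LM's own weights frozen or lightly tuned) are trained to map multi-hop questions to such linearized paths, so that decoding the path chains the model's stored single-hop facts into the final answer.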