
Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks

June 6, 2023
作者: Kanishka Misra, Cicero Nogueira dos Santos, Siamak Shakeri
cs.AI

Abstract

Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve upon this limitation by relying on random walks over structured knowledge graphs. Specifically, we use soft prompts to guide LMs to chain together their encoded knowledge by learning to map multi-hop questions to random walk paths that lead to the answer. Applying our methods on two T5 LMs shows substantial improvements over standard tuning approaches in answering questions that require 2-hop reasoning.
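The core idea above — mapping a multi-hop question to a random walk path over a knowledge graph that ends at the answer — can be illustrated with a minimal sketch. The graph, entities, and `random_walk` helper below are hypothetical toy examples, not the paper's actual data or code:

```python
import random

# Toy knowledge graph as adjacency lists of (relation, entity) edges.
# Entities and relations here are illustrative, not from the paper.
kg = {
    "Paris": [("capital_of", "France")],
    "France": [("currency", "Euro"), ("continent", "Europe")],
}

def random_walk(kg, start, hops, seed=0):
    """Sample a path of up to `hops` edges starting at `start`,
    returning the alternating entity/relation sequence."""
    rng = random.Random(seed)
    path = [start]
    node = start
    for _ in range(hops):
        edges = kg.get(node)
        if not edges:
            break
        relation, node = rng.choice(edges)
        path.extend([relation, node])
    return path

# A 2-hop walk chains two stored facts, e.g. Paris -> France -> Euro;
# in the paper, soft prompts are tuned so the LM generates such a
# path (ending in the answer) instead of answering directly.
print(random_walk(kg, "Paris", hops=2))
```

In the paper's setup, sequences like this serve as training targets: the frozen LM, conditioned on a learned soft prompt, learns to verbalize the chain of facts linking the question entity to the answer rather than predicting the answer in a single step.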