Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models
September 17, 2024
Authors: Orion Weller, Benjamin Van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, Jack Hessel
cs.AI
Abstract
Instruction-tuned language models (LM) are able to respond to imperative commands, providing a more natural user interface compared to their base counterparts. In this work, we present Promptriever, the first retrieval model able to be prompted like an LM. To train Promptriever, we curate and release a new instance-level instruction training set from MS MARCO, spanning nearly 500k instances. Promptriever not only achieves strong performance on standard retrieval tasks, but also follows instructions. We observe: (1) large gains (reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR / +3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR), and (3) the ability to perform hyperparameter search via prompting to reliably improve retrieval performance (+1.4 average increase on BEIR). Promptriever demonstrates that retrieval models can be controlled with prompts on a per-query basis, setting the stage for future work aligning LM prompting techniques with information retrieval.
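To make the idea concrete, the sketch below shows what per-query instruction prompting of a bi-encoder retriever could look like: an instruction describing the relevance criteria is appended to the query, both are embedded, and passages are ranked by dot-product similarity. This is a minimal illustration, not the released Promptriever checkpoint or its official API; the model path, prompt wording, and the concatenation format are assumptions, and the sentence-transformers interface is used only as a familiar stand-in for a dense retriever.

```python
# Minimal sketch: prompting a dense retriever with a per-query instruction.
# The model path is a placeholder; the "query + instruction" concatenation
# format is an assumption made for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("path/to/instruction-trained-retriever")  # placeholder

passages = [
    "The 2022 FIFA World Cup was held in Qatar.",
    "A recipe for sourdough bread starts with an active starter.",
    "Qatar's stadiums were built with advanced cooling systems.",
]

query = "world cup host country"
# The instruction states the relevance criteria for this particular query,
# so behavior can be adjusted per query without retraining or re-indexing.
instruction = (
    "A relevant document explicitly names the host country; "
    "documents about stadium construction are not relevant."
)

query_emb = model.encode(f"{query} {instruction}", convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by dot-product similarity to the instructed query.
scores = util.dot_score(query_emb, passage_embs)[0]
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```

The "hyperparameter search via prompting" result can be read the same way: one could evaluate several candidate instruction suffixes on a held-out set and keep the prompt that yields the best retrieval metric, treating the prompt itself as a tunable knob.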