LimRank: Less is More for Reasoning-Intensive Information Reranking
October 27, 2025
Authors: Tingyu Song, Yilun Zhao, Siyue Zhang, Chen Zhao, Arman Cohan
cs.AI
Abstract
Existing approaches typically rely on large-scale fine-tuning to adapt LLMs for information reranking tasks, which is computationally expensive. In this work, we demonstrate that modern LLMs can be effectively adapted using only minimal, high-quality supervision. To enable this, we design LIMRANK-SYNTHESIZER, a reusable, open-source pipeline for generating diverse, challenging, and realistic reranking examples. Using this synthetic data, we fine-tune our reranker model, LIMRANK. We evaluate LIMRANK on two challenging benchmarks: BRIGHT for reasoning-intensive retrieval and FollowIR for instruction-following retrieval. Our experiments show that LIMRANK achieves competitive performance while being trained on less than 5% of the data typically used in prior work. Ablation studies further confirm the effectiveness of LIMRANK-SYNTHESIZER and the strong generalization of LIMRANK across downstream tasks, including scientific literature search and retrieval-augmented generation for knowledge-intensive problem solving.