Influence Guided Sampling for Domain Adaptation of Text Retrievers
January 29, 2026
Authors: Meet Doshi, Vishwajeet Kumar, Yulong Li, Jaydeep Sen
cs.AI
Abstract
General-purpose open-domain dense retrieval systems are usually trained on a large, eclectic mix of corpora and search tasks. How should these diverse corpora and tasks be sampled during training? Conventional approaches sample them uniformly, in proportion to their instance counts, or rely on expert human supervision. It is well known that the training data sampling strategy can greatly affect model performance, yet how to find an optimal strategy has not been adequately studied in the context of embedding models. We propose Inf-DDS, a novel reinforcement-learning-driven sampling framework that adaptively reweights training datasets using influence-based reward signals while remaining lightweight in GPU consumption. Our technique iteratively refines the sampling policy, prioritizing datasets that maximize model performance on a target development set. We evaluate the efficacy of our sampling strategy on a wide range of text retrieval tasks, demonstrating strong improvements in retrieval performance and better adaptation than existing gradient-based sampling methods, while requiring 1.5x to 4x less GPU compute. Our sampling strategy achieves an absolute NDCG@10 improvement of 5.03 when training a multilingual bge-m3 model, and of 0.94 when training all-MiniLM-L6-v2, even when starting from expert-assigned weights over a large pool of training datasets.
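The abstract gives no algorithmic details, so the following is only a minimal illustrative sketch of the general idea, assuming an EXP3-style bandit formulation of influence-guided dataset reweighting: each training dataset is an arm, its sampling probability is a softmax over learned log-weights, and the reward is a stand-in for an influence-based signal of how much a batch from that dataset improves the target development set. Every identifier here (`true_usefulness`, `sampling_probs`, the noisy reward proxy, the hyperparameters) is hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' Inf-DDS implementation): dataset
# reweighting as an EXP3-style bandit over a pool of training datasets.

rng = np.random.default_rng(0)

n_datasets = 5
# Hidden per-dataset usefulness; stands in for the true (unknown) influence
# of each dataset's batches on target dev-set performance.
true_usefulness = np.array([0.1, 0.8, 0.3, 0.5, 0.05])

log_weights = np.zeros(n_datasets)  # policy parameters; softmax gives sampling probs
lr = 0.5                            # step size of the exponentiated-gradient update
n_rounds = 200

def sampling_probs(log_w):
    """Numerically stable softmax over log-weights -> sampling distribution."""
    z = np.exp(log_w - log_w.max())
    return z / z.sum()

for t in range(n_rounds):
    probs = sampling_probs(log_weights)
    k = rng.choice(n_datasets, p=probs)  # pick a dataset to draw the next batch from

    # In the real method the reward would be an influence-based signal measured
    # on the target dev set; here it is a noisy synthetic proxy.
    reward = true_usefulness[k] + 0.1 * rng.standard_normal()

    # Importance-weighted update: upweight datasets whose batches help the dev set.
    log_weights[k] += lr * reward / probs[k] / n_rounds

print("final sampling distribution:", np.round(sampling_probs(log_weights), 3))
```

Run over a few hundred rounds, the sampling distribution concentrates on the datasets with the highest simulated usefulness, which is the qualitative behavior the abstract describes: the policy is refined iteratively to prioritize datasets that most improve performance on the target development set.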