Improving Semantic Proximity in Information Retrieval through Cross-Lingual Alignment
April 7, 2026
Authors: Seongtae Hong, Youngjoon Jang, Jungseob Lee, Hyeonseok Moon, Heuiseok Lim
cs.AI
Abstract
With the increasing accessibility and use of multilingual documents, Cross-Lingual Information Retrieval (CLIR) has emerged as an important research area. Conventionally, CLIR tasks have been conducted under settings where the language of the documents differs from that of the queries, and the documents are typically composed in a single coherent language. In this paper, we highlight that such settings may not adequately evaluate cross-lingual alignment capability. Specifically, we observe that, in a document pool where English documents coexist with documents in another language, most multilingual retrievers tend to prioritize unrelated English documents over relevant documents written in the same language as the query. To rigorously analyze and quantify this phenomenon, we introduce a set of scenarios and metrics designed to evaluate the cross-lingual alignment performance of multilingual retrieval models. Furthermore, to improve cross-lingual performance under these challenging conditions, we propose a novel training strategy aimed at enhancing cross-lingual alignment. Using only a small dataset of 2.8k samples, our method significantly improves cross-lingual retrieval performance while simultaneously mitigating the English-inclination problem. Extensive analyses demonstrate that the proposed method substantially enhances the cross-lingual alignment capabilities of most multilingual embedding models.
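The English-inclination phenomenon described above can be quantified in several ways; the abstract does not give the paper's exact metric definitions, so the following is a minimal illustrative sketch of one plausible measure: the fraction of queries for which some irrelevant English document is ranked above the relevant document written in the query's language. The function name, data layout, and toy data are all assumptions for illustration, not the paper's actual evaluation code.

```python
def english_inclination_rate(rankings):
    """Fraction of queries where an irrelevant English document outranks
    the relevant document in the query's own (non-English) language.

    `rankings` maps each query id to its retrieved documents in ranked
    order (best first); each document is a (doc_id, language, is_relevant)
    triple. This is an illustrative metric, not the paper's definition.
    """
    inclined = 0
    for ranked_docs in rankings.values():
        # Rank of the first relevant non-English document, if any.
        rel_rank = next(
            (i for i, (_, lang, rel) in enumerate(ranked_docs)
             if rel and lang != "en"),
            None,
        )
        if rel_rank is None:
            continue  # no relevant same-language document retrieved
        # Does any irrelevant English document appear above it?
        if any(lang == "en" and not rel
               for _, lang, rel in ranked_docs[:rel_rank]):
            inclined += 1
    return inclined / len(rankings)

# Toy example: two Korean queries over a mixed English/Korean pool.
rankings = {
    "q1": [("d_en_3", "en", False), ("d_ko_1", "ko", True)],  # English doc wins
    "q2": [("d_ko_2", "ko", True), ("d_en_7", "en", False)],  # correct ordering
}
print(english_inclination_rate(rankings))  # 0.5
```

A lower rate indicates better cross-lingual alignment; a well-aligned retriever should place the relevant same-language document above unrelated English ones regardless of the pool's language mix.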