Improving Semantic Proximity in Information Retrieval through Cross-Lingual Alignment
April 7, 2026
作者: Seongtae Hong, Youngjoon Jang, Jungseob Lee, Hyeonseok Moon, Heuiseok Lim
cs.AI
Abstract
With the increasing accessibility and use of multilingual documents, Cross-Lingual Information Retrieval (CLIR) has emerged as an important research area. Conventionally, CLIR tasks are conducted in settings where the language of the documents differs from that of the queries, and each document is written in a single, coherent language. In this paper, we highlight that such a setting may not adequately evaluate cross-lingual alignment capability. Specifically, we observe that, in a document pool where English documents coexist with documents in another language, most multilingual retrievers tend to rank unrelated English documents above the relevant document written in the same language as the query. To rigorously analyze and quantify this phenomenon, we introduce several scenarios and metrics designed to evaluate the cross-lingual alignment performance of multilingual retrieval models. Furthermore, to improve cross-lingual performance under these challenging conditions, we propose a novel training strategy aimed at enhancing cross-lingual alignment. Using only a small dataset of 2.8k samples, our method significantly improves cross-lingual retrieval performance while simultaneously mitigating the English-inclination problem. Extensive analyses demonstrate that the proposed method substantially enhances the cross-lingual alignment capabilities of most multilingual embedding models.
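The English-inclination phenomenon described above can be sketched as a simple measurement: in a pool mixing English and same-language documents, count how often a retriever ranks an unrelated English document above the query's relevant same-language document. The snippet below is a minimal, hypothetical sketch of such a metric (the function name `english_inclination_rate`, the toy vectors, and the labels are illustrative assumptions, not the paper's actual metric or data); a real study would use embeddings from a multilingual encoder.

```python
from math import sqrt

def cosine(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def english_inclination_rate(queries, pool):
    """Fraction of queries whose top-ranked document is an unrelated
    English document rather than the labelled relevant one.

    queries: list of (query_vector, relevant_doc_id)
    pool:    dict doc_id -> (doc_vector, language_code)
    """
    inclined = 0
    for q_vec, rel_id in queries:
        ranked = sorted(pool, key=lambda d: cosine(q_vec, pool[d][0]),
                        reverse=True)
        top = ranked[0]
        # A query counts as "English-inclined" when the top hit is an
        # English document other than the relevant one.
        if top != rel_id and pool[top][1] == "en":
            inclined += 1
    return inclined / len(queries)

# Toy example: the query vector sits slightly closer to an unrelated
# English document than to its relevant Korean document.
pool = {
    "en_doc": ([0.9, 0.1], "en"),
    "ko_doc": ([0.6, 0.8], "ko"),
}
queries = [([1.0, 0.3], "ko_doc")]
rate = english_inclination_rate(queries, pool)
print(rate)  # 1.0 for this single inclined query
```

A rate near 0 indicates the retriever respects query-language relevance; a rate near 1 reflects the strong preference for English documents that the paper reports for most multilingual retrievers.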