mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval
July 29, 2024
Authors: Xin Zhang, Yanzhao Zhang, Dingkun Long, Wen Xie, Ziqi Dai, Jialong Tang, Huan Lin, Baosong Yang, Pengjun Xie, Fei Huang, Meishan Zhang, Wenjie Li, Min Zhang
cs.AI
Abstract
We present systematic efforts in building a long-context multilingual text representation model (TRM) and a reranker from scratch for text retrieval. We first introduce a text encoder (base size) enhanced with RoPE and unpadding, pre-trained with a native 8192-token context (longer than the 512-token limit of previous multilingual encoders). Then we construct a hybrid TRM and a cross-encoder reranker by contrastive learning. Evaluations show that our text encoder outperforms the same-sized previous state-of-the-art XLM-R. Meanwhile, our TRM and reranker match the performance of the large-sized state-of-the-art BGE-M3 models and achieve better results on long-context retrieval benchmarks. Further analysis demonstrates that our proposed models exhibit higher efficiency during both training and inference. We believe their efficiency and effectiveness could benefit various research and industrial applications.
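For readers unfamiliar with RoPE, the rotary position encoding the abstract refers to, the sketch below shows how it rotates embedding channels by position-dependent angles so that relative offsets are encoded in query-key dot products. The function name, shapes, and the half-split channel pairing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of rotary position embeddings (RoPE); illustrative only.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (seq_len, dim), dim even.

    Channels are split into two halves and rotated pairwise by an
    angle that grows with token position, so the dot product between
    rotated queries and keys depends on their relative offset.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0
    half = dim // 2
    # Per-pair rotation frequencies: theta_i = base^(-2i/dim).
    freqs = base ** (-torch.arange(half, dtype=torch.float32) * 2 / dim)
    # Rotation angle for each (position, pair): pos * theta_i.
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # 2-D rotation applied to each channel pair.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```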
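The TRM and reranker are built by contrastive learning; a standard form of that objective for dense retrieval is the in-batch InfoNCE loss sketched below. The loss form, temperature value, and reliance on in-batch negatives are assumptions for illustration, and the paper's exact training recipe may differ.

```python
# Minimal sketch of an in-batch contrastive (InfoNCE) objective for
# dense retrieval; a common choice, not necessarily the paper's exact loss.
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, d: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss.

    q: (batch, dim) query embeddings; d: (batch, dim) embeddings of
    each query's positive document. Every other document in the batch
    serves as an in-batch negative.
    """
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    # Similarity of every query to every document in the batch.
    logits = q @ d.T / temperature
    # The matching document for query i sits on the diagonal.
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```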