
Improving Text Embeddings with Large Language Models

December 31, 2023
Authors: Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei
cs.AI

Abstract

In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pre-training with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across nearly 100 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks.
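The training recipe described above, pooling a single vector from a decoder-only LLM and optimizing it with a standard in-batch contrastive (InfoNCE) objective, can be sketched as follows. This is a minimal illustration only: the last-token pooling strategy, the temperature value, and all function names are assumptions for the sake of the example, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F


def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Pool the hidden state of the last non-padding token of each sequence.

    hidden_states: (batch, seq_len, dim) from a decoder-only LM.
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for (right-side) padding.
    """
    last_idx = attention_mask.sum(dim=1) - 1                       # index of last real token
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    return hidden_states[batch_idx, last_idx]                      # (batch, dim)


def info_nce_loss(query_emb: torch.Tensor,
                  passage_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """Standard in-batch contrastive (InfoNCE) loss.

    Row i of passage_emb is the positive for row i of query_emb;
    all other rows in the batch serve as negatives.
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature                                 # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)              # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for the LLM's hidden states.
    batch, seq_len, dim = 4, 16, 32
    mask = torch.ones(batch, seq_len, dtype=torch.long)
    q = last_token_pool(torch.randn(batch, seq_len, dim), mask)
    p = last_token_pool(torch.randn(batch, seq_len, dim), mask)
    print(info_nce_loss(q, p).item())
```

In practice the query and passage embeddings would come from the same fine-tuned decoder-only model applied to synthetic (instruction, query, passage) triples, with the in-batch negatives making the loss computable without mined hard negatives; the snippet above only fixes the loss-side mechanics.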