NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models
May 27, 2024
Authors: Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping
cs.AI
Abstract
Decoder-only large language model (LLM)-based embedding models are beginning
to outperform BERT or T5-based embedding models in general-purpose text
embedding tasks, including dense vector-based retrieval. In this work, we
introduce the NV-Embed model with a variety of architectural designs and
training procedures to significantly enhance the performance of LLMs as
versatile embedding models, while maintaining their simplicity and
reproducibility. For model architecture, we propose a latent attention layer to
obtain pooled embeddings, which consistently improves retrieval and downstream
task accuracy compared to mean pooling or using the last <EOS> token embedding
from LLMs. To enhance representation learning, we remove the causal attention
mask of LLMs during contrastive training. For model training, we introduce a
two-stage contrastive instruction-tuning method. It first applies contrastive
training with instructions on retrieval datasets, utilizing in-batch negatives
and curated hard negative examples. In the second stage, it blends various non-retrieval
datasets into instruction tuning, which not only enhances non-retrieval task
accuracy but also improves retrieval performance. Combining these techniques,
our NV-Embed model, using only publicly available data, has achieved a
record-high score of 69.32, ranking No. 1 on the Massive Text Embedding
Benchmark (MTEB) (as of May 24, 2024), which spans 56 tasks encompassing retrieval,
reranking, classification, clustering, and semantic textual similarity tasks.
Notably, our model also attains the highest score of 59.36 on 15 retrieval
tasks in the MTEB benchmark (also known as BEIR). We will open-source the model
at: https://huggingface.co/nvidia/NV-Embed-v1.
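
To make the pooling design concrete, below is a minimal PyTorch sketch of a latent attention pooling layer as described in the abstract: the LLM's last-layer hidden states act as queries against a small trainable latent array serving as keys and values, the attended output passes through an MLP, and the sequence is mean-pooled into a single embedding. The class name, latent count, and MLP width are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAttentionPooling(nn.Module):
    """Sketch of latent-attention pooling: token hidden states attend to a
    trainable latent array (keys == values here), then an MLP and mean
    pooling produce one embedding per input. Sizes are illustrative."""

    def __init__(self, hidden_dim: int, num_latents: int = 512):
        super().__init__()
        # Trainable latent array shared across all inputs.
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_dim))
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 4 * hidden_dim),
            nn.GELU(),
            nn.Linear(4 * hidden_dim, hidden_dim),
        )

    def forward(self, hidden_states: torch.Tensor,
                attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the LLM's last layer.
        # Cross-attention: token states are queries, latents are keys/values.
        scores = hidden_states @ self.latents.T / hidden_states.shape[-1] ** 0.5
        attended = F.softmax(scores, dim=-1) @ self.latents  # (batch, seq_len, hidden_dim)
        out = self.mlp(attended)
        # Mean-pool over real (non-padding) tokens only.
        mask = attention_mask.unsqueeze(-1).float()
        return (out * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)
```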
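Similarly, the stage-1 contrastive objective with in-batch negatives and curated hard negatives can be sketched as an InfoNCE-style loss; the temperature value and the one-hard-negative-per-query layout below are assumptions for illustration. (The abstract's removal of the causal attention mask happens inside the LLM encoder during this training and is not shown here.)

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb: torch.Tensor,
                     pos_emb: torch.Tensor,
                     hard_neg_emb: torch.Tensor,
                     temperature: float = 0.02) -> torch.Tensor:
    """InfoNCE-style loss: each query's positive passage must outscore all
    in-batch negatives (other queries' positives) plus the curated hard
    negatives appended to the candidate pool."""
    q = F.normalize(q_emb, dim=-1)                                   # (B, D) queries
    cands = F.normalize(torch.cat([pos_emb, hard_neg_emb]), dim=-1)  # (2B, D) candidates
    logits = q @ cands.T / temperature                               # (B, 2B) similarities
    labels = torch.arange(q.size(0), device=q.device)                # positive for query i is candidate i
    return F.cross_entropy(logits, labels)
```

In the second stage, batches drawn from non-retrieval datasets (classification, clustering, STS) would be blended into the same instruction-tuned objective; the exact batching and negative-sampling details are in the full paper.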