Generative Representational Instruction Tuning
February 15, 2024
Authors: Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, Douwe Kiela
cs.AI
Abstract
All text-based language problems can be reduced to either generation or
embedding. Current models only perform well at one or the other. We introduce
generative representational instruction tuning (GRIT) whereby a large language
model is trained to handle both generative and embedding tasks by
distinguishing between them through instructions. Compared to other open
models, our resulting GritLM 7B sets a new state of the art on the Massive Text
Embedding Benchmark (MTEB) and outperforms all models up to its size on a range
of generative tasks. By scaling up further, GritLM 8x7B outperforms all open
generative language models that we tried while still being among the best
embedding models. Notably, we find that GRIT matches training on only
generative or embedding data, thus we can unify both at no performance loss.
Among other benefits, the unification via GRIT speeds up Retrieval-Augmented
Generation (RAG) by > 60% for long documents, by no longer requiring separate
retrieval and generation models. Models, code, etc. are freely available at
https://github.com/ContextualAI/gritlm.
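
The abstract's core claim is that one set of weights can serve both as an embedder and as a generator, with instructions distinguishing the two modes. Below is a minimal sketch of that dual use, assuming the model is loaded through Hugging Face transformers; the checkpoint name, prompt handling, and mean pooling shown here are illustrative assumptions rather than the exact recipe, which the repository linked above documents.

```python
# Minimal sketch, not the official gritlm API: one GRIT-style checkpoint used
# for both embedding (mean pooling over hidden states) and generation.
# The checkpoint name, prompt format, and instruction handling are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "GritLM/GritLM-7B"  # assumed Hugging Face model id

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # Mistral-style tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

def embed(texts, instruction=""):
    """Embedding mode: mean-pool the last-layer hidden states of the input."""
    prompts = [f"{instruction}\n{t}" if instruction else t for t in texts]
    batch = tok(prompts, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_hidden_states=True)
    hidden = out.hidden_states[-1]                # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # (batch, seq, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def generate(prompt, max_new_tokens=128):
    """Generation mode: ordinary causal decoding from the same weights."""
    batch = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(**batch, max_new_tokens=max_new_tokens)
    return tok.decode(ids[0], skip_special_tokens=True)

# Toy retrieval-then-generate loop with a single model, the unification that
# removes the need for separate retrieval and generation models in RAG.
docs = ["GRIT unifies embedding and generation.", "MTEB benchmarks text embeddings."]
query = "What does GRIT unify?"
scores = F.cosine_similarity(embed([query]), embed(docs))
best = docs[int(scores.argmax())]
print(generate(f"Context: {best}\nQuestion: {query}\nAnswer:"))
```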