

GRACE: Generative Representation Learning via Contrastive Policy Optimization

October 6, 2025
Authors: Jiashuo Sun, Shixuan Liu, Zhaochen Su, Xianrui Zhong, Pengcheng Jiang, Bowen Jin, Peiran Li, Weijia Shi, Jiawei Han
cs.AI

Abstract

Prevailing methods for training Large Language Models (LLMs) as text encoders rely on contrastive losses that treat the model as a black-box function, discarding its generative and reasoning capabilities in favor of static embeddings. We introduce GRACE (Generative Representation Learning via Contrastive Policy Optimization), a novel framework that reimagines contrastive signals not as losses to be minimized, but as rewards that guide a generative policy. In GRACE, the LLM acts as a policy that produces explicit, human-interpretable rationales, structured natural-language explanations of its semantic understanding. These rationales are then encoded into high-quality embeddings via mean pooling. Using policy gradient optimization, we train the model with a multi-component reward function that maximizes similarity between query-positive pairs and minimizes similarity with negatives. This transforms the LLM from an opaque encoder into an interpretable agent whose reasoning process is transparent and inspectable. On the MTEB benchmark, GRACE yields broad cross-category gains: averaged over four backbones, the supervised setting improves the overall score by 11.5% over base models, and the unsupervised variant improves it by 6.9%, while preserving general capabilities. This work treats contrastive objectives as rewards over rationales, unifying representation learning with generation to produce stronger embeddings and transparent rationales. The model, data, and code are available at https://github.com/GasolSun36/GRACE.
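To make the core mechanism concrete, here is a minimal sketch (not the authors' code; see the repository for the actual implementation) of the pipeline the abstract describes: the LLM's rationale hidden states are mean-pooled into an embedding, and a contrastive signal over that embedding is used as a scalar reward rather than a loss. The function names, the single-component reward, and the hardest-negative choice are illustrative assumptions; the paper's reward has multiple components.

```python
# Sketch of GRACE's reward-over-rationales idea (illustrative, assumed API).
import torch
import torch.nn.functional as F

def mean_pool(token_states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool a rationale's token hidden states into one embedding."""
    mask = mask.unsqueeze(-1).float()                  # (seq_len, 1)
    return (token_states * mask).sum(0) / mask.sum(0).clamp(min=1.0)

def contrastive_reward(q_emb, pos_emb, neg_embs) -> torch.Tensor:
    """Reward = similarity to the positive minus the hardest-negative similarity."""
    pos_sim = F.cosine_similarity(q_emb, pos_emb, dim=0)
    neg_sim = torch.stack(
        [F.cosine_similarity(q_emb, n, dim=0) for n in neg_embs]
    ).max()                                            # hardest negative
    return pos_sim - neg_sim                           # scalar reward for the policy

# Toy usage: in GRACE these embeddings come from the hidden states of the
# rationale the LLM generated for each text; random tensors stand in here.
hidden = torch.randn(12, 768)                          # 12 rationale tokens
mask = torch.ones(12)
q = mean_pool(hidden, mask)
pos = torch.randn(768)
negs = [torch.randn(768) for _ in range(4)]
r = contrastive_reward(q, pos, negs)                   # fed to a policy-gradient update
print(f"reward = {r.item():.3f}")
```

Because the contrastive signal enters as a reward on generated text rather than a differentiable loss on encoder outputs, the model is updated with policy gradients, which is what lets the rationale itself, not just the embedding, be optimized.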