

Rubric-based On-policy Distillation

May 8, 2026
作者: Junfeng Fang, Zhepei Hong, Mao Zheng, Mingyang Song, Gengsheng Li, Houcheng Jiang, Dan Zhang, Haiyun Guo, Xiang Wang, Tat-Seng Chua
cs.AI

Abstract

On-policy distillation (OPD) is a powerful paradigm for model alignment, yet its reliance on teacher logits restricts it to white-box scenarios. We contend that structured semantic rubrics can serve as a scalable alternative to teacher logits, enabling OPD with only teacher-generated responses. To validate this claim, we introduce ROPD, a simple yet foundational framework for rubric-based OPD. Specifically, ROPD induces prompt-specific rubrics from teacher-student contrasts, and then uses these rubrics to score student rollouts for on-policy optimization. Empirically, ROPD outperforms advanced logit-based OPD methods across most scenarios, achieving up to a 10x gain in sample efficiency. These results position rubric-based OPD as a flexible, black-box-compatible alternative to the prevailing logit-based OPD, offering a simple yet strong baseline for scalable distillation across proprietary and open-source LLMs. Code is available at https://github.com/Peregrine123/ROPD_official.
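The abstract only sketches the pipeline (induce a prompt-specific rubric from teacher-student contrasts, then score student rollouts against it for on-policy updates). Below is a minimal, hypothetical Python sketch of what such a loop could look like; the function names (`induce_rubric`, `score_with_rubric`, `policy_update`), the toy keyword judge, and the placeholder rollout sampler are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a rubric-based on-policy distillation loop.
# Every component here is a stand-in: a real system would call a teacher LLM
# to induce rubrics, a judge model to score, and an RL-style optimizer to update.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Rubric:
    """Prompt-specific criteria induced by contrasting teacher and student responses."""
    criteria: List[str]

def induce_rubric(prompt: str, teacher_response: str, student_response: str) -> Rubric:
    # Placeholder: a teacher model would compare the two responses and extract
    # criteria the student misses; here we return two fixed toy criteria.
    return Rubric(criteria=["mentions the key fact", "states the final answer"])

def score_with_rubric(response: str, rubric: Rubric) -> float:
    # Placeholder judge: fraction of criteria whose last keyword appears in the response.
    hits = sum(1 for c in rubric.criteria if c.split()[-1] in response)
    return hits / max(len(rubric.criteria), 1)

def student_rollout(prompt: str) -> str:
    # Placeholder for sampling a response from the current student policy.
    return random.choice(["the answer is 42", "I am not sure", "fact: the answer is 42"])

def policy_update(rollouts: List[str], rewards: List[float]) -> None:
    # Placeholder for an on-policy update (e.g., REINFORCE/GRPO-style) that
    # raises the likelihood of higher-scoring rollouts under the student policy.
    pass

prompts = ["What is the answer?"]
teacher_responses = {"What is the answer?": "The answer is 42 because ..."}

for prompt in prompts:
    rubric = induce_rubric(prompt, teacher_responses[prompt], student_rollout(prompt))
    rollouts = [student_rollout(prompt) for _ in range(4)]
    rewards = [score_with_rubric(r, rubric) for r in rollouts]
    policy_update(rollouts, rewards)
```

Note that, unlike logit-based OPD, nothing in this loop requires access to the teacher's internals: only its generated responses are used, which is what makes the approach black-box compatible.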