
Rubric-based On-policy Distillation

May 8, 2026
作者: Junfeng Fang, Zhepei Hong, Mao Zheng, Mingyang Song, Gengsheng Li, Houcheng Jiang, Dan Zhang, Haiyun Guo, Xiang Wang, Tat-Seng Chua
cs.AI

Abstract

On-policy distillation (OPD) is a powerful paradigm for model alignment, yet its reliance on teacher logits restricts it to white-box scenarios. We contend that structured semantic rubrics can serve as a scalable alternative to teacher logits, enabling OPD using only teacher-generated responses. To demonstrate this, we introduce ROPD, a simple yet foundational framework for rubric-based OPD. Specifically, ROPD induces prompt-specific rubrics from teacher-student contrasts, and then uses these rubrics to score student rollouts for on-policy optimization. Empirically, ROPD outperforms advanced logit-based OPD methods across most scenarios, achieving up to a 10x gain in sample efficiency. These results position rubric-based OPD as a flexible, black-box-compatible alternative to the prevailing logit-based OPD, offering a simple yet strong baseline for scalable distillation across proprietary and open-source LLMs. Code is available at https://github.com/Peregrine123/ROPD_official.
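The abstract describes a two-stage loop: induce a prompt-specific rubric from a teacher-student contrast, then score the student's on-policy rollouts against that rubric to drive optimization. The following is a minimal, hypothetical sketch of that loop; the function names, the word-overlap rubric heuristic, and the mean-centred advantage computation are all illustrative assumptions, not the authors' implementation (in practice the rubric induction and scoring would be done by an LLM judge).

```python
def induce_rubric(teacher_resp: str, student_resp: str) -> list[str]:
    """Induce prompt-specific criteria from the teacher-student contrast.
    Toy stand-in: each content word the teacher uses but the student
    misses becomes one rubric criterion."""
    teacher_words = set(teacher_resp.lower().split())
    student_words = set(student_resp.lower().split())
    return sorted(teacher_words - student_words)


def score_rollout(rollout: str, rubric: list[str]) -> float:
    """Reward a student rollout by the fraction of rubric criteria it satisfies."""
    if not rubric:
        return 1.0
    words = set(rollout.lower().split())
    return sum(criterion in words for criterion in rubric) / len(rubric)


def ropd_step(teacher_resp: str, student_rollouts: list[str]):
    """One on-policy step: induce a rubric, score each rollout, and return
    mean-centred advantages suitable for a policy-gradient update."""
    # Use one student response as the contrast reference (an assumption).
    rubric = induce_rubric(teacher_resp, student_rollouts[0])
    rewards = [score_rollout(r, rubric) for r in student_rollouts]
    mean_r = sum(rewards) / len(rewards)
    advantages = [r - mean_r for r in rewards]
    return rubric, rewards, advantages
```

Because the reward comes from the rubric rather than from teacher logits, this loop needs only black-box access to the teacher's text outputs, which is the key property the abstract claims for rubric-based OPD.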