DPWriter: Reinforcement Learning with Diverse Planning Branching for Creative Writing
January 14, 2026
Authors: Qian Cao, Yahui Liu, Wei Bi, Yi Zhao, Ruihua Song, Xiting Wang, Ruiming Tang, Guorui Zhou, Han Li
cs.AI
Abstract
Reinforcement learning (RL)-based enhancement of large language models (LLMs) often leads to reduced output diversity, undermining their utility in open-ended tasks like creative writing. Current methods lack explicit mechanisms for guiding diverse exploration and instead prioritize optimization efficiency and performance over diversity. This paper proposes an RL framework structured around a semi-structured long Chain-of-Thought (CoT), in which the generation process is decomposed into explicitly planned intermediate steps. We introduce a Diverse Planning Branching method that strategically introduces divergence at the planning phase based on diversity variation, alongside a group-aware diversity reward to encourage distinct trajectories. Experimental results on creative writing benchmarks demonstrate that our approach significantly improves output diversity without compromising generation quality, consistently outperforming existing baselines.
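The abstract's "group-aware diversity reward" can be illustrated with a minimal sketch: each sampled trajectory in a group is rewarded not only for quality but for dissimilarity to its group-mates. The dissimilarity metric (token-set Jaccard), the mixing weight, and the function names below are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch of a group-aware diversity reward: blend each
# sample's quality score with its mean dissimilarity to the other
# samples drawn for the same prompt. The Jaccard metric and the 0.3
# weight are assumptions for illustration only.

def jaccard_dissimilarity(a: str, b: str) -> float:
    """1 - Jaccard similarity over whitespace-token sets."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

def group_aware_rewards(outputs, quality_scores, diversity_weight=0.3):
    """Per-sample reward = (1-w) * quality + w * mean dissimilarity
    to the other outputs in the group."""
    n = len(outputs)
    rewards = []
    for i in range(n):
        others = [jaccard_dissimilarity(outputs[i], outputs[j])
                  for j in range(n) if j != i]
        diversity = sum(others) / len(others) if others else 0.0
        rewards.append((1 - diversity_weight) * quality_scores[i]
                       + diversity_weight * diversity)
    return rewards
```

Under this scheme, two identical trajectories share the same (lower) reward, while a distinct trajectory earns a diversity bonus even at equal quality, which is the incentive the abstract describes.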