DPWriter: Reinforcement Learning with Diverse Planning Branching for Creative Writing
January 14, 2026
Authors: Qian Cao, Yahui Liu, Wei Bi, Yi Zhao, Ruihua Song, Xiting Wang, Ruiming Tang, Guorui Zhou, Han Li
cs.AI
Abstract
Reinforcement learning (RL)-based enhancement of large language models (LLMs) often leads to reduced output diversity, undermining their utility in open-ended tasks like creative writing. Current methods lack explicit mechanisms for guiding diverse exploration and instead prioritize optimization efficiency and performance over diversity. This paper proposes an RL framework structured around a semi-structured long Chain-of-Thought (CoT), in which the generation process is decomposed into explicitly planned intermediate steps. We introduce a Diverse Planning Branching method that strategically introduces divergence at the planning phase based on diversity variation, alongside a group-aware diversity reward to encourage distinct trajectories. Experimental results on creative writing benchmarks demonstrate that our approach significantly improves output diversity without compromising generation quality, consistently outperforming existing baselines.
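The group-aware diversity reward described above can be sketched minimally: each trajectory sampled in a rollout group receives a bonus proportional to its average dissimilarity from the other trajectories in the same group, so distinct plans are reinforced over near-duplicates. The abstract does not specify the similarity measure; the token-bigram Jaccard distance and the function names below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a group-aware diversity reward: score each
# trajectory by its mean pairwise dissimilarity within the rollout group.
# The bigram-Jaccard similarity measure is an assumption for illustration.

def bigrams(text):
    """Token bigrams of a text, as a set."""
    toks = text.split()
    return set(zip(toks, toks[1:]))

def jaccard_distance(a, b):
    """1 - Jaccard similarity between two bigram sets (0 = identical)."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 1.0)

def group_diversity_rewards(trajectories):
    """One diversity bonus per trajectory; higher = more distinct in group."""
    grams = [bigrams(t) for t in trajectories]
    n = len(grams)
    rewards = []
    for i in range(n):
        others = [jaccard_distance(grams[i], grams[j])
                  for j in range(n) if j != i]
        rewards.append(sum(others) / len(others) if others else 0.0)
    return rewards

group = [
    "the knight rode into the storm",
    "the knight rode into the night",
    "a dragon slept beneath the glass city",
]
scores = group_diversity_rewards(group)
# The third trajectory shares no bigrams with the others, so it scores highest.
assert scores[2] == max(scores)
```

In an RL setup such a bonus would typically be combined with the task (quality) reward, e.g. as a weighted sum, so that trajectories are rewarded both for writing quality and for diverging from their group peers.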