SuperWriter: Reflection-Driven Long-Form Generation with Large Language Models
June 4, 2025
Authors: Yuhao Wu, Yushi Bai, Zhiqiang Hu, Juanzi Li, Roy Ka-Wei Lee
cs.AI
Abstract
Long-form text generation remains a significant challenge for large language
models (LLMs), particularly in maintaining coherence, ensuring logical
consistency, and preserving text quality as sequence length increases. To
address these limitations, we propose SuperWriter-Agent, an agent-based
framework designed to enhance the quality and consistency of long-form text
generation. SuperWriter-Agent introduces explicit structured thinking through
planning and refinement stages into the generation pipeline, guiding the model
to follow a more deliberate and cognitively grounded process akin to that of a
professional writer. Based on this framework, we construct a supervised
fine-tuning dataset to train a 7B SuperWriter-LM. We further develop a
hierarchical Direct Preference Optimization (DPO) procedure that uses Monte
Carlo Tree Search (MCTS) to propagate final quality assessments and optimize
each generation step accordingly. Empirical results across diverse benchmarks
demonstrate that SuperWriter-LM achieves state-of-the-art performance,
surpassing even larger-scale baseline models in both automatic evaluation and
human evaluation. Furthermore, comprehensive ablation studies demonstrate the
effectiveness of hierarchical DPO and underscore the value of incorporating
structured thinking steps to improve the quality of long-form text generation.
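To make the hierarchical DPO idea concrete, below is a minimal Python sketch (not the
authors' released code) of one way the MCTS-style propagation could work: each node in
a search tree is one generation step (a plan, a paragraph draft, a refinement), a
finished trajectory's final quality score is averaged back into every node along its
path, and sibling nodes whose propagated values differ become stepwise
(context, chosen, rejected) pairs for DPO. The names `Node`, `update_path`, and
`preference_pairs`, the incremental-mean update, and the 0.1 margin are all
illustrative assumptions rather than the paper's exact formulation.

```python
from dataclasses import dataclass, field
from itertools import combinations


@dataclass
class Node:
    text: str                        # plan, paragraph draft, or refinement at this step
    children: list["Node"] = field(default_factory=list)
    value: float = 0.0               # running mean of propagated final-quality scores
    visits: int = 0


def update_path(path: list["Node"], leaf_score: float) -> None:
    """Propagate one finished trajectory's final quality score back along its path."""
    for node in path:
        node.visits += 1
        node.value += (leaf_score - node.value) / node.visits  # incremental mean


def preference_pairs(root: Node, margin: float = 0.1):
    """Turn sibling value gaps at each tree level into (context, chosen, rejected) pairs."""
    pairs, stack = [], [root]
    while stack:
        node = stack.pop()
        for a, b in combinations(node.children, 2):
            hi, lo = (a, b) if a.value >= b.value else (b, a)
            if hi.visits and lo.visits and hi.value - lo.value >= margin:
                pairs.append((node.text, hi.text, lo.text))
        stack.extend(node.children)
    return pairs


if __name__ == "__main__":
    # Two alternative plans for the same prompt, each rolled out to a finished draft.
    root = Node("prompt: write a long-form essay")
    plan_a, plan_b = Node("outline A"), Node("outline B")
    root.children = [plan_a, plan_b]
    draft_a = Node("full draft from outline A")
    draft_b = Node("full draft from outline B")
    plan_a.children, plan_b.children = [draft_a], [draft_b]

    # Hypothetical judge scores for the two finished drafts, standing in for the
    # final quality assessment described in the abstract.
    update_path([root, plan_a, draft_a], leaf_score=0.9)
    update_path([root, plan_b, draft_b], leaf_score=0.4)

    for ctx, chosen, rejected in preference_pairs(root):
        print(f"context={ctx!r}  chosen={chosen!r}  rejected={rejected!r}")
```

Run as-is, this yields a single preference pair at the planning level (outline A over
outline B), illustrating how a trajectory-level reward can supervise each intermediate
generation step rather than only the final text.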