
DocReward: A Document Reward Model for Structuring and Stylizing

October 13, 2025
Authors: Junpeng Liu, Yuzhong Zhao, Bowen Cao, Jiayu Ding, Yilin Jia, Tengchao Lv, Yupan Huang, Shaohan Huang, Nan Yang, Li Dong, Lei Cui, Tao Ge, Xun Wang, Huitian Jiao, Sun Mao, FNU Kartik, Si-Qing Chen, Wai Lam, Furu Wei
cs.AI

Abstract

Recent advances in agentic workflows have enabled the automation of tasks such as professional document generation. However, they primarily focus on textual quality, neglecting visual structure and style, which are crucial for readability and engagement. This gap arises mainly from the absence of suitable reward models to guide agentic workflows toward producing documents with stronger structural and stylistic quality. To address this, we propose DocReward, a document reward model that evaluates documents based on their structure and style. We construct a multi-domain dataset DocPair of 117K paired documents, covering 32 domains and 267 document types, each including a high- and low-professionalism document with identical content but different structure and style. This enables the model to evaluate professionalism comprehensively, and in a textual-quality-agnostic way. DocReward is trained using the Bradley-Terry loss to score documents, penalizing predictions that contradict the annotated ranking. To assess the performance of reward models, we create a test dataset containing document bundles ranked by well-educated human evaluators. Notably, DocReward outperforms GPT-4o and GPT-5 in accuracy by 30.6 and 19.4 percentage points, respectively, demonstrating its superiority over baselines. In an extrinsic evaluation of document generation, DocReward achieves a significantly higher win rate of 60.8%, compared to GPT-5's 37.7% win rate, demonstrating its utility in guiding generation agents toward producing human-preferred documents.
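For reference, the Bradley-Terry training objective mentioned above corresponds to a standard pairwise ranking loss. The notation below is a sketch in our own shorthand, not taken from the paper: $r_\theta$ denotes the reward model, and $d^+$ and $d^-$ denote the high- and low-professionalism documents in a pair from the dataset $\mathcal{D}$.

$$\mathcal{L}(\theta) = -\,\mathbb{E}_{(d^+,\,d^-)\sim\mathcal{D}}\left[\log \sigma\!\big(r_\theta(d^+) - r_\theta(d^-)\big)\right]$$

Here $\sigma$ is the sigmoid function, so the loss grows whenever the model scores the low-professionalism document above its high-professionalism counterpart, which is how predictions contradicting the annotated ranking are penalized.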