DocReward: A Document Reward Model for Structuring and Stylizing
October 13, 2025
Authors: Junpeng Liu, Yuzhong Zhao, Bowen Cao, Jiayu Ding, Yilin Jia, Tengchao Lv, Yupan Huang, Shaohan Huang, Nan Yang, Li Dong, Lei Cui, Tao Ge, Xun Wang, Huitian Jiao, Sun Mao, FNU Kartik, Si-Qing Chen, Wai Lam, Furu Wei
cs.AI
Abstract
Recent advances in agentic workflows have enabled the automation of tasks
such as professional document generation. However, they primarily focus on
textual quality, neglecting visual structure and style, which are crucial for
readability and engagement. This gap arises mainly from the absence of suitable
reward models to guide agentic workflows toward producing documents with
stronger structural and stylistic quality. To address this, we propose
DocReward, a document reward model that evaluates documents based on their
structure and style. We construct a multi-domain dataset DocPair of 117K paired
documents, covering 32 domains and 267 document types, each including a high-
and low-professionalism document with identical content but different structure
and style. This enables the model to evaluate professionalism comprehensively,
and in a textual-quality-agnostic way. DocReward is trained using the
Bradley-Terry loss to score documents, penalizing predictions that contradict
the annotated ranking. To assess the performance of reward models, we create a
test dataset containing document bundles ranked by well-educated human
evaluators. Notably, DocReward outperforms GPT-4o and GPT-5 in accuracy by 30.6
and 19.4 percentage points, respectively, demonstrating its superiority over
baselines. In an extrinsic evaluation of document generation, DocReward
achieves a significantly higher win rate of 60.8%, compared to GPT-5's 37.7%
win rate, demonstrating its utility in guiding generation agents toward
producing human-preferred documents.
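
For reference, the Bradley-Terry objective mentioned in the abstract is the standard pairwise ranking loss used to train reward models. The sketch below uses our own notation rather than the paper's exact formulation: r_θ(d) denotes the scalar score the model assigns to document d, and d⁺ / d⁻ denote the higher- and lower-professionalism documents in an annotated DocPair example.

    % Sketch of a Bradley-Terry pairwise ranking loss (notation assumed, not taken from the paper)
    \mathcal{L}(\theta) = -\,\mathbb{E}_{(d^{+},\,d^{-})}\left[\log \sigma\!\left(r_\theta(d^{+}) - r_\theta(d^{-})\right)\right]

Here σ is the logistic sigmoid. Minimizing this loss increases the score gap in favor of the document annotators ranked higher, which matches the abstract's description of penalizing predictions that contradict the annotated ranking.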