ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents
May 29, 2025
Authors: Feiteng Fang, Ting-En Lin, Yuchuan Wu, Xiong Liu, Xiang Huang, Dingwei Chen, Jing Ye, Haonan Zhang, Liang Zhu, Hamid Alinejad-Rokny, Min Yang, Fei Huang, Yongbin Li
cs.AI
Abstract
Role-Playing Language Agents (RPLAs) aim to simulate characters for realistic and engaging human-computer interactions. However, traditional reward models often struggle with scalability and adapting to subjective conversational preferences. We propose ChARM, a Character-based Act-adaptive Reward Model, addressing these challenges through two innovations: (1) an act-adaptive margin that significantly enhances learning efficiency and generalizability, and (2) a self-evolution mechanism leveraging large-scale unlabeled data to improve training coverage. Additionally, we introduce RoleplayPref, the first large-scale preference dataset specifically for RPLAs, featuring 1,108 characters, 13 subcategories, and 16,888 bilingual dialogues, alongside RoleplayEval, a dedicated evaluation benchmark. Experimental results show a 13% improvement over the conventional Bradley-Terry model in preference rankings. Furthermore, applying ChARM-generated rewards to preference learning techniques (e.g., direct preference optimization) achieves state-of-the-art results on CharacterEval and RoleplayEval. Code and dataset are available at https://github.com/calubkk/ChARM.
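The abstract names an act-adaptive margin inside a pairwise reward objective but does not spell out the loss here. As a rough illustration only, below is a minimal sketch of a margin-augmented Bradley-Terry loss, assuming the act-adaptive margin enters as a per-pair additive term; the function name `margin_bt_loss` and the way `margin` would be computed are illustrative assumptions, not ChARM's published formulation.

```python
import torch
import torch.nn.functional as F

def margin_bt_loss(r_chosen: torch.Tensor,
                   r_rejected: torch.Tensor,
                   margin: torch.Tensor) -> torch.Tensor:
    """Margin-augmented Bradley-Terry loss over a batch of preference pairs.

    r_chosen / r_rejected: scalar rewards for the preferred / dispreferred
    response (shape [batch]). `margin` is a per-pair margin; in ChARM it
    would presumably depend on the character's act, but the abstract does
    not specify how it is computed, so it is taken as an input here.
    """
    # Standard BT is -log sigmoid(r_w - r_l); the margin widens the reward
    # gap required before a pair counts as confidently ranked.
    return -F.logsigmoid(r_chosen - r_rejected - margin).mean()

# Toy usage with hypothetical reward values:
r_w = torch.tensor([2.1, 0.5])          # rewards of preferred responses
r_l = torch.tensor([1.0, 0.4])          # rewards of dispreferred responses
m = torch.tensor([0.5, 0.1])            # act-adaptive margins (illustrative)
print(margin_bt_loss(r_w, r_l, m))
```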