

RevisEval: Improving LLM-as-a-Judge via Response-Adapted References

October 7, 2024
Authors: Qiyuan Zhang, Yufei Wang, Tiezheng Yu, Yuxin Jiang, Chuhan Wu, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma
cs.AI

Abstract

With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing text generation quality in a wide range of tasks. However, a reliability gap still remains between LLM-as-a-Judge and human evaluation. One important reason is the lack of guided oracles in the evaluation process. Motivated by the role of the reference pervasively used in classic text evaluation, we introduce RevisEval, a novel text generation evaluation paradigm via response-adapted references. RevisEval is driven by the key observation that an ideal reference should maintain the necessary relevance to the response being evaluated. Specifically, RevisEval leverages the text revision capabilities of large language models (LLMs) to adaptively revise the response, then treats the revised text as the reference (response-adapted reference) for the subsequent evaluation. Extensive experiments demonstrate that RevisEval outperforms traditional reference-free and reference-based evaluation paradigms that use LLM-as-a-Judge across NLG tasks and open-ended instruction-following tasks. More importantly, our response-adapted references can further boost classical text metrics, e.g., BLEU and BERTScore, compared to traditional references, and even rival LLM-as-a-Judge. A detailed analysis is also conducted to confirm RevisEval's effectiveness in bias reduction, the impact of inference cost, and reference relevance.

