

RevisEval: Improving LLM-as-a-Judge via Response-Adapted References

October 7, 2024
Authors: Qiyuan Zhang, Yufei Wang, Tiezheng YU, Yuxin Jiang, Chuhan Wu, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma
cs.AI

Abstract

With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing text generation quality in a wide range of tasks. However, a reliability gap remains between LLM-as-a-Judge and human evaluation. One important reason is the lack of guided oracles in the evaluation process. Motivated by the role of references pervasively used in classic text evaluation, we introduce RevisEval, a novel text generation evaluation paradigm via response-adapted references. RevisEval is driven by the key observation that an ideal reference should maintain the necessary relevance to the response to be evaluated. Specifically, RevisEval leverages the text revision capabilities of large language models (LLMs) to adaptively revise the response, and then treats the revised text as the reference (response-adapted reference) for the subsequent evaluation. Extensive experiments demonstrate that RevisEval outperforms traditional reference-free and reference-based evaluation paradigms that use LLM-as-a-Judge across NLG tasks and open-ended instruction-following tasks. More importantly, our response-adapted references can further boost the classical text metrics, e.g., BLEU and BERTScore, compared to traditional references, and even rival LLM-as-a-Judge. A detailed analysis is also conducted to confirm RevisEval's effectiveness in bias reduction, the impact of inference cost, and reference relevance.
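The pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `revise_response` stands in for an LLM revision call, and `unigram_f1` is a toy token-overlap metric standing in for BLEU/BERTScore; all names here are hypothetical.

```python
# Minimal sketch of the RevisEval paradigm. Hypothetical helpers:
# the real method uses an LLM reviser and metrics such as BLEU/BERTScore.

def revise_response(instruction: str, response: str) -> str:
    # Placeholder for an LLM call that minimally edits `response` so it
    # better satisfies `instruction`; here a trivial typo fix stands in.
    return response.replace("Pariss", "Paris")

def unigram_f1(reference: str, candidate: str) -> float:
    # Toy token-overlap F1, standing in for BLEU/BERTScore.
    ref, cand = set(reference.split()), set(candidate.split())
    overlap = len(ref & cand)
    if not overlap:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def reviseval_score(instruction: str, response: str) -> float:
    # 1) adaptively revise the response, 2) treat the revision as the
    # reference, 3) score the original response against that reference.
    adapted_reference = revise_response(instruction, response)
    return unigram_f1(adapted_reference, response)

# A flawed response scores below a correct one under its adapted reference.
flawed = reviseval_score("Name the capital of France.",
                         "The capital of France is Pariss .")
clean = reviseval_score("Name the capital of France.",
                        "The capital of France is Paris .")
```

The key design point mirrors the abstract: because the reference is derived from the response itself, it stays maximally relevant to what is being judged, so even surface-overlap metrics become discriminative.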

