The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation

August 14, 2023
作者: Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat
cs.AI

Abstract

Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.
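To make the error-annotation idea concrete, here is a minimal Python sketch of an AutoMQM-style prompt and scorer. The `complete` stub, the prompt wording, and the major=5 / minor=1 severity weights follow common MQM practice and are illustrative assumptions, not the paper's exact template or scoring.

```python
import re

# Hypothetical stand-in for an LLM API call; access to a model such as
# PaLM-2 is assumed to be wrapped elsewhere by a real client.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

# Illustrative AutoMQM-style prompt: ask the model to mark error spans
# with a category and severity instead of predicting a single score.
PROMPT_TEMPLATE = """You are an expert translation annotator.
Source ({src_lang}): {source}
Translation ({tgt_lang}): {translation}

List each translation error on its own line as:
<error span> -- <category> -- <major|minor>
Write "no errors" if the translation is perfect."""

# Common MQM weighting (major=5, minor=1), as used in WMT-style
# evaluations; the paper's exact scoring scheme may differ.
SEVERITY_WEIGHTS = {"major": 5, "minor": 1}

def automqm_score(source, translation, src_lang="en", tgt_lang="de"):
    """Prompt the LLM for error spans, then derive an MQM-style score."""
    prompt = PROMPT_TEMPLATE.format(
        src_lang=src_lang, tgt_lang=tgt_lang,
        source=source, translation=translation)
    output = complete(prompt)
    errors = []
    for line in output.splitlines():
        m = re.match(r"(.+?) -- (.+?) -- (major|minor)", line.strip())
        if m:
            errors.append({"span": m.group(1), "category": m.group(2),
                           "severity": m.group(3)})
    # Higher (less negative) is better; a perfect translation scores 0.
    score = -sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)
    return score, errors
```

In the paper, the prompt is additionally conditioned on labeled data via in-context examples of MQM-annotated translations; the line-oriented output format and regex parsing above are only one plausible convention for recovering error spans from the model's text output.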