Do GPTs Produce Less Literal Translations?

May 26, 2023
作者: Vikas Raunak, Arul Menezes, Matt Post, Hany Hassan Awadallah
cs.AI

Abstract

Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.
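The abstract does not spell out the monotonicity measure. As an illustration only, one common proxy scores a word alignment by the fraction of crossing link pairs: a perfectly monotone (word-order-preserving, hence more literal) translation has no crossings. This minimal sketch assumes alignments are given as (source index, target index) pairs; the function name and exact formulation are assumptions, not the paper's metric:

```python
from itertools import combinations

def nonmonotonicity(alignment):
    """Fraction of crossing link pairs in a word alignment.

    `alignment` is a list of (src_idx, tgt_idx) pairs.
    Returns 0.0 for a perfectly monotone alignment and
    1.0 when every pair of links crosses.
    """
    pairs = list(combinations(alignment, 2))
    if not pairs:
        return 0.0
    # Two links cross when their source and target orders disagree.
    crossings = sum(
        1 for (s1, t1), (s2, t2) in pairs if (s1 - s2) * (t1 - t2) < 0
    )
    return crossings / len(pairs)

# Monotone alignment: no crossings
print(nonmonotonicity([(0, 0), (1, 1), (2, 2)]))  # 0.0
# Fully reordered alignment: every link pair crosses
print(nonmonotonicity([(0, 2), (1, 1), (2, 0)]))  # 1.0
```

Under a measure of this kind, heavier reordering (as in non-literal or idiomatic renderings) yields a higher score, which is the intuition behind using monotonicity as a literalness signal.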