

Do GPTs Produce Less Literal Translations?

May 26, 2023
作者: Vikas Raunak, Arul Menezes, Matt Post, Hany Hassan Awadallah
cs.AI

Abstract

Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.
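As a concrete illustration of the kind of monotonicity-based measure mentioned in the abstract, the sketch below scores a word alignment by the fraction of crossing alignment links: a fully monotonic (word-order-preserving) alignment scores 0.0, and heavier reordering pushes the score toward 1.0. This is a toy proxy for (non-)literalness written for this summary, assuming alignments are given as (source index, target index) pairs from some word aligner; it is not the paper's exact metric or implementation.

    # Toy illustration (not the paper's metric): how monotonic is a word alignment?
    # Fewer crossing links suggests a more literal, word-order-preserving translation.
    from itertools import combinations

    def alignment_crossings(alignment: list[tuple[int, int]]) -> float:
        """Fraction of alignment-link pairs that cross each other.

        `alignment` is a list of (source_index, target_index) links.
        Returns 0.0 for a fully monotonic alignment and approaches 1.0
        as word reordering increases.
        """
        pairs = list(combinations(alignment, 2))
        if not pairs:
            return 0.0
        crossings = sum(
            1
            for (s1, t1), (s2, t2) in pairs
            if (s1 - s2) * (t1 - t2) < 0  # the two links point in opposite directions
        )
        return crossings / len(pairs)

    # Example: a monotonic alignment versus a fully reversed one.
    print(alignment_crossings([(0, 0), (1, 1), (2, 2)]))  # 0.0 (literal word order)
    print(alignment_crossings([(0, 2), (1, 1), (2, 0)]))  # 1.0 (fully reordered)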