
Zero-shot Cross-Lingual Transfer for Synthetic Data Generation in Grammatical Error Detection

July 16, 2024
作者: Gaetan Lopez Latouche, Marc-André Carbonneau, Ben Swanson
cs.AI

Abstract

Grammatical Error Detection (GED) methods rely heavily on human-annotated error corpora. However, these annotations are unavailable in many low-resource languages. In this paper, we investigate GED in this context. Leveraging the zero-shot cross-lingual transfer capabilities of multilingual pre-trained language models, we train a model using data from a diverse set of languages to generate synthetic errors in other languages. These synthetic error corpora are then used to train a GED model. Specifically, we propose a two-stage fine-tuning pipeline where the GED model is first fine-tuned on multilingual synthetic data from target languages, followed by fine-tuning on human-annotated GED corpora from source languages. This approach outperforms current state-of-the-art annotation-free GED methods. We also analyse the errors produced by our method and other strong baselines, finding that our approach produces errors that are more diverse and more similar to human errors.
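The two-stage fine-tuning pipeline described in the abstract can be sketched in outline. This is a minimal, hypothetical illustration of the training order only — the class and function names, corpus identifiers, and the `GEDModel` stand-in are all assumptions, not the authors' actual code or data; in practice each stage would fine-tune a multilingual pre-trained encoder on a token-level error-detection objective.

```python
# Hypothetical sketch of the paper's two-stage fine-tuning order for GED.
# All names (GEDModel, fine_tune, corpus labels) are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class GEDModel:
    """Stand-in for a multilingual pre-trained language model."""
    history: list = field(default_factory=list)

    def fine_tune(self, corpus_name: str) -> "GEDModel":
        # In practice: token-level classification (correct vs. erroneous).
        self.history.append(corpus_name)
        return self


def two_stage_pipeline(model: GEDModel,
                       synthetic_target_corpora: list,
                       human_source_corpora: list) -> GEDModel:
    # Stage 1: fine-tune on synthetic errors generated in the target
    # languages by the zero-shot cross-lingual error-generation model.
    for corpus in synthetic_target_corpora:
        model.fine_tune(corpus)
    # Stage 2: further fine-tune on human-annotated GED corpora
    # from high-resource source languages.
    for corpus in human_source_corpora:
        model.fine_tune(corpus)
    return model


if __name__ == "__main__":
    model = two_stage_pipeline(
        GEDModel(),
        synthetic_target_corpora=["synthetic-target-lang-1",
                                  "synthetic-target-lang-2"],
        human_source_corpora=["human-annotated-source-lang"],
    )
    # Synthetic target-language data is always consumed before the
    # human-annotated source-language data.
    print(model.history)
```

The key point the sketch encodes is the ordering: synthetic target-language data comes first, so the final fine-tuning steps on human-annotated (source-language) data dominate the model's final state while the target-language signal is still retained.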

