Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting

May 11, 2023
Authors: Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, Furu Wei
cs.AI

Abstract

Large language models (LLMs) demonstrate impressive multilingual capability, but their performance varies substantially across different languages. In this work, we introduce a simple yet effective method, called cross-lingual-thought prompting (XLT), to systematically improve the multilingual capability of LLMs. Specifically, XLT is a generic template prompt that stimulates cross-lingual and logical reasoning skills to enhance task performance across languages. We conduct comprehensive evaluations on 7 typical benchmarks related to reasoning, understanding, and generation tasks, covering both high-resource and low-resource languages. Experimental results show that XLT not only remarkably enhances the performance of various multilingual tasks but also significantly reduces the gap between the average performance and the best performance of each task in different languages. Notably, XLT brings over 10 points of average improvement in arithmetic reasoning and open-domain question-answering tasks.
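To make the idea concrete, below is a minimal sketch of what an XLT-style template prompt could look like. The overall structure (role assignment, restating the request in English, step-by-step reasoning in English, and a fixed answer format) follows the abstract's description of XLT as a generic template; the exact wording, the `build_xlt_prompt` helper, and its parameters are illustrative assumptions, not the authors' verbatim template.

```python
# Illustrative sketch of a cross-lingual-thought (XLT) style prompt.
# The wording and field names below are assumptions for illustration;
# they approximate the template structure described in the abstract
# (role assignment, cross-lingual restatement, step-by-step reasoning,
# output formatting), not the paper's exact prompt.

def build_xlt_prompt(task_name: str, task_language: str, request: str) -> str:
    """Compose a generic XLT-style prompt that asks the model to restate a
    non-English request in English, reason step by step, then answer."""
    return (
        f"I want you to act as a {task_name} expert for {task_language}.\n"
        f"Request: {request}\n"
        "You should retell the request in English.\n"
        "You should solve the request step by step in English.\n"
        "You should tell me the final answer in this format: 'Answer:'."
    )


if __name__ == "__main__":
    # Example: an arithmetic word problem posed in German (a hypothetical input).
    prompt = build_xlt_prompt(
        task_name="arithmetic reasoning",
        task_language="German",
        request="Anna hat 3 Äpfel und kauft 4 weitere. Wie viele Äpfel hat sie?",
    )
    print(prompt)
```

The intent of such a template is that the model performs its intermediate reasoning in English, where LLMs tend to be strongest, regardless of the language of the input, which is consistent with how the abstract frames the gains on low-resource languages.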