
When Life Gives You Samples: The Benefits of Scaling up Inference Compute for Multilingual LLMs

June 25, 2025
Authors: Ammar Khairi, Daniel D'souza, Ye Shen, Julia Kreutzer, Sara Hooker
cs.AI

Abstract

Recent advancements in large language models (LLMs) have shifted focus toward scaling inference-time compute, improving performance without retraining the model. A common approach is to sample multiple outputs in parallel and select one of these as the final output. However, work to date has focused on English and a handful of domains such as math and code. In contrast, we are most interested in techniques that generalize across open-ended tasks, formally verifiable tasks, and across languages. In this work, we study how to robustly scale inference-time compute for open-ended generative tasks in a multilingual, multi-task setting. Our findings show that both the temperature-based sampling strategy and the selection strategy must be adapted to account for diverse domains and varied language settings. We evaluate existing selection methods, revealing that strategies effective in English often fail to generalize across languages. We propose novel sampling and selection strategies specifically adapted for multilingual and multi-task inference scenarios, and show that they yield notable gains across languages and tasks. In particular, our combined sampling and selection methods lead to an average +6.8 jump in win rates for our 8B models on m-ArenaHard-v2.0 prompts against proprietary models such as Gemini. At larger scale, Command-A (a 111B model) equipped with our methods shows a +9.0 improvement in win rates on the same benchmark with just five samples against single-sample decoding, a substantial increase at minimal cost. Our results underscore the need for language- and task-aware approaches to inference-time compute, aiming to democratize performance improvements in underrepresented languages.
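To make the sample-then-select recipe described in the abstract concrete, here is a minimal, illustrative sketch: draw several candidates at varied temperatures and keep the one a scoring function prefers. The names `sample_then_select`, `generate_fn`, and `score_fn` are hypothetical placeholders for a model's sampling call and a reward/judge scorer; this is not the paper's actual sampling or selection method, which the authors adapt specifically for multilingual, multi-task settings.

```python
import random
from typing import Callable, List, Optional


def sample_then_select(
    prompt: str,
    generate_fn: Callable[[str, float], str],   # hypothetical: model sampling call
    score_fn: Callable[[str, str], float],      # hypothetical: reward-model / judge scorer
    n: int = 5,
    temperatures: Optional[List[float]] = None,
) -> str:
    """Draw n candidate completions and return the one the scorer prefers.

    Illustrative sketch of generic best-of-n inference-time scaling only;
    the paper's own sampling and selection strategies differ.
    """
    # Vary the temperature across samples rather than reusing one fixed value,
    # reflecting the temperature-based sampling diversity the abstract mentions.
    if temperatures is None:
        temperatures = [round(random.uniform(0.3, 1.0), 2) for _ in range(n)]

    # In practice these calls would be issued in parallel; a plain loop keeps
    # the sketch self-contained.
    candidates = [generate_fn(prompt, t) for t in temperatures]

    # Keep the candidate the scoring function rates highest for this prompt.
    return max(candidates, key=lambda c: score_fn(prompt, c))
```

As a usage note, `generate_fn` could wrap any chat model's sampling endpoint and `score_fn` a reward model or LLM judge; with `n = 5`, this mirrors the five-sample budget quoted for Command-A above, though the gains reported in the paper come from its adapted strategies rather than this plain best-of-n baseline.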