

When Punctuation Matters: A Large-Scale Comparison of Prompt Robustness Methods for LLMs

August 15, 2025
Authors: Mikhail Seleznyov, Mikhail Chaichuk, Gleb Ershov, Alexander Panchenko, Elena Tutubalina, Oleg Somov
cs.AI

Abstract

Large Language Models (LLMs) are highly sensitive to subtle, non-semantic variations in prompt phrasing and formatting. In this work, we present the first systematic evaluation of five methods for improving prompt robustness within a unified experimental framework. We benchmark these techniques on eight models from the Llama, Qwen, and Gemma families across 52 tasks from the Natural Instructions dataset. Our evaluation covers robustness methods from both the fine-tuning and in-context learning paradigms, and tests their generalization against multiple types of distribution shift. Finally, we extend our analysis to GPT-4.1 and DeepSeek V3 to assess frontier models' current robustness to format perturbations. Our findings offer actionable insights into the relative effectiveness of these robustness methods, enabling practitioners to make informed decisions when aiming for stable and reliable LLM performance in real-world applications. Code: https://github.com/AIRI-Institute/when-punctuation-matters.
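To make the notion of "non-semantic variations in prompt phrasing and formatting" concrete, the sketch below enumerates prompt variants that differ only in surface format (label casing, punctuation, and field spacing) while the task content stays identical. This is a hypothetical illustration, not code from the paper's repository; all names are assumptions.

```python
# Minimal sketch: the same question rendered under several prompt formats
# that differ only in punctuation, casing, and spacing (non-semantic axes).
from itertools import product

QUESTION = "What is the capital of France?"

# Three illustrative format axes, two options each.
casings = [str.title, str.upper]   # "Question:" vs. "QUESTION:"
separators = [": ", " - "]         # label/value separator
spacings = ["\n", "\n\n"]          # spacing between fields

def render(question, casing, sep, space):
    """Render one prompt variant; semantic content is identical across variants."""
    return f"{casing('question')}{sep}{question}{space}{casing('answer')}{sep}"

variants = [render(QUESTION, c, s, t)
            for c, s, t in product(casings, separators, spacings)]

# 2 * 2 * 2 = 8 semantically equivalent prompts that differ only in format.
print(len(variants))
```

A robustness evaluation in the spirit of the paper would score a model on each such variant and report the spread of accuracies, rather than a single number for one hand-picked format.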