
When Punctuation Matters: A Large-Scale Comparison of Prompt Robustness Methods for LLMs

August 15, 2025
作者: Mikhail Seleznyov, Mikhail Chaichuk, Gleb Ershov, Alexander Panchenko, Elena Tutubalina, Oleg Somov
cs.AI

Abstract
Large Language Models (LLMs) are highly sensitive to subtle, non-semantic variations in prompt phrasing and formatting. In this work, we present the first systematic evaluation of five methods for improving prompt robustness within a unified experimental framework. We benchmark these techniques on eight models from the Llama, Qwen, and Gemma families across 52 tasks from the Natural Instructions dataset. Our evaluation covers robustness methods from both the fine-tuning and in-context learning paradigms and tests their generalization under multiple types of distribution shift. Finally, we extend our analysis to GPT-4.1 and DeepSeek V3 to assess frontier models' current robustness to format perturbations. Our findings offer actionable insights into the relative effectiveness of these robustness methods, enabling practitioners to make informed decisions when aiming for stable and reliable LLM performance in real-world applications. Code: https://github.com/AIRI-Institute/when-punctuation-matters.
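To make the notion of "non-semantic format perturbation" concrete, the sketch below generates prompt variants that differ only in descriptor casing and separator punctuation while the task content stays identical. This is a hypothetical illustration (the field names, templates, and perturbation axes here are invented for this example and are not taken from the paper's codebase):

```python
# Hypothetical sketch: the same QA task rendered under different
# non-semantic formats (descriptor casing and separator punctuation).
# A robustness evaluation would compare model accuracy across such variants.
from itertools import product

QUESTION = "What is the capital of France?"

# Perturbation axes (illustrative, not the paper's exact grid):
CASINGS = [str.title, str.upper, str.lower]      # Question / QUESTION / question
SEPARATORS = [": ", " - ", ":\n"]                # descriptor-to-content separator


def render(casing, sep):
    """Render one prompt variant; only the format changes, never the content."""
    return f"{casing('question')}{sep}{QUESTION}\n{casing('answer')}{sep}"


# 3 casings x 3 separators = 9 semantically identical prompt formats.
variants = [render(c, s) for c, s in product(CASINGS, SEPARATORS)]
```

Scoring a model on each variant and reporting the spread (rather than a single number) is the kind of measurement that exposes the format sensitivity the paper benchmarks.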