
PromptBridge: Cross-Model Prompt Transfer for Large Language Models

December 1, 2025
Authors: Yaxuan Wang, Quan Liu, Zhenting Wang, Zichao Li, Wei Wei, Yang Liu, Yujia Bao
cs.AI

Abstract

Large language models (LLMs) underpin applications in code generation, mathematical reasoning, and agent-based workflows. In practice, systems access LLMs via commercial APIs or open-source deployments, and the model landscape (e.g., GPT, Claude, Llama) evolves rapidly. This rapid evolution forces frequent model switches, driven by capability, cost, deployment constraints, and privacy. Yet prompts are highly model-sensitive: reusing a prompt engineered for one model on another often yields substantially worse performance than a prompt optimized for the target model. We term this phenomenon Model Drifting. Through extensive empirical analysis across diverse LLM configurations, we show that model drifting is both common and severe. To address this challenge, we introduce PromptBridge, a training-free framework that preserves prompt effectiveness under model switches, enabling cross-model prompt transfer without costly per-task or per-model re-optimization. PromptBridge requires only a small set of alignment tasks for calibration. It first applies Model-Adaptive Reflective Prompt Evolution (MAP-RPE) to obtain task- and model-specific optimal prompts via iterative reflective refinement and quantitative evaluation. From the resulting calibrated prompt pairs for the source and target models, PromptBridge learns a cross-model prompt mapping. At test time, i.e., on an unseen task, the mapping takes a source-model prompt and directly produces an optimized prompt for the target model. Experiments in single-agent and multi-agent settings show that PromptBridge consistently improves downstream accuracy while reducing migration effort. The code will be available soon.
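
To make the described pipeline concrete, below is a minimal sketch of the three stages the abstract outlines: calibrating per-model prompts on alignment tasks with MAP-RPE-style reflective refinement, learning a source-to-target prompt mapping from the calibrated pairs, and applying that mapping to a new source prompt at test time. The paper's code is not yet released, so every name here (`map_rpe`, `learn_prompt_mapping`, the in-context form of the mapping) is a hypothetical illustration of the described flow, not the authors' implementation.

```python
# Hypothetical sketch of the PromptBridge flow described in the abstract.
# The real MAP-RPE procedure and mapping learner are not public; this only
# mirrors the three described stages with mock, in-context components.

from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # model interface: prompt in, completion out
Scorer = Callable[[LLM, str, str], float]  # quantitative eval: (model, task, prompt) -> score


def map_rpe(model: LLM, task: str, seed_prompt: str,
            score: Scorer, iters: int = 5) -> str:
    """MAP-RPE, simplified: iteratively ask the model to critique and
    rewrite its own prompt, keeping a candidate only if its measured
    score on the task improves (reflective refinement + evaluation)."""
    best, best_score = seed_prompt, score(model, task, seed_prompt)
    for _ in range(iters):
        candidate = model(
            f"Task: {task}\nCurrent prompt:\n{best}\n\n"
            "Critique the prompt above and return only an improved version."
        )
        s = score(model, task, candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best


def learn_prompt_mapping(pairs: List[Tuple[str, str]],
                         target: LLM) -> Callable[[str], str]:
    """Learn a cross-model prompt mapping from calibrated
    (source prompt, target prompt) pairs. Here the 'mapping' is purely
    in-context: the pairs are shown as few-shot examples and the target
    model rewrites each new source prompt in the same style."""
    examples = "\n\n".join(
        f"Source prompt:\n{s}\nTarget prompt:\n{t}" for s, t in pairs
    )

    def transfer(source_prompt: str) -> str:
        # Test time: map a prompt for an unseen task without re-optimizing.
        return target(
            "Following the calibrated pairs below, rewrite the final source "
            f"prompt for the target model.\n\n{examples}\n\n"
            f"Source prompt:\n{source_prompt}\nTarget prompt:"
        )

    return transfer
```

Under these assumptions, calibration would run MAP-RPE once per alignment task on each model, e.g. `pairs = [(map_rpe(src, t, p, score), map_rpe(tgt, t, p, score)) for t, p in alignment_tasks]`, after which `learn_prompt_mapping(pairs, tgt)` yields a `transfer` function that can be reused for every new source prompt, avoiding per-task re-optimization on the target model.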