

PromptBridge: Cross-Model Prompt Transfer for Large Language Models

December 1, 2025
Authors: Yaxuan Wang, Quan Liu, Zhenting Wang, Zichao Li, Wei Wei, Yang Liu, Yujia Bao
cs.AI

Abstract

Large language models (LLMs) underpin applications in code generation, mathematical reasoning, and agent-based workflows. In practice, systems access LLMs via commercial APIs or open-source deployments, and the model landscape (e.g., GPT, Claude, Llama) evolves rapidly. This rapid evolution forces frequent model switches driven by capability, cost, deployment constraints, and privacy. Yet prompts are highly model-sensitive: reusing a prompt engineered for one model on another often yields substantially worse performance than a prompt optimized for the target model. We term this phenomenon Model Drifting. Through extensive empirical analysis across diverse LLM configurations, we show that model drifting is both common and severe. To address this challenge, we introduce PromptBridge, a training-free framework that preserves prompt effectiveness under model switches, enabling cross-model prompt transfer without costly per-task or per-model re-optimization. PromptBridge requires only a small set of alignment tasks for calibration. It first applies Model-Adaptive Reflective Prompt Evolution (MAP-RPE) to obtain task- and model-specific optimal prompts via iterative reflective refinement and quantitative evaluation. Using the resulting calibrated prompt pairs for the source and target models, PromptBridge learns a cross-model prompt mapping. At test time, i.e., for an unseen task, given a source-model prompt, this mapping directly produces an optimized prompt for the target model. Experiments in single-agent and multi-agent settings show that PromptBridge consistently improves downstream accuracy while reducing migration effort. The code will be available soon.
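
The abstract's two-stage recipe can be pictured with a short sketch. The Python below is a minimal illustration under stated assumptions, not the paper's implementation: the chat(model, prompt) helper, all prompt wording, and the reading of "learns a cross-model prompt mapping" as in-context examples (consistent with the framework being training-free) are hypothetical, since the code has not yet been released.

    # Minimal sketch of the PromptBridge pipeline described in the abstract.
    # Function names, prompt wording, and the chat() helper are hypothetical;
    # the paper's actual MAP-RPE scoring and mapping procedure are not public.
    from typing import Callable, List, Tuple

    def chat(model: str, prompt: str) -> str:
        # Assumed helper: one-shot request to a named model, returning its
        # reply text (a thin wrapper over whatever LLM API is in use).
        raise NotImplementedError("wire this to your LLM provider")

    def map_rpe(model: str, task_desc: str, seed_prompt: str,
                evaluate: Callable[[str], float], iters: int = 5) -> str:
        """Model-Adaptive Reflective Prompt Evolution (MAP-RPE), sketched:
        iteratively ask the model to critique and rewrite the current prompt,
        keeping a rewrite only if its quantitative score improves."""
        best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
        for _ in range(iters):
            candidate = chat(model,
                f"Task: {task_desc}\nCurrent prompt: {best_prompt}\n"
                f"Score: {best_score:.3f}\n"
                "Reflect on this prompt's weaknesses for this model, "
                "then output an improved prompt only.")
            score = evaluate(candidate)
            if score > best_score:  # quantitative evaluation gates each step
                best_prompt, best_score = candidate, score
        return best_prompt

    def learn_prompt_mapping(pairs: List[Tuple[str, str]],
                             mapper_model: str) -> Callable[[str], str]:
        """Cross-model mapping from calibrated (source, target) prompt pairs.
        Here the pairs serve as in-context examples, one plausible
        training-free reading of 'learns a mapping'."""
        examples = "\n\n".join(
            f"Source prompt:\n{s}\nTarget prompt:\n{t}" for s, t in pairs)
        def transfer(source_prompt: str) -> str:
            return chat(mapper_model,
                f"Rewrite prompts written for a source LLM so they suit a "
                f"target LLM.\nExamples:\n{examples}\n\n"
                f"Source prompt:\n{source_prompt}\nTarget prompt:")
        return transfer

In this reading, map_rpe would run once per alignment task on both the source and target models during calibration to produce the prompt pairs; at test time, only transfer() is called on the unseen task's source prompt, avoiding per-task re-optimization.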