Learning to Route LLMs from Bandit Feedback: One Policy, Many Trade-offs

October 8, 2025
作者: Wang Wei, Tiankai Yang, Hongjie Chen, Yue Zhao, Franck Dernoncourt, Ryan A. Rossi, Hoda Eldardiry
cs.AI

Abstract

Efficient use of large language models (LLMs) is critical for deployment at scale: without adaptive routing, systems either overpay for strong models or risk poor performance from weaker ones. Selecting the right LLM for each query is fundamentally an online decision problem: models differ in strengths, prices fluctuate, and users value accuracy and cost differently. Yet most routers are trained offline with labels for all candidate models, an assumption that breaks in deployment, where only the outcome of the chosen model is observed. We bridge this gap with BaRP, a Bandit-feedback Routing with Preferences approach that trains under the same partial-feedback restriction as deployment, while supporting preference-tunable inference: operators can dial the performance/cost trade-off at test time without retraining. Framed as a contextual bandit over prompt features and a user preference vector, our method simulates an online feedback setting during training and adapts its routing decisions to each new prompt, rather than depending on full-information offline supervision. Comprehensive experiments show that our method consistently outperforms strong offline routers by at least 12.46% and the largest LLM by at least 2.45%, and generalizes robustly to unseen tasks.
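The paper's implementation is not reproduced here, but the abstract's framing (a contextual bandit over prompt features concatenated with a user preference vector, learning only from the chosen model's outcome) can be illustrated with a minimal LinUCB-style sketch. Everything below is an assumption for illustration: the class and function names, the linear reward model per candidate LLM, and the `preference_weighted_reward` helper are not the paper's actual BaRP method.

```python
import numpy as np


class PreferenceConditionedRouter:
    """Minimal LinUCB-style contextual bandit over candidate LLMs (illustrative sketch).

    Context = [prompt features ; user preference vector]. The reward is observed
    only for the model that was actually routed to (bandit feedback).
    """

    def __init__(self, n_models: int, dim: int, alpha: float = 1.0):
        self.n_models = n_models
        self.alpha = alpha                              # exploration strength
        # One ridge-regression head (A, b) per candidate model ("arm").
        self.A = [np.eye(dim) for _ in range(n_models)]
        self.b = [np.zeros(dim) for _ in range(n_models)]

    def select(self, prompt_feats: np.ndarray, preference: np.ndarray):
        """Pick a model for this prompt under the given accuracy/cost preference."""
        x = np.concatenate([prompt_feats, preference])
        scores = []
        for a in range(self.n_models):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]
            # Upper confidence bound on the preference-weighted reward for arm a.
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores)), x

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        """Partial-feedback update: only the chosen arm's outcome is used."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x


def preference_weighted_reward(accuracy: float, cost: float, preference: np.ndarray) -> float:
    """Scalar reward trading off quality against (normalized) cost.

    `preference = (w_acc, w_cost)` is the user-supplied trade-off vector.
    """
    w_acc, w_cost = preference
    return w_acc * accuracy - w_cost * cost
```

In this sketch, preference-tunable inference corresponds to reusing the same learned per-model parameters while swapping the preference vector at test time: a cost-sensitive preference lowers the scores of expensive models and an accuracy-heavy preference raises them, changing the routing decision without any retraining.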