
Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

February 18, 2026
Authors: Wenxuan Ding, Nicholas Tomlin, Greg Durrett
cs.AI

Abstract

LLMs are increasingly being used for complex problems that are not necessarily resolved in a single response, but instead require interacting with an environment to acquire information. In these scenarios, LLMs must reason about an inherent cost-uncertainty tradeoff when deciding when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs and then explore their environment more effectively. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior, which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), in which we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even when both the baseline and CTA undergo RL training. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
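To make the cost-uncertainty tradeoff concrete, the sketch below (our illustration, not the paper's implementation) compares the expected cost of committing to an answer immediately against the expected cost of paying for one more exploration step, such as running a test, given a prior belief that the current answer is correct. The function names, cost values, and the `info_gain` belief-update heuristic are all illustrative assumptions.

```python
# Minimal sketch of the cost-uncertainty tradeoff from the abstract.
# An agent holds a prior belief p that its current answer is correct and
# compares the expected cost of committing now against the expected cost
# of one more exploration step (e.g., writing a test). All names and
# numeric values here are illustrative assumptions, not the paper's method.

def expected_cost_commit(p_correct: float, cost_mistake: float) -> float:
    """Expected cost of answering immediately: pay cost_mistake if wrong."""
    return (1.0 - p_correct) * cost_mistake

def expected_cost_explore(p_correct: float, cost_step: float,
                          cost_mistake: float, info_gain: float) -> float:
    """Expected cost of one more step: pay cost_step now, then commit with
    an updated (assumed higher) belief. info_gain is a crude stand-in for
    how much a test sharpens the agent's calibration."""
    p_after = min(1.0, p_correct + info_gain)
    return cost_step + (1.0 - p_after) * cost_mistake

def should_explore(p_correct: float, cost_step: float = 1.0,
                   cost_mistake: float = 10.0, info_gain: float = 0.3) -> bool:
    """Explore iff one more step has lower expected cost than committing."""
    return (expected_cost_explore(p_correct, cost_step, cost_mistake, info_gain)
            < expected_cost_commit(p_correct, cost_mistake))

if __name__ == "__main__":
    # With these assumed costs, a low-confidence agent tests first, while a
    # well-calibrated, confident agent commits rather than paying for a test.
    for p in (0.5, 0.8, 0.95, 0.99):
        action = "test" if should_explore(p) else "commit"
        print(f"prior belief {p:.2f} -> {action}")
```

Under these assumed costs the decision flips from "test" to "commit" around p = 0.9, which is the kind of threshold behavior an explicit prior lets the agent reason about rather than guess.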