From Token to Action: State Machine Reasoning to Mitigate Overthinking in Information Retrieval
May 29, 2025
Authors: Dohyeon Lee, Yeonseok Jeong, Seung-won Hwang
cs.AI
Abstract
Chain-of-Thought (CoT) prompting enables complex reasoning in large language
models (LLMs), including applications in information retrieval (IR). However,
it often leads to overthinking, where models produce excessively long and
semantically redundant traces with little or no benefit. We identify two key
challenges in IR: redundant trajectories that revisit similar states and
misguided reasoning that diverges from user intent. To address these, we
propose State Machine Reasoning (SMR), a transition-based reasoning framework
composed of discrete actions (Refine, Rerank, Stop) that support early stopping
and fine-grained control. Experiments on the BEIR and BRIGHT benchmarks show
that SMR improves retrieval performance (nDCG@10) by 3.4% while reducing token
usage by 74.4%. It generalizes across LLMs and retrievers without requiring
task-specific tuning, offering a practical alternative to conventional CoT
reasoning. The code and details are available at https://github.com/ldilab/SMR.
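To make the transition-based framing concrete, the following is a minimal sketch of a retrieval loop driven by the three discrete actions the abstract names (Refine, Rerank, Stop). It is not the authors' implementation; all function and class names (choose_action, refine_query, rerank_docs, retrieve, State, smr_loop) are hypothetical placeholders, and the actual code is available at https://github.com/ldilab/SMR.

```python
# Minimal sketch of a transition-based retrieval loop with discrete actions.
# All names here are hypothetical placeholders, not the SMR authors' API;
# see the SMR repository for the real implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class State:
    query: str                                      # current (possibly refined) query
    docs: List[str] = field(default_factory=list)   # current ranked document list


def choose_action(state: State) -> str:
    """Placeholder policy: an LLM would pick one of 'REFINE', 'RERANK', 'STOP'."""
    ...


def refine_query(state: State) -> str:
    """Placeholder: rewrite the query to stay aligned with the user's intent."""
    ...


def rerank_docs(state: State) -> List[str]:
    """Placeholder: reorder the currently retrieved documents."""
    ...


def retrieve(query: str, k: int = 10) -> List[str]:
    """Placeholder: any first-stage retriever (sparse or dense)."""
    ...


def smr_loop(query: str, max_steps: int = 8) -> List[str]:
    state = State(query=query, docs=retrieve(query))
    for _ in range(max_steps):
        action = choose_action(state)
        if action == "REFINE":
            state.query = refine_query(state)
            state.docs = retrieve(state.query)
        elif action == "RERANK":
            state.docs = rerank_docs(state)
        else:  # "STOP": early termination avoids redundant reasoning steps
            break
    return state.docs
```

The point of the sketch is the control structure: instead of emitting one long free-form reasoning trace, each step commits to a single discrete state transition, and the Stop action provides the early-exit mechanism that the paper credits for the large reduction in token usage.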