

Graph-Based Chain-of-Thought Pruning for Reducing Redundant Reflections in Reasoning LLMs

April 7, 2026
作者: Hongyuan Yuan, Xinran He, Run Shao, Bolei He, Xianwei Xue, Mengke Chen, Qiutong Pan, Haiwei Wang, Haifeng Li
cs.AI

Abstract

Extending chain-of-thought (CoT) reasoning through reinforcement learning (RL) has been widely used to enhance the reasoning capabilities of LLMs. However, due to the sparsity of reward signals, it can also induce undesirable thinking patterns such as overthinking, i.e., generating redundant intermediate reasoning content. In this work, we argue that a major source of such redundancy is inefficient reflection, which often manifests in two problematic patterns: Indiscriminate Reflection, where the model performs broad, low-impact checks throughout reasoning, and Repetitive Reflection, where it repeatedly re-verifies an already established conclusion. To address this, we introduce a graph-based CoT optimization framework. Specifically, we convert each linear CoT into a directed acyclic graph (DAG) with explicit dependency edges, and design a dual pruning strategy: branch-level pruning removes weakly contributing reflection branches, while depth-level pruning eliminates late-stage re-verification. We distill this behavior via a three-stage pipeline: (1) SFT to initialize the policy on pruned concise traces, (2) DPO to prefer correct but less redundant trajectories, and (3) GRPO with a length penalty to jointly optimize answer correctness and efficiency. Experiments show that our approach reduces the average reasoning tokens by 42% while maintaining or improving accuracy.
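The dual pruning strategy described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the node representation, the hypothetical `verifies` field marking which conclusion a reflection step re-checks, and both pruning criteria (reachability to the answer node for branch-level pruning; first-verification-only for depth-level pruning) are assumptions made for illustration.

```python
# Hypothetical sketch of dual pruning on a CoT DAG.
# Nodes are reasoning steps; an edge (src, dst) means dst depends on src.
# The "verifies" field (which conclusion a reflection step re-checks) is
# an illustrative assumption, not the paper's actual representation.

def dual_prune(nodes, edges, answer):
    """nodes: dict id -> metadata dict; edges: list of (src, dst) pairs.

    Returns the ids of nodes kept after both pruning passes.
    """
    # Branch-level pruning: keep only nodes that can reach the answer
    # node through dependency edges, i.e. drop reflection branches that
    # never contribute to the final conclusion.
    parents = {}
    for src, dst in edges:
        parents.setdefault(dst, []).append(src)
    keep = set()
    stack = [answer]
    while stack:
        n = stack.pop()
        if n in keep:
            continue
        keep.add(n)
        stack.extend(parents.get(n, []))

    # Depth-level pruning: among surviving nodes, keep only the first
    # verification of each conclusion and drop later re-verifications.
    seen_targets = set()
    pruned = []
    for n in sorted(keep):  # assume ids follow generation order
        target = nodes[n].get("verifies")
        if target is not None:
            if target in seen_targets:
                continue  # repetitive reflection: already verified
            seen_targets.add(target)
        pruned.append(n)
    return pruned
```

For example, with a chain `0 -> {1, 2} -> 4` where nodes 1 and 2 both verify node 0 and node 3 is a dangling reflection branch, branch-level pruning drops node 3 and depth-level pruning drops the second verification (node 2), leaving `[0, 1, 4]`.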