Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
June 6, 2024
作者: Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E. Gonzalez, Bin Cui
cs.AI
Abstract
We introduce Buffer of Thoughts (BoT), a novel and versatile
thought-augmented reasoning approach for enhancing accuracy, efficiency and
robustness of large language models (LLMs). Specifically, we propose
meta-buffer to store a series of informative high-level thoughts, namely
thought-templates, distilled from the problem-solving processes across various
tasks. Then for each problem, we retrieve a relevant thought-template and
adaptively instantiate it with specific reasoning structures to conduct
efficient reasoning. To guarantee scalability and stability, we further
propose buffer-manager to dynamically update the meta-buffer, thus enhancing
the capacity of meta-buffer as more tasks are solved. We conduct extensive
experiments on 10 challenging reasoning-intensive tasks, and achieve
significant performance improvements over previous SOTA methods: 11% on Game of
24, 20% on Geometric Shapes and 51% on Checkmate-in-One. Further analysis
demonstrates the superior generalization ability and model robustness of our
BoT, while requiring only 12% of the cost of multi-query prompting methods
(e.g., tree/graph of thoughts) on average. Notably, we find that our
Llama3-8B+BoT has the potential to surpass the Llama3-70B model. Our project is
available at: https://github.com/YangLing0818/buffer-of-thought-llm
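The retrieve–instantiate–update loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are hypothetical, and a toy keyword-overlap score stands in for whatever similarity measure BoT uses for template retrieval.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ThoughtTemplate:
    # A distilled, high-level problem-solving pattern (hypothetical structure).
    name: str
    keywords: set[str]       # cues used by the toy retrieval below
    skeleton: str            # reasoning structure with a {problem} placeholder

@dataclass
class MetaBuffer:
    templates: list[ThoughtTemplate] = field(default_factory=list)

    def retrieve(self, problem: str) -> ThoughtTemplate | None:
        # Toy retrieval: pick the template sharing the most keywords with
        # the problem statement; returns None if nothing matches.
        words = set(problem.lower().split())
        score, best = max(
            ((len(t.keywords & words), t) for t in self.templates),
            key=lambda pair: pair[0],
            default=(0, None),
        )
        return best if score > 0 else None

    def update(self, template: ThoughtTemplate) -> None:
        # Buffer-manager role: add newly distilled templates so the
        # buffer grows as more tasks are solved (deduplicated by name).
        if all(t.name != template.name for t in self.templates):
            self.templates.append(template)

def instantiate(template: ThoughtTemplate, problem: str) -> str:
    # Adaptively fill the reasoning skeleton with the concrete problem.
    return template.skeleton.format(problem=problem)

buffer = MetaBuffer()
buffer.update(ThoughtTemplate(
    name="arithmetic-search",
    keywords={"numbers", "24", "arithmetic"},
    skeleton="Enumerate operator and ordering combinations over: {problem}",
))
t = buffer.retrieve("use the numbers 4 6 7 9 to reach 24")
if t is not None:
    print(instantiate(t, "4 6 7 9 -> 24"))
```

In this sketch, solving a new task would end with distilling a fresh `ThoughtTemplate` and passing it to `update`, which is how the meta-buffer's capacity grows over time.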