Thread of Thought Unraveling Chaotic Contexts

November 15, 2023
Authors: Yucheng Zhou, Xiubo Geng, Tao Shen, Chongyang Tao, Guodong Long, Jian-Guang Lou, Jianbing Shen
cs.AI

Abstract

Large Language Models (LLMs) have ushered in a transformative era in the field of natural language processing, excelling in tasks related to text comprehension and generation. Nevertheless, they encounter difficulties when confronted with chaotic contexts (e.g., distractors rather than long irrelevant context), leading to the inadvertent omission of certain details within the chaotic context. In response to these challenges, we introduce the "Thread of Thought" (ThoT) strategy, which draws inspiration from human cognitive processes. ThoT systematically segments and analyzes extended contexts while adeptly selecting pertinent information. This strategy serves as a versatile "plug-and-play" module, seamlessly integrating with various LLMs and prompting techniques. In the experiments, we utilize the PopQA and EntityQ datasets, as well as a Multi-Turn Conversation Response dataset (MTCR) we collected, to illustrate that ThoT significantly improves reasoning performance compared to other prompting techniques.
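
To make the "plug-and-play" claim concrete, below is a minimal sketch of how a ThoT-style prompt could be wrapped around a chat-completion API. It is an illustration under stated assumptions, not the authors' released implementation: the trigger sentence paraphrases the segment-and-analyze instruction described in the abstract, and the openai client, the gpt-4o-mini model name, and the thot_answer helper are hypothetical choices made for this example.

# Minimal sketch of a ThoT-style "plug-and-play" prompting wrapper.
# Assumptions: openai>=1.0 Python client, OPENAI_API_KEY set in the
# environment; the model name is illustrative, not prescribed by the paper.
from openai import OpenAI

client = OpenAI()

# Paraphrase of the segment-and-analyze instruction described in the abstract.
THOT_TRIGGER = (
    "Walk through the context in manageable parts step by step, "
    "summarizing and analyzing as we go."
)

def thot_answer(chaotic_context: str, question: str,
                model: str = "gpt-4o-mini") -> str:
    """Two-pass ThoT-style prompting: first elicit a part-by-part analysis
    of the chaotic context, then extract a direct answer from it."""
    base = f"{chaotic_context}\n\nQ: {question}\n{THOT_TRIGGER}"

    # Pass 1: the model segments the context and analyzes each part,
    # surfacing details that a direct question would likely miss.
    analysis = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": base}],
    ).choices[0].message.content

    # Pass 2: condense the free-form analysis into a final answer.
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"{base}\n{analysis}\nTherefore, the answer is:"}],
    ).choices[0].message.content
    return answer

Because the trigger is just an extra sentence appended to the user prompt, the same wrapper composes with different backends and with other prompting techniques, which is what the "plug-and-play" framing suggests.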