
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding

July 28, 2023
作者: Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, Yu Wang
cs.AI

Abstract

This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs), but it can also potentially improve the answer quality on several question categories in terms of diversity and relevance. SoT is an initial attempt at data-centric optimization for efficiency, and reveals the potential of pushing LLMs to think more like a human for better answer quality.
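The two-stage pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `llm_call` is a hypothetical stand-in for any chat-completion API, and the prompts are placeholders. The key idea it demonstrates is that the skeleton is produced by one sequential call, after which every skeleton point is expanded by an independent call, so the expansions can run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor


def llm_call(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; any chat-completion
    endpoint could be substituted here."""
    if "skeleton" in prompt:
        # Pretend the model returned a short, numbered skeleton.
        return "1. Define the problem\n2. Outline the method\n3. Report results"
    # Pretend the model expanded one skeleton point.
    return f"Expanded: {prompt.splitlines()[-1]}"


def skeleton_of_thought(question: str) -> str:
    # Stage 1 (sequential): ask the model for a concise skeleton of the answer.
    skeleton = llm_call(f"Write a short numbered skeleton for: {question}")
    points = [p for p in skeleton.splitlines() if p.strip()]

    # Stage 2 (parallel): expand every skeleton point with independent calls.
    # Because the points are expanded concurrently, end-to-end latency is
    # roughly the latency of the slowest single point, not the sum of all.
    with ThreadPoolExecutor(max_workers=len(points)) as pool:
        expansions = list(pool.map(
            lambda p: llm_call(f"Question: {question}\nExpand this point:\n{p}"),
            points,
        ))
    return "\n".join(expansions)


print(skeleton_of_thought("What is Skeleton-of-Thought?"))
```

With batched decoding (the paper's alternative to parallel API calls), stage 2 would instead place all point-expansion prompts in one batch and decode them simultaneously on the same model.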