Training LLMs over Neurally Compressed Text
April 4, 2024
Authors: Brian Lester, Jaehoon Lee, Alex Alemi, Jeffrey Pennington, Adam Roberts, Jascha Sohl-Dickstein, Noah Constant
cs.AI
Abstract
In this paper, we explore the idea of training large language models (LLMs)
over highly compressed text. While standard subword tokenizers compress text by
a small factor, neural text compressors can achieve much higher rates of
compression. If it were possible to train LLMs directly over neurally
compressed text, this would confer advantages in training and serving
efficiency, as well as easier handling of long text spans. The main obstacle to
this goal is that strong compression tends to produce opaque outputs that are
not well-suited for learning. In particular, we find that text naïvely
compressed via Arithmetic Coding is not readily learnable by LLMs. To overcome
this, we propose Equal-Info Windows, a novel compression technique whereby text
is segmented into blocks that each compress to the same bit length. Using this
method, we demonstrate effective learning over neurally compressed text that
improves with scale, and outperforms byte-level baselines by a wide margin on
perplexity and inference speed benchmarks. While our method delivers worse
perplexity than subword tokenizers for models trained with the same parameter
count, it has the benefit of shorter sequence lengths. Shorter sequence lengths
require fewer autoregressive generation steps, and reduce latency. Finally, we
provide extensive analysis of the properties that contribute to learnability,
and offer concrete suggestions for how to further improve the performance of
high-compression tokenizers.
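
To make the Equal-Info Windows segmentation idea concrete, the sketch below shows one possible greedy splitting loop in Python. It is only an illustration under stated assumptions: the `compressed_bits` helper is hypothetical, standing in for an arithmetic coder (for example, one driven by a small language model) that reports the coded length of a span, and the paper's actual pipeline emits coder bitstreams as model tokens rather than raw character spans.

```python
from typing import Callable, List


def equal_info_windows(
    text: str,
    bit_budget: int,
    compressed_bits: Callable[[str], int],
) -> List[str]:
    """Greedily split `text` into windows that each compress to at most
    (and roughly equal to) `bit_budget` bits under the given compressor.

    `compressed_bits` is a hypothetical stand-in for an arithmetic coder;
    the real method resets the coder at each window boundary so that every
    window carries the same amount of compressed information.
    """
    windows: List[str] = []
    start = 0
    while start < len(text):
        end = start + 1
        # Extend the window one character at a time until adding another
        # character would push the coded length past the per-window budget.
        while end < len(text) and compressed_bits(text[start:end + 1]) <= bit_budget:
            end += 1
        windows.append(text[start:end])
        start = end
    return windows


# Toy usage with a trivial "compressor" (8 bits per UTF-8 byte, no real
# compression) just to exercise the splitting logic.
if __name__ == "__main__":
    toy_bits = lambda s: 8 * len(s.encode("utf-8"))
    print(equal_info_windows("training over compressed text", bit_budget=64,
                             compressed_bits=toy_bits))
```

Because every window compresses to the same bit length, the compressed stream can be chunked into fixed-size tokens whose boundaries the downstream LLM can learn, which is the property the abstract identifies as key to learnability.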