LLM Pretraining with Continuous Concepts
February 12, 2025
Authors: Jihoon Tack, Jack Lanchantin, Jane Yu, Andrew Cohen, Ilia Kulikov, Janice Lan, Shibo Hao, Yuandong Tian, Jason Weston, Xian Li
cs.AI
Abstract
Next token prediction has been the standard training objective used in large
language model pretraining. Representations are learned as a result of
optimizing for token-level perplexity. We propose Continuous Concept Mixing
(CoCoMix), a novel pretraining framework that combines discrete next token
prediction with continuous concepts. Specifically, CoCoMix predicts continuous
concepts learned from a pretrained sparse autoencoder and mixes them into the
model's hidden state by interleaving with token hidden representations. Through
experiments on multiple benchmarks, including language modeling and downstream
reasoning tasks, we show that CoCoMix is more sample efficient and consistently
outperforms standard next token prediction, knowledge distillation and
inserting pause tokens. We find that combining both concept learning and
interleaving in an end-to-end framework is critical to performance gains.
Furthermore, CoCoMix enhances interpretability and steerability by allowing
direct inspection and modification of the predicted concept, offering a
transparent way to guide the model's internal reasoning process.
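The abstract describes CoCoMix as predicting continuous concepts (learned by a pretrained sparse autoencoder) and mixing them into the model's hidden state by interleaving with token hidden representations. The following is a minimal PyTorch sketch of that interleaving idea only; the module name `CoCoMixSketch`, the linear concept head and projection, and the MSE loss against placeholder SAE activations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoCoMixSketch(nn.Module):
    """Sketch: predict continuous concept activations from a token's hidden
    state and interleave a compressed "continuous concept" vector with the
    token hidden representations. Sizes and losses are assumptions."""

    def __init__(self, hidden_dim: int, num_concepts: int):
        super().__init__()
        # Predicts (pretrained-SAE) concept activations from the hidden state.
        self.concept_head = nn.Linear(hidden_dim, num_concepts)
        # Compresses predicted concepts back to the model's hidden size so the
        # result can be inserted into the sequence as a continuous concept vector.
        self.concept_proj = nn.Linear(num_concepts, hidden_dim)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim) from an intermediate layer.
        concept_logits = self.concept_head(hidden_states)         # (B, T, C)
        concept_vec = self.concept_proj(concept_logits)           # (B, T, D)

        # Interleave: token_1, concept_1, token_2, concept_2, ...
        B, T, D = hidden_states.shape
        mixed = torch.stack([hidden_states, concept_vec], dim=2)  # (B, T, 2, D)
        mixed = mixed.reshape(B, 2 * T, D)                        # (B, 2T, D)
        return concept_logits, mixed


# Shape-only usage: the SAE targets here are random placeholders standing in
# for concept activations extracted by a pretrained sparse autoencoder.
B, T, D, C = 2, 8, 64, 512
layer = CoCoMixSketch(D, C)
h = torch.randn(B, T, D)
concept_logits, mixed = layer(h)
sae_targets = torch.rand(B, T, C)
concept_loss = F.mse_loss(torch.sigmoid(concept_logits), sae_targets)
# In pretraining, a concept-prediction loss of this kind would be combined with
# the standard next-token cross-entropy computed over the `mixed` sequence.
```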