
Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality

May 23, 2025
Authors: Zhenglun Kong, Yize Li, Fanhu Zeng, Lei Xin, Shvat Messica, Xue Lin, Pu Zhao, Manolis Kellis, Hao Tang, Marinka Zitnik
cs.AI

Abstract

In Transformer architectures, tokens, the discrete units derived from raw data, are formed by segmenting inputs into fixed-length chunks. Each token is then mapped to an embedding, enabling parallel attention computation while preserving the input's essential information. Because Transformer self-attention has quadratic computational complexity in sequence length, token reduction has primarily been used as an efficiency strategy, especially in unimodal vision and language domains, where it helps balance computational cost, memory usage, and inference latency. Despite these advances, this paper argues that token reduction should transcend its traditional efficiency-oriented role in the era of large generative models. Instead, we position it as a fundamental principle of generative modeling, one that critically influences both model architecture and broader applications. Specifically, we contend that across vision, language, and multimodal systems, token reduction can (i) facilitate deeper multimodal integration and alignment, (ii) mitigate "overthinking" and hallucinations, (iii) maintain coherence over long inputs, and (iv) enhance training stability, among other benefits. Reframing token reduction as more than an efficiency measure, we outline promising future directions, including algorithm design, reinforcement-learning-guided token reduction, token optimization for in-context learning, and extensions to broader machine-learning and scientific domains. We highlight its potential to drive new model architectures and learning strategies that improve robustness, increase interpretability, and better align with the objectives of generative modeling.
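
As a concrete illustration of the efficiency role described above, the following is a minimal sketch of attention-score-based token pruning. It is not the paper's method: the names (`reduce_tokens`, `attention_cost`, `keep_ratio`) and the scoring heuristic are assumptions chosen for illustration. Each token is scored by the attention it receives from the others under a plain softmax similarity map, the lowest-scoring tokens are dropped, and the per-layer attention cost falls with the square of the kept fraction.

```python
import numpy as np

def attention_cost(num_tokens: int, dim: int) -> int:
    """Rough multiply-add count for one self-attention layer:
    both the QK^T product and the (softmax)V product scale as n^2 * d,
    which is the quadratic term token reduction targets."""
    return 2 * num_tokens ** 2 * dim

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reduce_tokens(x: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Drop the least-attended tokens (illustrative heuristic, not the paper's method).

    x: (num_tokens, dim) token embeddings.
    Each token is scored by the total attention it receives from all tokens
    under softmax(x x^T / sqrt(d)); the top keep_ratio fraction is retained
    in the original token order.
    """
    n, d = x.shape
    k = max(1, int(round(n * keep_ratio)))
    attn = softmax(x @ x.T / np.sqrt(d), axis=-1)   # (n, n) attention map
    received = attn.sum(axis=0)                     # attention each token receives
    keep = np.sort(np.argsort(received)[-k:])       # indices of the k most-attended tokens
    return x[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((196, 768))        # e.g., a 14x14 ViT patch grid
    reduced = reduce_tokens(tokens, keep_ratio=0.5)
    print(tokens.shape, "->", reduced.shape)        # (196, 768) -> (98, 768)
    print("attention-cost ratio:",
          attention_cost(len(reduced), 768) / attention_cost(len(tokens), 768))
```

With keep_ratio=0.5 the sketch retains 98 of 196 tokens, and the rough per-layer attention-cost ratio comes out to about 0.25: precisely the kind of efficiency gain the paper argues token reduction should move beyond.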
