OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models
November 18, 2025
Authors: Keda Tao, Kele Shao, Bohan Yu, Weiqiang Wang, Jian Liu, Huan Wang
cs.AI
Abstract
Omnimodal large language models (OmniLLMs) have recently attracted increasing research attention for unified audio-video understanding; however, processing audio-video token sequences creates a significant computational bottleneck. Existing token compression methods have yet to accommodate this emerging need to jointly compress multimodal tokens. To bridge this gap, we present OmniZip, a training-free, audio-guided audio-visual token-compression framework that optimizes multimodal token representations and accelerates inference. Specifically, OmniZip first identifies salient audio tokens, then computes an audio retention score for each time group to capture information density, thereby dynamically guiding video token pruning while preserving cues from audio anchors enhanced by cross-modal similarity. For each time window, OmniZip compresses the video tokens using an interleaved spatio-temporal scheme. Extensive empirical results demonstrate the merits of OmniZip: it achieves a 3.42x inference speedup and 1.4x memory reduction over other top-performing counterparts while maintaining performance with no training.
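The pipeline the abstract describes (salient-audio scoring, per-time-group retention scores, and similarity-guided video-token pruning) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the normalization used for retention scores, and the cosine-similarity ranking against a single audio anchor per group are all assumptions filled in for clarity.

```python
import numpy as np

def audio_retention_scores(audio_saliency):
    # Normalize per-time-group audio saliency into retention scores
    # that sum to 1 (hypothetical formulation; the paper's exact
    # score is not specified in the abstract).
    s = np.asarray(audio_saliency, dtype=float)
    return s / s.sum()

def prune_video_tokens(video_tokens, audio_anchors, retention, keep_ratio=0.3):
    """Audio-guided video token pruning (illustrative sketch).

    video_tokens:  list of (N_t, D) arrays, one per time group
    audio_anchors: list of (D,) salient-audio embeddings, one per group
    retention:     per-group retention scores summing to 1
    keep_ratio:    global fraction of video tokens to keep
    """
    total = sum(v.shape[0] for v in video_tokens)
    budget = int(total * keep_ratio)
    kept = []
    for v, a, r in zip(video_tokens, audio_anchors, retention):
        # Allocate this group's share of the global budget by its
        # retention score, keeping at least one token per group.
        k = min(max(1, int(round(budget * r))), v.shape[0])
        # Rank video tokens by cosine similarity to the audio anchor
        # and keep the top-k, preserving their original temporal order.
        sim = v @ a / (np.linalg.norm(v, axis=1) * np.linalg.norm(a) + 1e-8)
        idx = np.sort(np.argsort(sim)[-k:])
        kept.append(v[idx])
    return kept
```

For example, with three time groups of 10 tokens each and retention scores (0.25, 0.5, 0.5), a `keep_ratio` of 0.4 keeps roughly 3, 6, and 3 tokens per group, so groups with denser audio information retain more of their video tokens.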