TokenTrim: Inference-Time Token Pruning for Autoregressive Long Video Generation
January 30, 2026
Authors: Ariel Shaulov, Eitan Shaar, Amit Edenzon, Lior Wolf
cs.AI
Abstract
Autoregressive video generation enables long video synthesis by iteratively conditioning each new batch of frames on previously generated content. However, recent work has shown that such pipelines suffer from severe temporal drift, where errors accumulate and amplify over long horizons. We hypothesize that this drift does not primarily stem from insufficient model capacity, but rather from inference-time error propagation. Specifically, we contend that drift arises from the uncontrolled reuse of corrupted latent conditioning tokens during autoregressive inference. To correct this accumulation of errors, we propose a simple inference-time method that mitigates temporal drift by identifying and removing unstable latent tokens before they are reused for conditioning. For this purpose, we define unstable tokens as latent tokens whose representations deviate significantly from those of the previously generated batch, indicating potential corruption or semantic drift. By explicitly removing corrupted latent tokens from the autoregressive context, rather than modifying entire spatial regions or model parameters, our method prevents unreliable latent information from influencing future generation steps. As a result, it significantly improves long-horizon temporal consistency without modifying the model architecture or training procedure, and without leaving latent space.
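The core step described above, scoring each latent token by its deviation from the previous batch and dropping outliers before the context is reused, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `prune_unstable_tokens`, the cosine-distance deviation measure, the z-score threshold `z_thresh`, and the `min_keep_ratio` floor are all assumptions introduced for illustration, since the abstract only states that unstable tokens "deviate significantly" from the previous batch.

```python
import torch
import torch.nn.functional as F


def prune_unstable_tokens(curr_latents, prev_latents, z_thresh=2.5, min_keep_ratio=0.5):
    """Drop latent tokens that deviate sharply from the previous batch.

    curr_latents: (N, D) latent tokens from the newest generated batch.
    prev_latents: (N, D) positionally aligned tokens from the previous batch.
    Returns (kept_tokens, keep_mask) so the caller can rebuild its conditioning context.
    """
    # Per-token deviation from the previous batch. Cosine distance is an
    # assumption; the abstract does not specify the distance function.
    deviation = 1.0 - F.cosine_similarity(curr_latents, prev_latents, dim=-1)

    # Flag outliers relative to the batch statistics (z-score thresholding
    # is likewise an illustrative choice, not the paper's stated criterion).
    z = (deviation - deviation.mean()) / (deviation.std() + 1e-6)
    keep_mask = z < z_thresh

    # Safety floor: never prune so aggressively that the conditioning
    # context collapses; fall back to keeping the most stable tokens.
    if keep_mask.float().mean() < min_keep_ratio:
        n_keep = int(min_keep_ratio * curr_latents.shape[0])
        keep_idx = torch.argsort(deviation)[:n_keep]
        keep_mask = torch.zeros_like(keep_mask)
        keep_mask[keep_idx] = True

    return curr_latents[keep_mask], keep_mask
```

In such a scheme, the tokens masked out would simply be omitted from the autoregressive context at the next generation step, leaving the model weights and sampling procedure untouched, consistent with the inference-time, in-latent-space framing of the abstract.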