

Rethinking JEPA: Compute-Efficient Video SSL with Frozen Teachers

September 29, 2025
Authors: Xianhang Li, Chen Huang, Chun-Liang Li, Eran Malach, Josh Susskind, Vimal Thilak, Etai Littwin
cs.AI

Abstract

Video Joint Embedding Predictive Architectures (V-JEPA) learn generalizable, off-the-shelf video representations by predicting masked regions in latent space with an exponential-moving-average (EMA)-updated teacher. While EMA prevents representation collapse, it complicates scalable model selection and couples the teacher and student architectures. We revisit masked-latent prediction and show that a frozen teacher suffices. Concretely, we (i) train a target encoder with a simple pixel-reconstruction objective under V-JEPA masking, then (ii) freeze it and train a student to predict the teacher's latents on masked regions. This leads to a two-stage, unregularized scheme that we refer to as SALT (Static-teacher Asymmetric Latent Training). SALT decouples optimization into pixel reconstruction (teacher) and masked latent prediction (student), increasing transparency, efficiency, and scalability while preserving the representations' ability to generalize under frozen evaluation. Empirically, our student models outperform the recently proposed V-JEPA 2 encoders under frozen-backbone evaluation across diverse benchmarks. They are also more compute-optimal: at matched pretraining FLOPs, our method achieves higher probing accuracy, and its scaling curves dominate V-JEPA's accuracy-FLOPs Pareto frontier. Finally, we find that student quality is remarkably robust to teacher quality: high-performing students emerge even with small, sub-optimal teachers. This points to a compute-budget allocation that should overwhelmingly favor the student. These results position SALT as a simple, scalable, and compute-efficient alternative to EMA-based self-distillation for video representation learning.
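To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the SALT training loops on toy token tensors. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the module names and shapes are hypothetical, the random per-token mask stands in for V-JEPA's spatiotemporal block masking, and zeroing masked tokens stands in for dropping them in a real ViT encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N_TOKENS, PATCH_DIM, EMBED_DIM = 8, 64, 192, 128  # toy sizes (hypothetical)

class ToyEncoder(nn.Module):
    """Stand-in for a ViT-style video encoder over patch tokens."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PATCH_DIM, EMBED_DIM), nn.GELU(),
            nn.Linear(EMBED_DIM, EMBED_DIM),
        )

    def forward(self, tokens):      # tokens: (B, N, PATCH_DIM)
        return self.net(tokens)     # -> (B, N, EMBED_DIM)

def random_mask(b, n, ratio=0.75):
    """Placeholder for V-JEPA's block masking: True = masked token."""
    return torch.rand(b, n) < ratio

teacher, pix_decoder = ToyEncoder(), nn.Linear(EMBED_DIM, PATCH_DIM)
student, lat_predictor = ToyEncoder(), nn.Linear(EMBED_DIM, EMBED_DIM)

# --- Stage 1: train the teacher with masked pixel reconstruction ---
opt_t = torch.optim.AdamW(
    list(teacher.parameters()) + list(pix_decoder.parameters()), lr=1e-3)
for _ in range(100):
    tokens = torch.randn(B, N_TOKENS, PATCH_DIM)      # stand-in video patches
    mask = random_mask(B, N_TOKENS)
    visible = tokens * (~mask).unsqueeze(-1).float()  # zero out masked tokens
    recon = pix_decoder(teacher(visible))
    loss_t = F.mse_loss(recon[mask], tokens[mask])    # pixel loss, masked tokens only
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

# --- Stage 2: freeze the teacher; student predicts its latents at masked tokens ---
for p in teacher.parameters():
    p.requires_grad_(False)
opt_s = torch.optim.AdamW(
    list(student.parameters()) + list(lat_predictor.parameters()), lr=1e-3)
for _ in range(100):
    tokens = torch.randn(B, N_TOKENS, PATCH_DIM)
    mask = random_mask(B, N_TOKENS)
    with torch.no_grad():
        target = teacher(tokens)                      # frozen-teacher latents on the full clip
    pred = lat_predictor(student(tokens * (~mask).unsqueeze(-1).float()))
    loss_s = F.mse_loss(pred[mask], target[mask])     # latent loss, masked tokens only
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

The structural point the sketch illustrates is that the only coupling between the two stages is a frozen forward pass of the teacher in stage 2, so the student's architecture and compute budget can be scaled independently of the teacher's, which is what the abstract's accuracy-FLOPs comparison exploits.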