ThinkJEPA: Empowering Latent World Models with Large Vision-Language Reasoning Model
March 23, 2026
Authors: Haichao Zhang, Yijiang Li, Shwai He, Tushar Nagarajan, Mingfei Chen, Jianglin Lu, Ang Li, Yun Fu
cs.AI
Abstract
Recent progress in latent world models (e.g., V-JEPA2) has shown promising capability in forecasting future world states from video observations. Nevertheless, dense prediction from a short observation window limits temporal context and can bias predictors toward local, low-level extrapolation, making it difficult to capture long-horizon semantics and reducing downstream utility. Vision-language models (VLMs), in contrast, provide strong semantic grounding and general knowledge by reasoning over uniformly sampled frames, but they are not ideal as standalone dense predictors due to compute-driven sparse sampling, a language-output bottleneck that compresses fine-grained interaction states into text-oriented representations, and a data-regime mismatch when adapting to small action-conditioned datasets. We propose a VLM-guided JEPA-style latent world modeling framework that combines dense-frame dynamics modeling with long-horizon semantic guidance via a dual-temporal pathway: a dense JEPA branch for fine-grained motion and interaction cues, and a uniformly sampled VLM thinker branch with a larger temporal stride for knowledge-rich guidance. To transfer the VLM's progressive reasoning signals effectively, we introduce a hierarchical pyramid representation extraction module that aggregates multi-layer VLM representations into guidance features compatible with latent prediction. Experiments on hand-manipulation trajectory prediction show that our method outperforms both a strong VLM-only baseline and a JEPA-predictor baseline, and yields more robust long-horizon rollout behavior.
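The dual-temporal pathway and the pyramid aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, window sizes, and the equal-width grouping of VLM layers into pyramid levels are all assumptions for the sake of the example.

```python
def dense_window(num_frames, window):
    """Dense JEPA branch: indices of the last `window` consecutive frames,
    preserving fine-grained motion and interaction cues."""
    start = max(0, num_frames - window)
    return list(range(start, num_frames))

def uniform_stride(num_frames, num_samples):
    """VLM thinker branch: uniformly sampled indices with a larger temporal
    stride, covering the whole clip for long-horizon context."""
    if num_samples >= num_frames:
        return list(range(num_frames))
    stride = num_frames / num_samples
    return [int(i * stride) for i in range(num_samples)]

def pyramid_aggregate(layer_feats, num_levels):
    """Toy stand-in for hierarchical pyramid extraction: split the VLM's
    per-layer features into `num_levels` contiguous groups, mean-pool each
    group, and concatenate the pooled vectors into one guidance feature."""
    group = len(layer_feats) // num_levels
    guidance = []
    for lvl in range(num_levels):
        chunk = layer_feats[lvl * group:(lvl + 1) * group]
        dim = len(chunk[0])
        guidance.extend(sum(v[d] for v in chunk) / len(chunk) for d in range(dim))
    return guidance

# Example: a 64-frame clip with 8 frames per branch.
dense = dense_window(64, 8)       # frames 56..63
sparse = uniform_stride(64, 8)    # frames 0, 8, 16, ..., 56
```

Note how the two branches read the same clip at different granularities: the dense branch sees only the recent past at full frame rate, while the strided branch trades temporal resolution for coverage of the full horizon.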