Geometry-Aware Rotary Position Embedding for Consistent Video World Model
February 8, 2026
Authors: Chendong Xiang, Jiajun Liu, Jintao Zhang, Xiao Yang, Zhengwei Fang, Shizun Wang, Zijun Wang, Yingtian Zou, Hang Su, Jun Zhu
cs.AI
Abstract
Predictive world models that simulate future observations under explicit camera control are fundamental to interactive AI. Despite rapid advances, current systems lack spatial persistence: they fail to maintain stable scene structures over long trajectories, frequently hallucinating details when cameras revisit previously observed locations. We identify that this geometric drift stems from reliance on screen-space positional embeddings, which conflict with the projective geometry required for 3D consistency. We introduce ViewRope, a geometry-aware encoding that injects camera-ray directions directly into video transformer self-attention layers. By parameterizing attention with relative ray geometry rather than pixel locality, ViewRope provides a model-native inductive bias for retrieving 3D-consistent content across temporal gaps. We further propose Geometry-Aware Frame-Sparse Attention, which exploits these geometric cues to selectively attend to relevant historical frames, improving efficiency without sacrificing memory consistency. We also present ViewBench, a diagnostic suite measuring loop-closure fidelity and geometric drift. Our results demonstrate that ViewRope substantially improves long-term consistency while reducing computational costs.
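To make the core mechanism concrete, below is a minimal sketch of a ray-direction rotary embedding, assuming a pinhole camera model and a standard RoPE-style channel-pair rotation. The function names (`pixel_rays`, `ray_rope`) and the per-axis, per-frequency angle mapping are illustrative assumptions, not the paper's exact formulation. Because composed rotations cancel in the query-key dot product, attention scores depend only on differences between ray-derived angles, which is one way to realize "relative ray geometry rather than pixel locality."

```python
# Illustrative sketch only: a RoPE-style rotation driven by camera-ray
# directions instead of screen-space pixel indices. Not the paper's code.
import torch

def pixel_rays(K_inv, c2w, H, W):
    """Unit ray directions in world coordinates for every pixel.

    K_inv: (3, 3) inverse camera intrinsics; c2w: (4, 4) camera-to-world pose.
    Returns (H*W, 3) unit vectors, one per pixel (pinhole model assumed).
    """
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs + 0.5, ys + 0.5, torch.ones_like(xs)], dim=-1)  # (H, W, 3)
    cam_dirs = pix.reshape(-1, 3) @ K_inv.T          # back-project to camera frame
    world_dirs = cam_dirs @ c2w[:3, :3].T            # rotate into world frame
    return world_dirs / world_dirs.norm(dim=-1, keepdim=True)

def ray_rope(x, rays, base=10000.0):
    """Apply a RoPE-style rotation whose angles come from ray directions.

    x: (N, D) queries or keys, D divisible by 6 (2 channels per frequency
    per ray axis); rays: (N, 3) unit ray directions for the same tokens.
    """
    N, D = x.shape
    n_freq = D // 6
    freqs = base ** (-torch.arange(n_freq, dtype=torch.float32) / n_freq)
    # One angle per (ray axis, frequency); (N, 3 * n_freq) == (N, D // 2).
    angles = (rays.unsqueeze(-1) * freqs).reshape(N, -1)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Applied to both queries and keys before attention, this substitutes ray-derived angles for the temporal or spatial indices a video transformer would normally rotate by; tokens that view the same direction from nearby poses receive similar encodings regardless of when they appear in the sequence.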
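The frame-sparse attention can be read similarly: historical frames are ranked by a geometric relevance cue and only the top candidates are attended to. The sketch below scores frames by mean ray alignment with the current view; this cosine-overlap proxy and the top-k selection are assumptions for illustration, since the abstract does not specify the exact criterion.

```python
# Illustrative sketch only: geometry-based selection of historical frames.
import torch

def select_frames(curr_rays, hist_rays, k=4):
    """Pick the k historical frames whose rays best align with the current view.

    curr_rays: (P, 3) unit rays of the current frame;
    hist_rays: (T, P, 3) unit rays of T historical frames.
    Returns indices of the k selected frames.
    """
    # Mean cosine similarity between each historical frame's rays and the
    # current frame's rays, used as a cheap proxy for view overlap.
    sim = torch.einsum("tpc,pc->tp", hist_rays, curr_rays).mean(dim=-1)  # (T,)
    return sim.topk(min(k, sim.numel())).indices
```

Restricting attention to the selected frames keeps the memory cost roughly linear in k rather than in the full history length, which is consistent with the claimed efficiency gain without discarding geometrically relevant context.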