Grounding World Simulation Models in a Real-World Metropolis
March 16, 2026
作者: Junyoung Seo, Hyunwook Choi, Minkyung Kwon, Jinhyeok Choi, Siyoon Jin, Gayoung Lee, Junho Kim, JoungBin Lee, Geonmo Gu, Dongyoon Han, Sangdoo Yun, Seungryong Kim, Jin-Hwa Kim
cs.AI
Abstract
What if a world simulation model could render not an imagined environment but a city that actually exists? Prior generative world models synthesize visually plausible yet artificial environments by imagining all content. We present Seoul World Model (SWM), a city-scale world model grounded in the real city of Seoul. SWM anchors autoregressive video generation through retrieval-augmented conditioning on nearby street-view images. However, this design introduces several challenges, including temporal misalignment between retrieved references and the dynamic target scene, limited trajectory diversity, and data sparsity arising from vehicle-mounted captures at sparse intervals. We address these challenges through cross-temporal pairing, a large-scale synthetic dataset enabling diverse camera trajectories, and a view interpolation pipeline that synthesizes coherent training videos from sparse street-view images. We further introduce a Virtual Lookahead Sink to stabilize long-horizon generation by continuously re-grounding each chunk to a retrieved image at a future location. We evaluate SWM against recent video world models across three cities: Seoul, Busan, and Ann Arbor. SWM outperforms existing methods in generating spatially faithful, temporally consistent, long-horizon videos grounded in actual urban environments over trajectories reaching hundreds of meters, while supporting diverse camera movements and text-prompted scenario variations.
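The abstract's core loop, retrieval-augmented autoregressive generation with lookahead re-grounding, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`retrieve_nearest`, `generate_rollout`), the chunk/lookahead parameters, and the placeholder "chunk" records standing in for the actual video model are all assumptions for exposition.

```python
import math

# Hypothetical sketch of SWM-style generation: each chunk is conditioned on
# the previous chunk and on a street-view image retrieved near a *future*
# trajectory point (the lookahead anchor), which continually re-grounds
# long-horizon generation in the real city.

def retrieve_nearest(position, streetview_db):
    """Return the street-view entry whose capture location is closest
    to `position` (Euclidean distance)."""
    return min(streetview_db, key=lambda entry: math.dist(entry["loc"], position))

def generate_rollout(trajectory, streetview_db, chunk_len=4, lookahead=2):
    """Autoregressively produce chunk records along `trajectory`.

    `lookahead` controls how far beyond the current chunk's end the
    anchor image is retrieved; a real system would feed `anchor` and
    `prev` into a video model instead of recording them.
    """
    chunks = []
    prev_chunk = None  # index of the previously generated chunk, if any
    for start in range(0, len(trajectory), chunk_len):
        segment = trajectory[start:start + chunk_len]
        # Anchor retrieval at a position `lookahead` steps past the segment end,
        # clamped to the final trajectory point.
        anchor_idx = min(start + chunk_len - 1 + lookahead, len(trajectory) - 1)
        anchor = retrieve_nearest(trajectory[anchor_idx], streetview_db)
        chunks.append({"segment": segment, "anchor": anchor["id"], "prev": prev_chunk})
        prev_chunk = len(chunks) - 1
    return chunks
```

On an 8-point straight-line trajectory with the defaults above, this yields two chunks, the second anchored to imagery near the trajectory's end, mirroring how re-grounding at future locations keeps generation spatially faithful over long rollouts.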