Panoramic Affordance Prediction
March 16, 2026
Authors: Zixin Zhang, Chenfei Liao, Hongfei Zhang, Harold Haodong Chen, Kanghao Chen, Zichen Wen, Litao Guo, Bin Ren, Xu Zheng, Yinchuan Li, Xuming Hu, Nicu Sebe, Ying-Cong Chen
cs.AI
Abstract
Affordance prediction serves as a critical bridge between perception and action in embodied AI. However, existing research is confined to pinhole camera models, which suffer from narrow Fields of View (FoV) and fragmented observations, often missing critical holistic environmental context. In this paper, we present the first exploration of Panoramic Affordance Prediction, utilizing 360-degree imagery to capture global spatial relationships and holistic scene understanding. To facilitate this novel task, we first introduce PAP-12K, a large-scale benchmark dataset containing over 1,000 ultra-high-resolution (12K, 11904 × 5952) panoramic images with over 12,000 carefully annotated QA pairs and affordance masks. Furthermore, we propose PAP, a training-free, coarse-to-fine pipeline inspired by the human foveal visual system to tackle the ultra-high resolution and severe distortion inherent in panoramic images. PAP employs recursive visual routing via grid prompting to progressively locate targets, applies an adaptive gaze mechanism to rectify local geometric distortions, and utilizes a cascaded grounding pipeline to extract precise instance-level masks. Experimental results on PAP-12K reveal that existing affordance prediction methods designed for standard perspective images suffer severe performance degradation, or fail outright, due to the unique challenges of panoramic vision. In contrast, the PAP framework effectively overcomes these obstacles, significantly outperforming state-of-the-art baselines and highlighting the immense potential of panoramic perception for robust embodied intelligence.
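The "recursive visual routing via grid prompting" step can be pictured as a coarse-to-fine zoom: overlay a grid on the current crop, ask a model which cell contains the target, and recurse into that cell until the region is small enough for fine-grained grounding. The following is a minimal sketch under assumed details (a 3×3 grid, a pixel-size stopping criterion, and a toy oracle standing in for the vision-language model); it is not the paper's actual implementation.

```python
# Hypothetical sketch of coarse-to-fine recursive grid routing.
# `pick_cell` stands in for a VLM that reads a grid-overlaid crop and
# names the cell containing the target; here it is a toy oracle.

def recursive_grid_route(region, pick_cell, grid=3, min_size=512):
    """Recursively zoom into the grid cell chosen by `pick_cell`.

    region: (x, y, w, h) in pixels on the full panorama.
    Returns the final small region to hand to fine-grained grounding.
    """
    x, y, w, h = region
    # Stop once the region is small enough for instance-level grounding.
    if max(w, h) <= min_size:
        return region
    row, col = pick_cell(region)
    cell_w, cell_h = w / grid, h / grid
    sub = (x + col * cell_w, y + row * cell_h, cell_w, cell_h)
    return recursive_grid_route(sub, pick_cell, grid, min_size)


def make_oracle(tx, ty, grid=3):
    """Toy oracle that always routes toward a known target point (tx, ty),
    emulating the VLM's cell choice."""
    def pick(region):
        x, y, w, h = region
        col = min(int((tx - x) / (w / grid)), grid - 1)
        row = min(int((ty - y) / (h / grid)), grid - 1)
        return row, col
    return pick


# Usage: locate a target on a 11904 x 5952 panorama in a handful of steps.
final = recursive_grid_route((0, 0, 11904, 5952), make_oracle(9000, 1200))
```

Each recursion shrinks the region by the grid factor, so only O(log) model queries are needed to reach fovea-sized crops on an ultra-high-resolution panorama.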