PixelWorld: Towards Perceiving Everything as Pixels
January 31, 2025
Authors: Zhiheng Lyu, Xueguang Ma, Wenhu Chen
cs.AI
Abstract
Existing foundation models typically process visual input as pixels and
textual input as tokens, a paradigm that contrasts with human perception, where
both modalities are processed in a unified manner. With the rise of embodied
and agentic AI, where inputs primarily come from camera pixels, the need for a
unified perception framework becomes increasingly evident. In this paper, we
propose to unify all modalities (text, tables, code, diagrams, images, etc.) as
pixel inputs, i.e., "Perceive Everything as Pixels" (PEAP). We introduce
PixelWorld, a novel evaluation suite that unifies all the mentioned modalities
into pixel space to gauge existing models' performance. Our findings show
that: (1) PEAP outperforms token-based baselines on multimodal datasets,
benefiting from unified input for better disambiguation; (2) all models show
significant declines in reasoning and coding capabilities when processing
pixel-based input, underscoring the need to enhance foundation models'
perceptual abilities; (3) larger models maintain strong performance on
non-reasoning tasks under PEAP, while smaller models like Phi-3.5-V suffer
significant performance degradation; (4) the attention patterns of PEAP are
highly aligned with those of text-token input; (5) PEAP can be accelerated
significantly by exploiting spatial sparsity. We conclude that existing
frontier models are competent at pixel perception, but there is still headroom
for improvement. Our code and dataset will be released upon acceptance.
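To make the PEAP idea concrete, the sketch below rasterizes text onto an image (the kind of pixel input the abstract describes) and then drops near-blank patches, illustrating how spatial sparsity could be exploited for speedup. This is a minimal illustration, not the paper's implementation; the function names, canvas size, patch size, and brightness threshold are all assumptions for demonstration.

```python
# Illustrative PEAP-style input preparation (names and parameters are
# hypothetical, not from the PixelWorld paper): render text as pixels,
# then keep only patches that contain ink, exploiting spatial sparsity.
from PIL import Image, ImageDraw
import numpy as np

def render_text_as_pixels(text, width=448, height=448):
    """Rasterize text onto a white canvas, producing a pixel input."""
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    draw.multiline_text((8, 8), text, fill="black")  # default bitmap font
    return img

def nonblank_patches(img, patch=14, thresh=0.99):
    """Tile the image into patch x patch squares and return the grid
    indices of tiles that are not almost entirely white."""
    arr = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    h, w = arr.shape
    keep = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if arr[i:i + patch, j:j + patch].mean() < thresh:
                keep.append((i // patch, j // patch))
    return keep

img = render_text_as_pixels("def add(a, b):\n    return a + b")
kept = nonblank_patches(img)
total = (448 // 14) ** 2
print(f"kept {len(kept)} of {total} patches")
```

Because most of the canvas stays white, only a small fraction of patches survives the filter, which is the intuition behind the abstract's claim that PEAP can be accelerated significantly via spatial sparsity.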