
OmniGAIA: Towards Native Omni-Modal AI Agents

February 26, 2026
作者: Xiaoxi Li, Wenxiang Jiao, Jiarui Jin, Shijian Wang, Guanting Dong, Jiajie Jin, Hao Wang, Yinuo Wang, Ji-Rong Wen, Yuan Lu, Zhicheng Dou
cs.AI

Abstract

Human intelligence naturally intertwines omni-modal perception (spanning vision, audio, and language) with complex reasoning and tool usage to interact with the world. However, current multi-modal LLMs are primarily confined to bi-modal interactions (e.g., vision-language), lacking the unified cognitive capabilities required for general AI assistants. To bridge this gap, we introduce OmniGAIA, a comprehensive benchmark designed to evaluate omni-modal agents on tasks necessitating deep reasoning and multi-turn tool execution across video, audio, and image modalities. Constructed via a novel omni-modal event graph approach, OmniGAIA synthesizes complex, multi-hop queries derived from real-world data that require cross-modal reasoning and external tool integration. Furthermore, we propose OmniAtlas, a native omni-modal foundation agent with active omni-modal perception under a tool-integrated reasoning paradigm. Trained on trajectories synthesized via a hindsight-guided tree exploration strategy, and refined with OmniDPO for fine-grained error correction, OmniAtlas effectively enhances the tool-use capabilities of existing open-source models. This work marks a step towards next-generation native omni-modal AI assistants for real-world scenarios.
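The event-graph construction described above can be pictured as follows: events extracted from different modalities become graph nodes, events sharing an entity or timestamp are linked, and a multi-hop query corresponds to a path that crosses modality boundaries. This is a minimal illustrative sketch of that idea, not the paper's actual implementation; all class names, fields, and the sampling heuristic are assumptions.

```python
from dataclasses import dataclass, field
import random

@dataclass(frozen=True)
class Event:
    modality: str      # "video", "audio", or "image"
    description: str   # short natural-language summary of the event

@dataclass
class EventGraph:
    edges: dict = field(default_factory=dict)  # Event -> list of linked Events

    def link(self, a: Event, b: Event) -> None:
        # Undirected edge: the two events share an entity or a timestamp.
        self.edges.setdefault(a, []).append(b)
        self.edges.setdefault(b, []).append(a)

    def sample_multi_hop_path(self, start: Event, hops: int, rng: random.Random):
        # Random walk that prefers neighbors in a *different* modality,
        # so a question built over the path forces cross-modal reasoning.
        path = [start]
        for _ in range(hops):
            neighbors = [e for e in self.edges.get(path[-1], []) if e not in path]
            cross = [e for e in neighbors if e.modality != path[-1].modality]
            pool = cross or neighbors
            if not pool:
                break
            path.append(rng.choice(pool))
        return path

# Toy graph: three events about the same dessert, one per modality.
g = EventGraph()
clip = Event("video", "a chef plates a dessert")
sound = Event("audio", "the narrator names the dessert")
photo = Event("image", "a menu lists the dessert's price")
g.link(clip, sound)
g.link(sound, photo)

path = g.sample_multi_hop_path(clip, hops=2, rng=random.Random(0))
print([e.modality for e in path])  # a cross-modal chain: video -> audio -> image
```

A query writer would then verbalize the path (e.g., "What price does the menu list for the dessert the chef is plating?"), so answering requires grounding in every modality along the chain.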