UniReason 1.0: A Unified Reasoning Framework for World Knowledge Aligned Image Generation and Editing
February 2, 2026
Authors: Dianyi Wang, Chaofan Ma, Feng Han, Size Wu, Wei Song, Yibin Wang, Zhixiong Zhang, Tianhang Wang, Siyuan Wang, Zhongyu Wei, Jiaqi Wang
cs.AI
Abstract
Unified multimodal models often struggle with complex synthesis tasks that demand deep reasoning, typically treating text-to-image generation and image editing as isolated capabilities rather than as interconnected reasoning steps. To address this, we propose UniReason, a unified framework that harmonizes the two tasks through a dual reasoning paradigm. We formulate generation as world knowledge-enhanced planning that injects implicit constraints, and we leverage editing capabilities for fine-grained visual refinement, further correcting visual errors via self-reflection. This approach unifies generation and editing within a shared representation space, mirroring the human cognitive process of planning followed by refinement. To support the framework, we systematically construct a large-scale reasoning-centric dataset (~300k samples) covering five major knowledge domains (e.g., cultural commonsense and physics) for planning, alongside an agent-generated corpus for visual self-correction. Extensive experiments demonstrate that UniReason achieves advanced performance on reasoning-intensive benchmarks such as WISE, KrisBench, and UniREditBench, while maintaining strong general synthesis capabilities.
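The plan-then-refine paradigm described above can be sketched as a simple control loop. This is a minimal illustrative sketch, not the authors' implementation: every function here (`plan`, `generate`, `reflect`, `edit`, `unireason_loop`) is a hypothetical stub standing in for the corresponding model components, and the dict-based "image" is a placeholder for real image state.

```python
# Hypothetical sketch of UniReason's dual reasoning paradigm:
# world-knowledge-enhanced planning, generation, then iterative
# self-reflection and editing. All names and data structures here
# are illustrative stubs, not the paper's actual API.

def plan(prompt: str) -> str:
    # Planning stage: enrich the prompt with implicit world-knowledge
    # constraints (e.g., physics, cultural commonsense).
    return prompt + " [constraint: physically plausible shadows]"

def generate(plan_text: str) -> dict:
    # Stand-in for text-to-image generation. The mock "image" carries
    # a list of residual visual errors for the reflection stage.
    return {"source": plan_text, "errors": ["wrong shadow direction"]}

def reflect(image: dict) -> list:
    # Self-reflection stage: inspect the output and flag visual errors.
    return list(image["errors"])

def edit(image: dict, flagged: list) -> dict:
    # Fine-grained refinement via editing resolves the flagged errors.
    remaining = [e for e in image["errors"] if e not in flagged]
    return {"source": image["source"], "errors": remaining}

def unireason_loop(prompt: str, max_rounds: int = 3) -> dict:
    # Plan first, then alternate reflection and editing until no
    # errors remain or the round budget is exhausted.
    image = generate(plan(prompt))
    for _ in range(max_rounds):
        flagged = reflect(image)
        if not flagged:
            break
        image = edit(image, flagged)
    return image

result = unireason_loop("a cat under a streetlamp at night")
print(result["errors"])  # -> []
```

Keeping generation and editing in one loop over a shared state, rather than as separate pipelines, is the structural point the abstract makes; the stubs above only illustrate that control flow.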