OneHOI: Unifying Human-Object Interaction Generation and Editing
April 15, 2026
Authors: Jiun Tian Hoe, Weipeng Hu, Xudong Jiang, Yap-Peng Tan, Chee Seng Chan
cs.AI
Abstract
Human-Object Interaction (HOI) modelling captures how humans act upon and relate to objects, typically expressed as <person, action, object> triplets. Existing approaches split into two disjoint families: HOI generation synthesises scenes from structured triplets and layout, but fails to integrate mixed conditions such as HOI and object-only entities; HOI editing modifies interactions via text, yet struggles to decouple pose from physical contact and to scale to multiple interactions. We introduce OneHOI, a unified diffusion transformer framework that consolidates HOI generation and editing into a single conditional denoising process driven by shared structured interaction representations. At its core, the Relational Diffusion Transformer (R-DiT) models verb-mediated relations through role- and instance-aware HOI tokens, layout-based spatial Action Grounding, Structured HOI Attention that enforces interaction topology, and HOI RoPE that disentangles multi-HOI scenes. Trained jointly with modality dropout on our HOI-Edit-44K, along with HOI and object-centric datasets, OneHOI supports layout-guided, layout-free, arbitrary-mask, and mixed-condition control, achieving state-of-the-art results across both HOI generation and editing. Code is available at https://jiuntian.github.io/OneHOI/.
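The abstract credits modality dropout during joint training with enabling the flexible conditioning modes (layout-guided, layout-free, arbitrary-mask, mixed). A minimal sketch of this standard technique, with hypothetical modality names and dropout rate (the paper's actual conditioning signals and probabilities are not specified here): each conditioning modality is independently nulled out during training, so the denoiser learns to operate under any subset of conditions.

```python
import random

def apply_modality_dropout(conditions, p_drop=0.1, rng=random):
    """Independently replace each conditioning modality with None
    (an unconditional placeholder) with probability p_drop, so the
    model learns to denoise under any subset of conditions."""
    return {
        name: (None if rng.random() < p_drop else value)
        for name, value in conditions.items()
    }

# Hypothetical conditioning dict for one training sample:
conds = {
    "layout": [[0.1, 0.2, 0.5, 0.6]],          # bounding boxes
    "hoi_tokens": ("person", "ride", "horse"),  # structured triplet
    "text": "a person riding a horse",
}
dropped = apply_modality_dropout(conds, p_drop=0.5)
```

At inference, the same model can then be queried with only the modalities the user supplies (e.g. layout-free editing passes `layout=None`), which is what makes a single network cover both generation and editing regimes.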