

MMHU: A Massive-Scale Multimodal Benchmark for Human Behavior Understanding

July 16, 2025
作者: Renjie Li, Ruijie Ye, Mingyang Wu, Hao Frank Yang, Zhiwen Fan, Hezhen Hu, Zhengzhong Tu
cs.AI

Abstract

Humans are integral components of the transportation ecosystem, and understanding their behaviors is crucial to facilitating the development of safe driving systems. Although recent progress has explored various aspects of human behavior, such as motion, trajectories, and intention, a comprehensive benchmark for evaluating human behavior understanding in autonomous driving remains unavailable. In this work, we propose MMHU, a large-scale benchmark for human behavior analysis featuring rich annotations, such as human motion and trajectories, text descriptions of human motions, human intention, and critical behavior labels relevant to driving safety. Our dataset encompasses 57k human motion clips and 1.73M frames gathered from diverse sources, including established driving datasets such as Waymo, in-the-wild videos from YouTube, and self-collected data. A human-in-the-loop annotation pipeline is developed to generate rich behavior captions. We provide a thorough dataset analysis and benchmark multiple tasks, ranging from motion prediction to motion generation and human behavior question answering, thereby offering a broad evaluation suite. Project page: https://MMHU-Benchmark.github.io.
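As a rough illustration of what one annotated sample might contain, the sketch below defines a hypothetical record combining the annotation types named in the abstract (motion, trajectory, text caption, intention, and safety-critical behavior labels). The class name, field names, and value formats are illustrative assumptions, not the dataset's actual schema; consult the project page for the real data format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for one MMHU motion-clip record, inferred only from the
# annotation types listed in the abstract; names and formats are illustrative.
@dataclass
class MMHUClip:
    clip_id: str                       # unique identifier for the human motion clip
    source: str                        # e.g. "waymo", "youtube", or "self-collected"
    frames: List[str]                  # paths to the clip's image frames
    motion: List[List[float]]          # per-frame human motion/pose parameters
    trajectory: List[List[float]]      # per-frame positions of the person
    caption: str                       # free-text description of the motion
    intention: str                     # annotated human intention
    critical_behaviors: List[str] = field(default_factory=list)  # safety-relevant behavior tags

# Example usage with placeholder values.
example = MMHUClip(
    clip_id="clip_000001",
    source="waymo",
    frames=["frame_0001.jpg", "frame_0002.jpg"],
    motion=[[0.0] * 10, [0.1] * 10],
    trajectory=[[1.0, 2.0], [1.1, 2.0]],
    caption="A pedestrian walks along the roadside and glances toward traffic.",
    intention="cross the street",
    critical_behaviors=["near-road walking"],
)
print(example.clip_id, example.intention)
```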