OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation
May 26, 2025
Authors: Shenghai Yuan, Xianyi He, Yufan Deng, Yang Ye, Jinfa Huang, Bin Lin, Chongyang Ma, Jiebo Luo, Li Yuan
cs.AI
Abstract
Subject-to-Video (S2V) generation aims to create videos that faithfully
incorporate reference content, providing enhanced flexibility in video
production. To establish the infrastructure for S2V generation, we propose
OpenS2V-Nexus, consisting of (i) OpenS2V-Eval, a fine-grained benchmark, and
(ii) OpenS2V-5M, a million-scale dataset. In contrast to existing S2V
benchmarks inherited from VBench that focus on global and coarse-grained
assessment of generated videos, OpenS2V-Eval focuses on the model's ability to
generate subject-consistent videos with natural subject appearance and identity
fidelity. For these purposes, OpenS2V-Eval introduces 180 prompts from seven
major categories of S2V, which incorporate both real and synthetic test data.
Furthermore, to accurately align human preferences with S2V benchmarks, we
propose three automatic metrics, NexusScore, NaturalScore, and GmeScore, to
separately quantify subject consistency, naturalness, and text relevance in
generated videos. Building on this, we conduct a comprehensive evaluation of 16
representative S2V models, highlighting their strengths and weaknesses across
different content. Moreover, we create the first open-source large-scale S2V
generation dataset, OpenS2V-5M, which consists of five million high-quality 720P
subject-text-video triples. Specifically, we ensure subject-information
diversity in our dataset by (1) segmenting subjects and building pairing
information via cross-video associations and (2) prompting GPT-Image-1 on raw
frames to synthesize multi-view representations. Through OpenS2V-Nexus, we
deliver a robust infrastructure to accelerate future S2V generation research.
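
As a rough illustration of the evaluation scheme described above, the sketch below combines the three per-video metric values into one aggregate score. The `EvalResult` class, its field names, the assumed [0, 1] value ranges, and the weights are all hypothetical; the paper's actual metric implementations and any official aggregation formula are not given here.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """Hypothetical container for the three OpenS2V-Eval metric values.

    Field names and the assumed [0, 1] ranges are illustrative only.
    """
    nexus_score: float    # subject consistency
    natural_score: float  # subject naturalness
    gme_score: float      # text relevance

    def aggregate(self, w_subject: float = 0.4,
                  w_natural: float = 0.3,
                  w_text: float = 0.3) -> float:
        # Simple weighted average; these weights are assumptions,
        # not the paper's official leaderboard formula.
        return (w_subject * self.nexus_score
                + w_natural * self.natural_score
                + w_text * self.gme_score)

# Example with made-up scores for a single generated video:
result = EvalResult(nexus_score=0.82, natural_score=0.74, gme_score=0.91)
print(f"Aggregate S2V score: {result.aggregate():.3f}")  # -> 0.823
```

A weighted average like this makes the trade-off between subject fidelity, naturalness, and prompt following explicit; any real ranking of the 16 evaluated models would follow the paper's own protocol.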