TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation
November 5, 2024
Authors: Wenhao Wang, Yi Yang
cs.AI
Abstract
Video generation models are revolutionizing content creation, with
image-to-video models drawing increasing attention due to their enhanced
controllability, visual consistency, and practical applications. However,
despite their popularity, these models rely on user-provided text and image
prompts, and there is currently no dedicated dataset for studying these
prompts. In this paper, we introduce TIP-I2V, the first large-scale dataset of
over 1.70 million unique user-provided Text and Image Prompts specifically for
Image-to-Video generation. Additionally, we provide the corresponding videos
generated by five state-of-the-art image-to-video models. We begin by outlining
the time-consuming and costly process of curating this large-scale dataset.
Next, we compare TIP-I2V to two popular prompt datasets, VidProM
(text-to-video) and DiffusionDB (text-to-image), highlighting differences in
both basic and semantic information. This dataset enables advancements in
image-to-video research. For instance, to develop better models, researchers
can use the prompts in TIP-I2V to analyze user preferences and evaluate the
multi-dimensional performance of their trained models; and to enhance model
safety, they may focus on addressing the misinformation issue caused by
image-to-video models. The new research inspired by TIP-I2V, together with its
differences from existing datasets, underscores the importance of a specialized image-to-video
prompt dataset. The project is publicly available at https://tip-i2v.github.io.
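As a concrete starting point for the prompt analyses suggested above, the following is a minimal Python sketch that loads TIP-I2V and tallies the most frequent words in a sample of the text prompts. It assumes the dataset is distributed through the Hugging Face Hub; the repository id ("WenhaoWang/TIP-I2V") and the "Prompt" field name are illustrative assumptions, not details stated in the abstract.

    # A minimal sketch, assuming TIP-I2V is hosted on the Hugging Face Hub.
    # The repo id "WenhaoWang/TIP-I2V" and the "Prompt" field are assumptions.
    from collections import Counter

    from datasets import load_dataset

    # Load the prompt records (the paper reports over 1.70 million unique prompts).
    dataset = load_dataset("WenhaoWang/TIP-I2V", split="train")
    print(f"Number of prompt records: {len(dataset):,}")

    # Toy user-preference analysis: count the most frequent words in the first
    # 10,000 text prompts; a real study would use proper tokenization or embeddings.
    word_counts = Counter()
    for record in dataset.select(range(10_000)):
        word_counts.update(record["Prompt"].lower().split())

    print(word_counts.most_common(20))

The same loop could be replaced with embedding-based clustering to surface common prompt topics rather than individual words.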