Interpolating between Images with Diffusion Models
July 24, 2023
Authors: Clinton J. Wang, Polina Golland
cs.AI
Abstract
One little-explored frontier of image generation and editing is the task of
interpolating between two input images, a feature missing from all currently
deployed image generation pipelines. We argue that such a feature can expand
the creative applications of these models, and propose a method for zero-shot
interpolation using latent diffusion models. We apply interpolation in the
latent space at a sequence of decreasing noise levels, then perform denoising
conditioned on interpolated text embeddings derived from textual inversion and
(optionally) subject poses. For greater consistency, or to specify additional
criteria, we can generate several candidates and use CLIP to select the highest
quality image. We obtain convincing interpolations across diverse subject
poses, image styles, and image content, and show that standard quantitative
metrics such as FID are insufficient to measure the quality of an
interpolation. Code and data are available at
https://clintonjwang.github.io/interpolation.
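
The latent-space interpolation the abstract describes can be sketched with spherical linear interpolation (slerp), a common choice for interpolating Gaussian diffusion latents. The snippet below is an illustrative sketch under stated assumptions, not the authors' implementation: the `slerp` helper, the latent shapes, and the random stand-in latents are all hypothetical.

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two latent tensors.

    Interpolates along the great circle through v0 and v1, which keeps the
    result near the same shell of the Gaussian prior; plain linear
    interpolation would shrink the norm and tends to decode poorly.
    """
    a, b = v0.ravel(), v1.ravel()
    cos_theta = np.clip(
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0
    )
    theta = np.arccos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Toy stand-ins for the noisy latents of the two input images at one noise level.
rng = np.random.default_rng(0)
z0 = rng.standard_normal((4, 8, 8))
z1 = rng.standard_normal((4, 8, 8))

# Midpoint latent; in the full method this would then be denoised while
# conditioning on a correspondingly interpolated text embedding.
z_mid = slerp(z0, z1, 0.5)
```

In the full pipeline this interpolation would be applied at each of a sequence of decreasing noise levels, with CLIP used afterwards to rank several candidate decodes.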