Deep Geometrized Cartoon Line Inbetweening
September 28, 2023
Authors: Li Siyao, Tianpei Gu, Weiye Xiao, Henghui Ding, Ziwei Liu, Chen Change Loy
cs.AI
Abstract
We aim to address a significant but understudied problem in the anime
industry, namely the inbetweening of cartoon line drawings. Inbetweening
involves generating intermediate frames between two black-and-white line
drawings and is a time-consuming and expensive process that can benefit from
automation. However, existing frame interpolation methods that rely on matching
and warping whole raster images are unsuitable for line inbetweening and often
produce blurring artifacts that damage the intricate line structures. To
preserve the precision and detail of the line drawings, we propose a new
approach, AnimeInbet, which geometrizes raster line drawings into graphs of
endpoints and reframes the inbetweening task as a graph fusion problem with
vertex repositioning. Our method can effectively capture the sparsity and
unique structure of line drawings while preserving the details during
inbetweening. This is made possible via our novel modules, i.e., vertex
geometric embedding, a vertex correspondence Transformer, an effective
mechanism for vertex repositioning and a visibility predictor. To train our
method, we introduce MixamoLine240, a new dataset of line drawings with ground
truth vectorization and matching labels. Our experiments demonstrate that
AnimeInbet synthesizes high-quality, clean, and complete intermediate line
drawings, outperforming existing methods quantitatively and qualitatively,
especially in cases with large motions. Data and code are available at
https://github.com/lisiyao21/AnimeInbet.
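
To make the graph-based framing concrete, here is a minimal, hypothetical sketch of the core idea: represent each key frame's line drawing as a graph of stroke endpoints, then produce an intermediate frame by repositioning matched vertices. The `LineGraph` class, `inbetween` function, and the simple linear interpolation are illustrative assumptions, not the AnimeInbet implementation (which learns correspondences with a Transformer and predicts vertex visibility).

```python
from dataclasses import dataclass

@dataclass
class LineGraph:
    # Vertices are 2D stroke endpoints/junctions; edges connect them.
    vertices: list[tuple[float, float]]
    edges: list[tuple[int, int]]

def inbetween(g0: LineGraph, g1: LineGraph,
              correspondence: dict[int, int], t: float = 0.5) -> LineGraph:
    """Interpolate matched vertex positions between two key-frame graphs.

    `correspondence` maps each vertex index in g0 to its match in g1.
    Unmatched vertices would require a visibility decision (drawn vs.
    occluded), which is omitted in this sketch.
    """
    verts = []
    for i, (x0, y0) in enumerate(g0.vertices):
        j = correspondence[i]  # assume a full matching for simplicity
        x1, y1 = g1.vertices[j]
        verts.append(((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1))
    # Line topology (edges) is inherited from the source frame.
    return LineGraph(verts, list(g0.edges))

# Two key frames: one stroke translating 4 units to the right.
g0 = LineGraph([(0.0, 0.0), (10.0, 0.0)], [(0, 1)])
g1 = LineGraph([(4.0, 0.0), (14.0, 0.0)], [(0, 1)])
mid = inbetween(g0, g1, {0: 0, 1: 1}, t=0.5)
print(mid.vertices)  # [(2.0, 0.0), (12.0, 0.0)]
```

Because the output is a graph rather than a raster image, the interpolated frame stays a set of crisp line segments; this is the property that warping-based raster interpolators lose when they blur intricate line structures.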