MaGGIe: Masked Guided Gradual Human Instance Matting
April 24, 2024
作者: Chuong Huynh, Seoung Wug Oh, Abhinav Shrivastava, Joon-Young Lee
cs.AI
Abstract
Human matting is a foundation task in image and video processing, where human
foreground pixels are extracted from the input. Prior works either improve the
accuracy by additional guidance or improve the temporal consistency of a single
instance across frames. We propose a new framework MaGGIe, Masked Guided
Gradual Human Instance Matting, which predicts alpha mattes progressively for
each human instance while maintaining computational cost, precision, and
consistency. Our method leverages modern architectures, including transformer
attention and sparse convolution, to output all instance mattes simultaneously
without exploding memory and latency. While keeping inference costs constant
in the multi-instance scenario, our framework achieves robust and versatile
performance on our proposed synthesized benchmarks. To build these higher-quality
image and video matting benchmarks, we introduce a novel multi-instance synthesis
approach based on publicly available sources, which increases the generalization
of models to real-world scenarios.
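
To make the core idea concrete, the sketch below shows one hypothetical way to predict all instance alpha mattes in a single forward pass from coarse binary mask guidance, so cost does not grow per instance. This is an illustrative assumption, not the authors' architecture: the module name, channel sizes, pooling, and use of torch.nn.MultiheadAttention for instance-to-instance interaction are stand-ins for the paper's transformer-attention and sparse-convolution design.

```python
# Minimal, hypothetical sketch (not the authors' code): image + K coarse
# instance masks in, K alpha mattes out, in one forward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyInstanceMatting(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared image encoder: one feature map reused by every instance.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fuse shared features with each instance's mask guidance, then let
        # instances attend to each other so their mattes stay consistent.
        self.fuse = nn.Conv2d(feat_dim + 1, feat_dim, 3, padding=1)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Conv2d(feat_dim, 1, 3, padding=1)

    def forward(self, image: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # image: [B, 3, H, W]; masks: [B, K, H, W] coarse binary guidance.
        B, K, H, W = masks.shape
        feat = self.encoder(image)                                   # [B, C, h, w]
        h, w = feat.shape[-2:]
        small_masks = F.interpolate(masks, size=(h, w), mode="nearest")
        # Broadcast the shared features to every instance and append its mask.
        inst = feat.unsqueeze(1).expand(-1, K, -1, -1, -1)           # [B, K, C, h, w]
        inst = torch.cat([inst, small_masks.unsqueeze(2)], dim=2)    # [B, K, C+1, h, w]
        inst = self.fuse(inst.reshape(B * K, -1, h, w))              # [B*K, C, h, w]
        # One pooled token per instance; attention mixes information across instances.
        tokens = inst.mean(dim=(-2, -1)).reshape(B, K, -1)           # [B, K, C]
        tokens, _ = self.attn(tokens, tokens, tokens)
        inst = inst + tokens.reshape(B * K, -1, 1, 1)
        alphas = torch.sigmoid(self.head(inst)).reshape(B, K, h, w)
        # Upsample all K mattes back to input resolution in one shot.
        return F.interpolate(alphas, size=(H, W), mode="bilinear",
                             align_corners=False)                    # [B, K, H, W]


if __name__ == "__main__":
    model = ToyInstanceMatting()
    img = torch.rand(1, 3, 256, 256)
    guide = (torch.rand(1, 3, 256, 256) > 0.5).float()  # 3 coarse instance masks
    print(model(img, guide).shape)  # torch.Size([1, 3, 256, 256])
```

Because the encoder runs once and the K instances are processed as a batched dimension, adding instances mainly adds lightweight per-instance work rather than repeating the whole network, which mirrors the constant-cost goal described in the abstract.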