Instant Multi-View Head Capture through Learnable Registration
June 12, 2023
Authors: Timo Bolkart, Tianye Li, Michael J. Black
cs.AI
Abstract
Existing methods for capturing datasets of 3D heads in dense semantic correspondence are slow, and commonly address the problem in two separate steps: multi-view stereo (MVS) reconstruction followed by non-rigid registration. To simplify this process, we introduce TEMPEH (Towards Estimation of 3D Meshes from Performances of Expressive Heads) to directly infer 3D heads in dense correspondence from calibrated multi-view images. Registering datasets of 3D scans typically requires manual parameter tuning to find the right balance between accurately fitting the scans' surfaces and being robust to scanning noise and outliers. Instead, we propose to jointly register a 3D head dataset while training TEMPEH. Specifically, during training we minimize a geometric loss commonly used for surface registration, effectively leveraging TEMPEH as a regularizer. Our multi-view head inference builds on a volumetric feature representation that samples and fuses features from each view using camera calibration information. To account for partial occlusions and a large capture volume that enables head movements, we use view- and surface-aware feature fusion, and a spatial transformer-based head localization module, respectively. We use raw MVS scans as supervision during training, but, once trained, TEMPEH directly predicts 3D heads in dense correspondence without requiring scans. Predicting one head takes about 0.3 seconds with a median reconstruction error of 0.26 mm, 64% lower than the current state-of-the-art. This enables the efficient capture of large datasets containing multiple people and diverse facial motions. Code, model, and data are publicly available at https://tempeh.is.tue.mpg.de.
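
The volumetric feature step described in the abstract can be made concrete with a short sketch. The following PyTorch code is not the authors' implementation; all tensor shapes, function names, and the mean fusion rule are illustrative assumptions. It projects the 3D points of a feature grid into each calibrated view using intrinsics `K` and extrinsics `Rt`, bilinearly samples each view's 2D feature map at the projected locations, and fuses the per-view samples:

```python
# Minimal sketch of camera-calibrated volumetric feature sampling.
import torch
import torch.nn.functional as F


def sample_volumetric_features(feat_maps, K, Rt, grid_pts):
    """feat_maps: (V, C, H, W) per-view 2D feature maps
    K:        (V, 3, 3) camera intrinsics
    Rt:       (V, 3, 4) world-to-camera extrinsics [R | t]
    grid_pts: (N, 3)    3D sample points of the feature volume (world space)
    returns:  (N, C)    fused per-point features
    """
    V, C, H, W = feat_maps.shape
    N = grid_pts.shape[0]
    ones = torch.ones(N, 1, dtype=grid_pts.dtype, device=grid_pts.device)
    pts_h = torch.cat([grid_pts, ones], dim=1)              # (N, 4) homogeneous

    per_view = []
    for v in range(V):
        cam = (Rt[v] @ pts_h.T).T                           # (N, 3) camera space
        uv = (K[v] @ cam.T).T                               # (N, 3) image space
        uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)          # perspective divide
        # Normalize pixel coordinates to [-1, 1] for grid_sample;
        # points outside the image receive zero features (zero padding).
        uv_n = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=1) * 2 - 1
        sampled = F.grid_sample(
            feat_maps[v:v + 1],                             # (1, C, H, W)
            uv_n.view(1, N, 1, 2),                          # (1, N, 1, 2)
            align_corners=True,
        ).view(C, N).T                                      # (N, C)
        per_view.append(sampled)

    # Plain mean fusion across views; TEMPEH's view- and surface-aware
    # fusion would instead weight views, e.g. by visibility.
    return torch.stack(per_view, dim=0).mean(dim=0)
```

The mean here is only a placeholder: the point of the paper's view- and surface-aware fusion is to downweight views in which a grid point is occluded or far from the surface, which a uniform average cannot do.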
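The geometric loss that supervises TEMPEH with raw MVS scans can likewise be sketched. Below, a symmetric, clamped chamfer distance stands in for the paper's point-to-surface registration term; the clamping threshold is an illustrative assumption meant to reflect the robustness-to-outliers trade-off the abstract mentions:

```python
# Minimal sketch of a robust geometric registration loss against raw scans.
import torch


def chamfer_loss(pred_pts, scan_pts, max_dist=0.01):
    """pred_pts: (M, 3) points sampled from the predicted head mesh
    scan_pts:   (S, 3) raw MVS scan points
    max_dist:   clamp threshold (assumed meters) for outlier robustness
    """
    d = torch.cdist(pred_pts, scan_pts)        # (M, S) pairwise distances
    pred_to_scan = d.min(dim=1).values         # nearest scan point per prediction
    scan_to_pred = d.min(dim=0).values         # nearest prediction per scan point
    # Clamp large distances so scan noise and outliers don't dominate.
    return pred_to_scan.clamp(max=max_dist).mean() + \
           scan_to_pred.clamp(max=max_dist).mean()
```

Minimizing such a loss over the whole training set is what lets the network act as a regularizer: instead of tuning registration parameters per scan, implausible fits are penalized implicitly because one network must explain every scan.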
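Finally, a minimal sketch of spatial-transformer-style head localization, assuming a predicted rigid transform `theta` (the network that predicts it is omitted): the transform re-samples a head-centered subvolume from the coarse capture volume, so downstream inference operates on a canonical crop regardless of where the head moves within the capture volume. The 3D `affine_grid`/`grid_sample` usage follows standard spatial transformer networks, not TEMPEH's exact module:

```python
# Minimal sketch of spatial-transformer-based head localization in 3D.
import torch
import torch.nn.functional as F


def localize_head(volume, theta, out_size=(32, 32, 32)):
    """volume: (1, C, D, H, W) coarse feature volume of the capture space
    theta:    (1, 3, 4) predicted rigid transform (rotation | translation)
    returns:  (1, C, d, h, w) head-centered subvolume
    """
    grid = F.affine_grid(theta, [1, volume.shape[1], *out_size],
                         align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)
```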