HyperFields: Towards Zero-Shot Generation of NeRFs from Text
October 26, 2023
Authors: Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka
cs.AI
Abstract
We introduce HyperFields, a method for generating text-conditioned Neural Radiance Fields (NeRFs) with a single forward pass and (optionally) some fine-tuning. Key to our approach are: (i) a dynamic hypernetwork, which learns a smooth mapping from text token embeddings to the space of NeRFs; (ii) NeRF distillation training, which distills scenes encoded in individual NeRFs into one dynamic hypernetwork. These techniques enable a single network to fit over a hundred unique scenes.
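To make the first ingredient concrete, here is a minimal PyTorch sketch of a hypernetwork whose linear heads emit the weights and biases of a small NeRF-style MLP from a pooled text embedding. All names, layer sizes, and interfaces are illustrative assumptions rather than the paper's code, and the sketch shows only a static text-to-weights mapping; the paper's dynamic variant is more elaborate.

```python
import torch
import torch.nn as nn

class HyperNeRF(nn.Module):
    """Toy hypernetwork: one linear head per generated NeRF layer
    (hypothetical architecture; sizes chosen for illustration)."""

    def __init__(self, text_dim=768, hidden=64, n_layers=4):
        super().__init__()
        self.n_layers = n_layers
        # (input, output) sizes of the generated NeRF MLP: 3D position in,
        # RGB + density (4 values) out.
        self.dims = [(3, hidden)] + [(hidden, hidden)] * (n_layers - 2) + [(hidden, 4)]
        # Each head emits one layer's flattened weight matrix plus its bias.
        self.heads = nn.ModuleList(
            nn.Linear(text_dim, d_in * d_out + d_out) for d_in, d_out in self.dims
        )

    def forward(self, text_emb, xyz):
        # text_emb: (text_dim,) pooled prompt embedding; xyz: (N, 3) points.
        h = xyz
        for i, ((d_in, d_out), head) in enumerate(zip(self.dims, self.heads)):
            params = head(text_emb)                      # predict layer i's parameters
            W = params[: d_in * d_out].view(d_out, d_in)
            b = params[d_in * d_out:]
            h = h @ W.t() + b
            if i < self.n_layers - 1:
                h = torch.relu(h)
        return torch.sigmoid(h[:, :3]), torch.relu(h[:, 3])  # per-point RGB, density

# Usage: one forward pass turns a prompt embedding into a queryable field.
# rgb, sigma = HyperNeRF()(torch.randn(768), torch.rand(1024, 3))
```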
We further demonstrate that HyperFields learns a more general map between text and NeRFs, and consequently can predict novel in-distribution and out-of-distribution scenes, either zero-shot or with a few fine-tuning steps. Thanks to this learned general map, fine-tuning HyperFields converges quickly, synthesizing novel scenes 5 to 10 times faster than existing neural optimization-based methods. Our ablation experiments show that both the dynamic architecture and NeRF distillation are critical to the expressivity of HyperFields.
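As a rough illustration of NeRF distillation training, the sketch below fits the hypernetwork to frozen, individually trained teacher NeRFs, one per scene. The abstract does not say whether distillation supervises rendered pixels or per-point field values; this sketch uses per-point MSE on color and density purely for brevity, and `teachers`, `text_embs`, and the sampling bounds are hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_step(hyper, teachers, text_embs, optimizer, n_points=4096):
    """One NeRF-distillation step: match each frozen single-scene teacher
    NeRF at randomly sampled 3D points. `teachers` and `text_embs` are
    parallel lists, one entry per training scene (hypothetical interface)."""
    optimizer.zero_grad()
    loss = 0.0
    for teacher, emb in zip(teachers, text_embs):
        xyz = torch.rand(n_points, 3) * 2 - 1        # assume scenes lie in [-1, 1]^3
        with torch.no_grad():
            rgb_t, sigma_t = teacher(xyz)            # frozen teacher predictions
        rgb_s, sigma_s = hyper(emb, xyz)             # hypernetwork (student) predictions
        loss = loss + F.mse_loss(rgb_s, rgb_t) + F.mse_loss(sigma_s, sigma_t)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the distillation targets are cheap to query, a single network can be trained against many teachers at once, which is how one hypernetwork comes to encode over a hundred scenes.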