

Compact Neural Graphics Primitives with Learned Hash Probing

December 28, 2023
Authors: Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
cs.AI

Abstract

Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid. However, existing feature grids either come with a large memory footprint (dense or factorized grids, trees, and hash tables) or slow performance (index learning and vector quantization). In this paper, we show that a hash table with learned probes has neither disadvantage, resulting in a favorable combination of size and speed. Inference is faster than unprobed hash tables at equal quality while training is only 1.2-2.6x slower, significantly outperforming prior index learning approaches. We arrive at this formulation by casting all feature grids into a common framework: they each correspond to a lookup function that indexes into a table of feature vectors. In this framework, the lookup functions of existing data structures can be combined by simple arithmetic combinations of their indices, resulting in Pareto optimal compression and speed.
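The abstract's common framework treats every feature grid as a lookup function that maps spatial coordinates to an index into a table of feature vectors, with data structures combinable by simple index arithmetic. The following is a minimal sketch of that idea for learned hash probing: a spatial hash (in the style of Instant NGP) produces a base index, and a small learned per-bucket probe offset selects the final slot. All names, table sizes, and the specific probing scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sizes -- not the paper's hyperparameters.
TABLE_SIZE = 2 ** 14      # number of feature vectors in the hash table
FEATURE_DIM = 2           # feature-vector width per table entry
PROBE_RANGE = 4           # candidate slots each lookup may probe

rng = np.random.default_rng(0)

# Trainable feature table (randomly initialized here).
feature_table = rng.normal(size=(TABLE_SIZE, FEATURE_DIM)).astype(np.float32)

# Learned probe offsets, one per bucket. In training these would be
# optimized jointly with the features; here they are fixed for illustration.
probe_index = rng.integers(0, PROBE_RANGE, size=TABLE_SIZE)

# Per-dimension primes for the spatial hash (as in Instant NGP).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(coords):
    """Hash integer grid coordinates to a base table index."""
    coords = np.asarray(coords, dtype=np.uint64)
    h = np.uint64(0)
    for c, p in zip(coords, PRIMES):
        h ^= c * p  # uint64 arithmetic wraps, which is the intended behavior
    return int(h % np.uint64(TABLE_SIZE))

def lookup(coords):
    """Index arithmetic: base hash index combined with a learned probe offset."""
    base = spatial_hash(coords)
    slot = (base + int(probe_index[base])) % TABLE_SIZE
    return feature_table[slot]

feat = lookup((3, 17, 42))
print(feat.shape)  # one feature vector of width FEATURE_DIM
```

The `slot = base + probe_index[base]` line is the "simple arithmetic combination of indices" the abstract refers to: an unprobed hash table is the special case where all probe offsets are zero, and richer index-learning schemes correspond to other choices of the offset function.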