Compact Neural Graphics Primitives with Learned Hash Probing
December 28, 2023
Authors: Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
cs.AI
Abstract
Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid. However, existing feature grids either come with a large memory footprint (dense or factorized grids, trees, and hash tables) or slow performance (index learning and vector quantization). In this paper, we show that a hash table with learned probes has neither disadvantage, resulting in a favorable combination of size and speed. Inference is faster than unprobed hash tables at equal quality while training is only 1.2-2.6x slower, significantly outperforming prior index learning approaches. We arrive at this formulation by casting all feature grids into a common framework: they each correspond to a lookup function that indexes into a table of feature vectors. In this framework, the lookup functions of existing data structures can be combined by simple arithmetic combinations of their indices, resulting in Pareto-optimal compression and speed.
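To make the "arithmetic combination of indices" concrete, below is a minimal NumPy sketch of a hash-table lookup with learned probing. The table sizes, the prime constants, and the exact way the probe offset is folded into the base index are illustrative assumptions, not the paper's exact formulation: a base spatial hash selects a bucket of N_p consecutive table slots, and a second hash (coarser in the paper) fetches a learned offset from an index codebook; their arithmetic combination indexes the feature table.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's settings):
# N_f feature-table rows, N_p probing range, N_c index-codebook entries, F feature dim.
N_f, N_p, N_c, F = 2**14, 2**4, 2**10, 2

rng = np.random.default_rng(0)
feature_table = rng.normal(size=(N_f, F)).astype(np.float32)      # trainable in practice
probe_codebook = rng.integers(0, N_p, size=N_c, dtype=np.uint64)  # learned during training

# Per-dimension primes of the common XOR spatial hash, as used by
# multiresolution hash encodings (first prime set to 1).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(coords: np.ndarray, table_size: int) -> np.ndarray:
    """XOR of per-dimension products, reduced modulo the table size."""
    h = np.zeros(coords.shape[0], dtype=np.uint64)
    for d in range(coords.shape[1]):
        h ^= coords[:, d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def lookup(coords: np.ndarray) -> np.ndarray:
    """Arithmetic combination of two lookup functions: a base hash picks a
    bucket of N_p consecutive slots; a second hash selects a learned probe
    offset within that bucket."""
    bucket = spatial_hash(coords, N_f // N_p)          # unlearned part of the index
    probe = probe_codebook[spatial_hash(coords, N_c)]  # learned part, in [0, N_p)
    return feature_table[bucket * np.uint64(N_p) + probe]

# Two integer grid vertices in 3D; each lookup returns an F-dimensional feature.
print(lookup(np.array([[3, 5, 7], [10, 20, 30]])).shape)  # (2, 2)
```

In this reading, the extra inference cost over a plain hash table is one additional codebook fetch and an integer multiply-add, consistent with the abstract's claim of faster inference at equal quality (the feature table can be made smaller); the reported 1.2-2.6x training slowdown stems from optimizing the discrete probe indices, which a forward lookup never pays for.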