

RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination

May 28, 2025
Authors: Chong Zeng, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong
cs.AI

Abstract

We present RenderFormer, a neural rendering pipeline that directly renders an image from a triangle-based representation of a scene with full global illumination effects and that does not require per-scene training or fine-tuning. Instead of taking a physics-centric approach to rendering, we formulate rendering as a sequence-to-sequence transformation, where a sequence of tokens representing triangles with reflectance properties is converted to a sequence of output tokens representing small patches of pixels. RenderFormer follows a two-stage pipeline: a view-independent stage that models triangle-to-triangle light transport, and a view-dependent stage that transforms a token representing a bundle of rays to the corresponding pixel values, guided by the triangle sequence from the view-independent stage. Both stages are based on the transformer architecture and are learned with minimal prior constraints. We demonstrate and evaluate RenderFormer on scenes with varying complexity in shape and light transport.
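The two-stage token flow described above can be sketched with single-head scaled dot-product attention: self-attention over triangle tokens stands in for the view-independent light-transport stage, and cross-attention from ray-bundle tokens to the resulting triangle tokens stands in for the view-dependent stage. This is a minimal illustrative sketch, not the paper's architecture; the token dimensions, the random embeddings, and the linear RGB head are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

rng = np.random.default_rng(0)
d_model = 16
n_tris, n_rays = 8, 4  # toy scene: 8 triangle tokens, 4 ray-bundle tokens

# Hypothetical embeddings: triangle tokens encode geometry + reflectance,
# ray tokens each represent a bundle of rays for a patch of pixels.
tri_tokens = rng.normal(size=(n_tris, d_model))
ray_tokens = rng.normal(size=(n_rays, d_model))

# Stage 1 (view-independent): triangle-to-triangle light transport,
# modeled here as self-attention among triangle tokens.
tri_out = attention(tri_tokens, tri_tokens, tri_tokens)

# Stage 2 (view-dependent): each ray-bundle token attends to the
# transport-aware triangle tokens, then a linear head maps to RGB.
W_rgb = rng.normal(size=(d_model, 3))  # illustrative output projection
pixels = attention(ray_tokens, tri_out, tri_out) @ W_rgb

print(pixels.shape)  # one RGB value per ray-bundle token
```

In the actual pipeline each output token corresponds to a small patch of pixels rather than a single RGB triple, and both stages are full transformer stacks; the sketch only shows how a sequence of triangle tokens can condition a sequence of view-dependent ray tokens.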

