PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?
March 7, 2025
Authors: Martin Spitznagel, Jan Vaillant, Janis Keuper
cs.AI
Abstract
The image-to-image translation abilities of generative learning models have
recently made significant progress in estimating complex (steered)
mappings between image distributions. While appearance-based tasks such as image
inpainting or style transfer have been studied at length, we propose to
investigate the potential of generative models in the context of physical
simulations. Providing a dataset of 300k image pairs and baseline evaluations
for three different physical simulation tasks, we propose a benchmark to
investigate the following research questions: i) are generative models able to
learn complex physical relations from input-output image pairs? ii) what
speedups can be achieved by replacing differential-equation-based simulations?
While baseline evaluations of different current models show the potential for
high speedups (ii), these results also reveal strong limitations with respect to
physical correctness (i). This underlines the need for new methods to enforce
physical correctness. Data, baseline models, and evaluation code are available at
http://www.physics-gen.org.
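The core idea behind research question ii) — replacing an iterative, equation-based simulator with a single learned input-to-output image mapping — can be sketched with a deliberately simplified toy example. This is not the paper's method, models, or data: the "simulator" here is a hypothetical heat-diffusion stencil, and the "learned" surrogate is a plain least-squares linear map fitted on input-output image pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8      # toy image resolution
N = 200        # number of training image pairs
STEPS = 10     # iterations of the "differential equation based" solver

def simulate(img, steps=STEPS):
    """Toy heat-diffusion 'simulator': repeated 5-point stencil averaging."""
    u = img.copy()
    for _ in range(steps):
        u = 0.5 * u + 0.125 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                               + np.roll(u, 1, 1) + np.roll(u, -1, 1))
    return u

# Build input-output image pairs, flattened to vectors.
X = rng.normal(size=(N, H * W))
Y = np.stack([simulate(x.reshape(H, W)).ravel() for x in X])

# "Learn" the simulator as one linear map A via least squares.
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# On an unseen input, one matrix product replaces all solver iterations.
x_test = rng.normal(size=(H, W))
y_sim = simulate(x_test).ravel()   # reference: iterative solver
y_sur = x_test.ravel() @ A         # surrogate: single linear step
err = np.max(np.abs(y_sim - y_sur))
print(f"max abs error: {err:.2e}")
```

Because this toy simulator is exactly linear, the surrogate reproduces it almost perfectly while collapsing ten stencil sweeps into one matrix multiply — the source of the speedups the benchmark measures. Real physical simulation tasks are nonlinear, which is why the paper turns to generative image-to-image models, and why physical correctness (question i) becomes the hard part.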