
Inference-Time Scaling for Flow Models via Stochastic Generation and Rollover Budget Forcing

March 25, 2025
Authors: Jaihoon Kim, Taehoon Yoon, Jisung Hwang, Minhyuk Sung
cs.AI

Abstract

We propose an inference-time scaling approach for pretrained flow models. Recently, inference-time scaling has gained significant attention in LLMs and diffusion models, improving sample quality or better aligning outputs with user preferences by leveraging additional computation. For diffusion models, particle sampling has allowed more efficient scaling due to the stochasticity at intermediate denoising steps. In contrast, while flow models have gained popularity as an alternative to diffusion models -- offering faster generation and high-quality outputs in state-of-the-art image and video generative models -- efficient inference-time scaling methods used for diffusion models cannot be directly applied due to their deterministic generative process. To enable efficient inference-time scaling for flow models, we propose three key ideas: 1) SDE-based generation, enabling particle sampling in flow models, 2) Interpolant conversion, broadening the search space and enhancing sample diversity, and 3) Rollover Budget Forcing (RBF), an adaptive allocation of computational resources across timesteps to maximize budget utilization. Our experiments show that SDE-based generation, particularly variance-preserving (VP) interpolant-based generation, improves the performance of particle sampling methods for inference-time scaling in flow models. Additionally, we demonstrate that RBF with VP-SDE achieves the best performance, outperforming all previous inference-time scaling approaches.
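To make the first key idea concrete, the sketch below shows how replacing a flow model's deterministic ODE update with a stochastic (Euler-Maruyama) update restores the per-step randomness that particle sampling relies on. Everything here is a toy stand-in, not the paper's method: `velocity` is a hypothetical hand-written drift toward a scalar target rather than a learned flow model, and `reward` is a hypothetical closeness score rather than a user-preference model; interpolant conversion and Rollover Budget Forcing are not shown.

```python
import numpy as np

def velocity(x, t):
    # Hypothetical stand-in for a learned flow velocity field: drives
    # samples toward the target value 3.0 (toy 1-D example only).
    return 3.0 - x

def sde_step(x, t, dt, sigma, rng):
    # Euler-Maruyama update: the deterministic flow drift plus injected
    # Gaussian noise, which makes intermediate states stochastic.
    drift = velocity(x, t)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x + drift * dt + noise

def particle_sample(n_particles=64, n_steps=20, sigma=0.5, seed=0):
    # SMC-style particle sampling: propagate a population through the
    # stochastic generation process, resampling by reward at each step.
    rng = np.random.default_rng(seed)
    reward = lambda x: -np.abs(x - 3.0)  # hypothetical reward function
    x = rng.standard_normal(n_particles)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        x = sde_step(x, i * dt, dt, sigma, rng)
        # Resample particles with probability proportional to exp(reward),
        # concentrating compute on promising trajectories.
        w = np.exp(reward(x) - reward(x).max())
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return x
```

With a purely deterministic ODE step the noise term vanishes and every resampled copy of a particle evolves identically, so resampling gains nothing; the injected noise is what lets duplicated particles diverge and be re-ranked, which is the property the paper's SDE-based generation restores for flow models.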

