TR2-D2: Tree Search Guided Trajectory-Aware Fine-Tuning for Discrete Diffusion
September 29, 2025
Authors: Sophia Tang, Yuchen Zhu, Molei Tao, Pranam Chatterjee
cs.AI
Abstract
Reinforcement learning with stochastic optimal control offers a promising
framework for diffusion fine-tuning, where a pre-trained diffusion model is
optimized to generate paths that lead to a reward-tilted distribution. While
these approaches enable optimization without access to explicit samples from
the optimal distribution, they require training on rollouts under the current
fine-tuned model, making them susceptible to reinforcing sub-optimal
trajectories that yield poor rewards. To overcome this challenge, we introduce
TRee Search Guided TRajectory-Aware Fine-Tuning for Discrete Diffusion
(TR2-D2), a novel framework that optimizes reward-guided discrete diffusion
trajectories with tree search to construct replay buffers for trajectory-aware
fine-tuning. These buffers are generated using Monte Carlo Tree Search (MCTS)
and subsequently used to fine-tune a pre-trained discrete diffusion model under
a stochastic optimal control objective. We validate our framework on single-
and multi-objective fine-tuning of biological sequence diffusion models,
highlighting the overall effectiveness of TR2-D2 for reliable reward-guided
fine-tuning in discrete sequence generation.
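
The abstract does not spell out the fine-tuning objective, so the following is a hedged point of reference only, in our own notation rather than the paper's: reward-guided diffusion fine-tuning under stochastic optimal control is commonly posed as the KL-regularized problem

$$\max_{\theta}\ \mathbb{E}_{x \sim p_{\theta}}\!\left[r(x)\right] \;-\; \lambda\, D_{\mathrm{KL}}\!\left(p_{\theta} \,\|\, p_{\mathrm{pre}}\right), \qquad p^{\star}(x) \;\propto\; p_{\mathrm{pre}}(x)\, \exp\!\big(r(x)/\lambda\big),$$

where $p_{\mathrm{pre}}$ is the pre-trained discrete diffusion model, $r$ is the reward, and $\lambda > 0$ trades reward against closeness to the pre-trained model. The maximizer of the left-hand objective is the reward-tilted distribution $p^{\star}$ on the right, which is what the abstract refers to as the "reward-tilted distribution." As summarized above, TR2-D2 optimizes an objective of this kind using trajectories selected by MCTS and stored in replay buffers, rather than on-policy rollouts from the current fine-tuned model.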