Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny
July 22, 2025
Authors: Chuanhao Yan, Fengdi Che, Xuhan Huang, Xu Xu, Xin Li, Yizhi Li, Xingwei Qu, Jingzhe Shi, Zhuangzhuang He, Chenghua Lin, Yaodong Yang, Binhang Yuan, Hang Zhao, Yu Qiao, Bowen Zhou, Jie Fu
cs.AI
Abstract
Existing Large Language Models (LLMs) that reason in informal languages (e.g.,
natural language) and are trained with Reinforcement Learning (RL) face a
significant challenge:
their verification processes, which provide crucial training signals, are
neither reliable nor scalable. In fact, even prevailing large proprietary
models can hardly generate verifiable programs. A promising yet largely
uncharted
alternative is formal language-based reasoning. Grounding LLMs in rigorous
formal systems where generative models operate in formal language spaces (e.g.,
Dafny) enables the automatic and mathematically provable verification of their
reasoning processes and outcomes. This capability is pivotal for achieving
large-scale, reliable formal software verification. It is a common practice to
employ human-annotated chain-of-thought and other human priors to induce the
reasoning and coding capabilities of LLMs. Unfortunately, providing such
priors for supervising complex programming tasks is prohibitively
time-consuming. In this work, we systematically explore ways to reduce human
priors with the formal language, Dafny, as the main environment for our pilot
study. Our approach relies mainly on an automatic and scalable data curation
pipeline and on carefully designed RL integrated with feedback from the formal
language verifier. We introduce DafnyComp, a benchmark of compositional
formal programs with auto-formalized specifications for specification
reasoning. Our supervised fine-tuning (SFT) stage enables even small models
(e.g., 0.5B) to generate syntactically valid and verifiable Dafny code,
surpassing proprietary models. RL with regularization further improves
performance, achieving stronger generalization to out-of-domain tasks and
outperforming all strong baselines on the challenging DafnyComp benchmark.
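To make concrete what "syntactically valid and verifiable Dafny code" looks like, below is a minimal illustrative example written for this summary (it is not taken from the paper or its datasets): a method whose precondition, postcondition, and loop invariant are discharged automatically by the Dafny verifier, with no human-written proof or test suite required.

```dafny
// A method with a machine-checkable contract: Dafny proves statically
// that every execution satisfies the postconditions.
method MaxIndex(a: array<int>) returns (idx: int)
  requires a.Length > 0
  ensures 0 <= idx < a.Length
  ensures forall k :: 0 <= k < a.Length ==> a[k] <= a[idx]
{
  idx := 0;
  var i := 1;
  while i < a.Length
    // Loop invariants let the verifier carry the proof through the loop.
    invariant 1 <= i <= a.Length
    invariant 0 <= idx < i
    invariant forall k :: 0 <= k < i ==> a[k] <= a[idx]
  {
    if a[i] > a[idx] {
      idx := i;
    }
    i := i + 1;
  }
}
```

Running a recent Dafny CLI on such a file (e.g., `dafny verify MaxIndex.dfy`, with a hypothetical file name) either discharges every proof obligation or pinpoints the failing assertion; this automatic pass/fail outcome is the kind of verifier feedback the abstract describes as a training signal.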
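DafnyComp is described as a benchmark of compositional formal programs with auto-formalized specifications. As a rough, hypothetical sketch of what "compositional" means in this setting (the method names and contracts below are invented for illustration and are not drawn from DafnyComp), the caller is verified purely against the callee's specification rather than its implementation, so the specifications themselves must be strong enough to compose.

```dafny
// Callee: the contract below is all that callers may rely on.
method Abs(x: int) returns (y: int)
  ensures y >= 0
  ensures y == x || y == -x
{
  if x < 0 { y := -x; } else { y := x; }
}

// Caller: Dafny proves its postconditions using only Abs's ensures
// clauses, never re-inspecting Abs's body; specification reasoning
// in miniature.
method Distance(a: int, b: int) returns (d: int)
  ensures d >= 0
  ensures d == a - b || d == b - a
{
  d := Abs(a - b);
}
```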