Reasoning-SQL: Reinforcement Learning with SQL Tailored Partial Rewards for Reasoning-Enhanced Text-to-SQL
March 29, 2025
Authors: Mohammadreza Pourreza, Shayan Talaei, Ruoxi Sun, Xingchen Wan, Hailong Li, Azalia Mirhoseini, Amin Saberi, Sercan Ö. Arık
cs.AI
Abstract
Text-to-SQL is a challenging task involving multiple reasoning-intensive
subtasks, including natural language understanding, database schema
comprehension, and precise SQL query formulation. Existing approaches often
rely on handcrafted reasoning paths with inductive biases that can limit their
overall effectiveness. Motivated by the recent success of reasoning-enhanced
models such as DeepSeek R1 and OpenAI o1, which effectively leverage
reward-driven self-exploration to enhance reasoning capabilities and
generalization, we propose a novel set of partial rewards tailored specifically
for the Text-to-SQL task. Our reward set includes schema-linking, AI feedback,
n-gram similarity, and syntax check, explicitly designed to address the reward
sparsity issue prevalent in reinforcement learning (RL). Leveraging group
relative policy optimization (GRPO), our approach explicitly encourages large
language models (LLMs) to develop intrinsic reasoning skills necessary for
accurate SQL query generation. With models of different sizes, we demonstrate
that RL-only training with our proposed rewards consistently achieves higher
accuracy and superior generalization compared to supervised fine-tuning (SFT).
Remarkably, our RL-trained 14B-parameter model significantly outperforms larger
proprietary models, e.g., o3-mini by 4% and Gemini-1.5-Pro-002 by 3%, on the
BIRD benchmark. These results highlight the efficacy of our proposed
RL-training framework with partial rewards for enhancing both accuracy and
reasoning capabilities in Text-to-SQL tasks.
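
To make the reward design concrete, below is a minimal sketch of how the described partial rewards could be composed into a single dense training signal. The paper does not publish this code; the helper names (syntax_reward, schema_linking_reward, ngram_similarity, total_reward) and the weights are illustrative assumptions, and the AI-feedback term is omitted since it requires a separate judge model.

```python
import sqlite3

def syntax_reward(sql: str, db_path: str) -> float:
    """1.0 if the candidate SQL compiles against the target database, else 0.0."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(f"EXPLAIN QUERY PLAN {sql}")
        return 1.0
    except sqlite3.Error:
        return 0.0
    finally:
        conn.close()

def schema_linking_reward(pred_sql: str, gold_schema_items: set) -> float:
    """Recall of gold tables/columns that appear in the predicted SQL."""
    pred = pred_sql.lower()
    hits = sum(1 for item in gold_schema_items if item.lower() in pred)
    return hits / len(gold_schema_items) if gold_schema_items else 0.0

def ngram_similarity(pred_sql: str, gold_sql: str, n: int = 3) -> float:
    """Jaccard overlap of token n-grams between predicted and gold SQL."""
    def grams(s: str) -> set:
        toks = s.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)} or {tuple(toks)}
    p, g = grams(pred_sql), grams(gold_sql)
    return len(p & g) / len(p | g) if (p | g) else 0.0

def total_reward(pred_sql, gold_sql, gold_schema_items, db_path, exec_match: float) -> float:
    # Execution accuracy dominates; the partial terms densify the otherwise
    # sparse 0/1 signal. Weights are illustrative, not taken from the paper.
    return (
        1.0 * exec_match
        + 0.2 * syntax_reward(pred_sql, db_path)
        + 0.2 * schema_linking_reward(pred_sql, gold_schema_items)
        + 0.1 * ngram_similarity(pred_sql, gold_sql)
    )
```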
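On the optimization side, the key GRPO mechanic is that each question is sampled several times and rewards are baselined within that group rather than by a learned critic. A self-contained sketch of the group-relative advantage computation follows; the tensor shapes and epsilon are assumptions for illustration, not details from the paper.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: standardize each reward within its own group.

    rewards: (num_prompts, group_size) -- one row per question, one column per
    sampled SQL completion, scored e.g. by a total_reward() as sketched above.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 questions, 4 sampled completions each. Completions scoring above
# their group mean receive positive advantages and are reinforced.
r = torch.tensor([[0.1, 0.9, 0.4, 0.4],
                  [1.3, 1.3, 0.2, 0.6]])
print(grpo_advantages(r))
```

In the standard GRPO objective, these advantages weight the token-level policy-gradient term, typically alongside a KL penalty toward a reference model.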