Table-R1: Inference-Time Scaling for Table Reasoning
May 29, 2025
Authors: Zheyuan Yang, Lyuhao Chen, Arman Cohan, Yilun Zhao
cs.AI
Abstract
In this work, we present the first study to explore inference-time scaling on
table reasoning tasks. We develop and evaluate two post-training strategies to
enable inference-time scaling: distillation from frontier model reasoning
traces and reinforcement learning with verifiable rewards (RLVR). For
distillation, we introduce a large-scale dataset of reasoning traces generated
by DeepSeek-R1, which we use to fine-tune LLMs into the Table-R1-SFT model. For
RLVR, we propose task-specific verifiable reward functions and apply the GRPO
algorithm to obtain the Table-R1-Zero model. We evaluate our Table-R1-series
models across diverse table reasoning tasks, including short-form QA, fact
verification, and free-form QA. Notably, the Table-R1-Zero model matches or
exceeds the performance of GPT-4.1 and DeepSeek-R1, while using only a
7B-parameter LLM. It also demonstrates strong generalization to out-of-domain
datasets. Extensive ablation and qualitative analyses reveal the benefits of
instruction tuning, model architecture choices, and cross-task generalization,
as well as the emergence of essential table reasoning skills during RL training.
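The abstract names two technical ingredients without spelling them out: task-specific verifiable reward functions and the GRPO algorithm. The sketch below is only an illustration of what such a setup could look like, not the paper's implementation. The <answer> tag convention, the small format bonus, and the function names are assumptions introduced here for clarity.

```python
import re
from statistics import mean, pstdev

def short_form_qa_reward(response: str, gold_answer: str) -> float:
    """Illustrative verifiable reward for short-form table QA.

    Assumes (not stated in the abstract) that the model wraps its final
    answer in <answer>...</answer> tags. Returns 1.0 for an exact,
    case-insensitive match with the gold answer, plus a small bonus for
    producing a parseable answer at all; otherwise 0.0.
    """
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        return 0.0  # no parseable answer, no reward
    prediction = match.group(1).strip().lower()
    accuracy = 1.0 if prediction == gold_answer.strip().lower() else 0.0
    format_bonus = 0.1  # hypothetical shaping term for following the format
    return accuracy + format_bonus

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages: each rollout's reward for the same prompt is
    normalized by the mean and standard deviation of its rollout group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + 1e-6) for r in rewards]

# Example: four sampled responses to one table-QA prompt
rewards = [short_form_qa_reward(r, "4.2B") for r in [
    "<answer>4.2B</answer>", "<answer>3.9B</answer>",
    "no tags here", "<answer>4.2B</answer>",
]]
print(group_relative_advantages(rewards))
```

Because the rewards are checkable programmatically (exact match against a gold label), they can be verified without a learned reward model, which is the property RLVR relies on.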