Table-R1: Inference-Time Scaling for Table Reasoning

May 29, 2025
Authors: Zheyuan Yang, Lyuhao Chen, Arman Cohan, Yilun Zhao
cs.AI

Abstract

In this work, we present the first study to explore inference-time scaling on table reasoning tasks. We develop and evaluate two post-training strategies to enable inference-time scaling: distillation from frontier model reasoning traces and reinforcement learning with verifiable rewards (RLVR). For distillation, we introduce a large-scale dataset of reasoning traces generated by DeepSeek-R1, which we use to fine-tune LLMs into the Table-R1-SFT model. For RLVR, we propose task-specific verifiable reward functions and apply the GRPO algorithm to obtain the Table-R1-Zero model. We evaluate our Table-R1-series models across diverse table reasoning tasks, including short-form QA, fact verification, and free-form QA. Notably, the Table-R1-Zero model matches or exceeds the performance of GPT-4.1 and DeepSeek-R1, while using only a 7B-parameter LLM. It also demonstrates strong generalization to out-of-domain datasets. Extensive ablation and qualitative analyses reveal the benefits of instruction tuning, model architecture choices, and cross-task generalization, as well as the emergence of essential table reasoning skills during RL training.
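To make the RLVR setup concrete, below is a minimal sketch of what a task-specific verifiable reward for short-form table QA could look like. This is not the authors' code: the answer-tag convention, normalization, and function name are illustrative assumptions; the paper only states that verifiable, task-specific reward functions are used with GRPO.

```python
# Minimal illustrative sketch (not the authors' implementation) of a verifiable
# reward for short-form table QA, of the kind an RLVR/GRPO loop could consume.
import re


def short_form_qa_reward(model_output: str, gold_answer: str) -> float:
    """Return 1.0 if the model's final answer exactly matches the gold answer, else 0.0.

    Assumes the model wraps its final answer in <answer>...</answer> tags;
    this tag convention and the lowercase normalization are assumptions made
    for illustration only.
    """
    match = re.search(r"<answer>(.*?)</answer>", model_output, re.DOTALL)
    if match is None:
        return 0.0  # no parsable final answer -> no reward
    prediction = match.group(1).strip().lower()
    return 1.0 if prediction == gold_answer.strip().lower() else 0.0


if __name__ == "__main__":
    output = "The table shows revenue peaked in Q3. <answer>Q3</answer>"
    print(short_form_qa_reward(output, "Q3"))  # 1.0
```

Because the reward is computed by checking the output against a reference rather than by a learned reward model, it is directly verifiable, which is the property RLVR relies on.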
