TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
November 4, 2025
Authors: Aditya Tanna, Pratinav Seth, Mohamed Bouadi, Utsav Avaiya, Vinay Kumar Sankarapu
cs.AI
Abstract
Tabular foundation models represent a growing paradigm in structured data
learning, extending the benefits of large-scale pretraining to tabular domains.
However, their adoption remains limited due to heterogeneous preprocessing
pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the
absence of standardized evaluation for deployment-oriented metrics such as
calibration and fairness. We present TabTune, a unified library that
standardizes the complete workflow for tabular foundation models through a
single interface. TabTune provides consistent access to seven state-of-the-art
models and supports multiple adaptation strategies, including zero-shot
inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient
fine-tuning (PEFT). The framework automates model-aware preprocessing, manages
architectural heterogeneity internally, and integrates evaluation modules for
performance, calibration, and fairness. Designed for extensibility and
reproducibility, TabTune enables consistent benchmarking of adaptation
strategies for tabular foundation models. The library is open source and
available at https://github.com/Lexsi-Labs/TabTune.
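
The abstract names calibration and fairness among the deployment-oriented metrics that TabTune's evaluation modules cover. As a minimal, self-contained sketch (not TabTune's implementation; the function names and synthetic data below are illustrative only), the following NumPy snippet computes two standard instances of such metrics, expected calibration error (ECE) and demographic parity difference:

```python
# Illustrative sketch of two deployment-oriented metrics mentioned in the
# abstract (calibration and fairness). This is NOT TabTune's actual code;
# it only shows what such evaluation modules typically compute.
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted mean of
    |accuracy - mean confidence| over the bins."""
    confidences = np.max(y_prob, axis=1)        # predicted-class probability
    predictions = np.argmax(y_prob, axis=1)     # predicted class labels
    accuracies = (predictions == y_true).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(accuracies[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap            # weight by bin population
    return ece

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rate across sensitive groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)              # synthetic example data
    y_true = rng.integers(0, 2, size=1000)
    logits = rng.normal(size=(1000, 2))
    y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y_pred = y_prob.argmax(axis=1)
    group = rng.integers(0, 2, size=1000)       # synthetic sensitive attribute
    print(f"ECE: {expected_calibration_error(y_true, y_prob):.3f}")
    print(f"Demographic parity diff: {demographic_parity_difference(y_pred, group):.3f}")
```

Both quantities are model-agnostic: they need only predicted probabilities, labels, and a sensitive attribute, which is why a library can report them uniformly across architecturally heterogeneous models.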