

TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

November 4, 2025
Authors: Aditya Tanna, Pratinav Seth, Mohamed Bouadi, Utsav Avaiya, Vinay Kumar Sankarapu
cs.AI

Abstract

Tabular foundation models represent a growing paradigm in structured data learning, extending the benefits of large-scale pretraining to tabular domains. However, their adoption remains limited due to heterogeneous preprocessing pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the absence of standardized evaluation for deployment-oriented metrics such as calibration and fairness. We present TabTune, a unified library that standardizes the complete workflow for tabular foundation models through a single interface. TabTune provides consistent access to seven state-of-the-art models supporting multiple adaptation strategies, including zero-shot inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). The framework automates model-aware preprocessing, manages architectural heterogeneity internally, and integrates evaluation modules for performance, calibration, and fairness. Designed for extensibility and reproducibility, TabTune enables consistent benchmarking of adaptation strategies of tabular foundation models. The library is open source and available at https://github.com/Lexsi-Labs/TabTune.