
Table-GPT: Table-tuned GPT for Diverse Table Tasks

October 13, 2023
Authors: Peng Li, Yeye He, Dror Yashar, Weiwei Cui, Song Ge, Haidong Zhang, Danielle Rifinski Fainman, Dongmei Zhang, Surajit Chaudhuri
cs.AI

Abstract

Language models, such as GPT-3.5 and ChatGPT, demonstrate remarkable abilities to follow diverse human instructions and perform a wide range of tasks. However, when probing language models using a range of basic table-understanding tasks, we observe that today's language models are still sub-optimal in many table-related tasks, likely because they are pre-trained predominantly on one-dimensional natural-language texts, whereas relational tables are two-dimensional objects. In this work, we propose a new "table-tuning" paradigm, where we continue to train/fine-tune language models like GPT-3.5 and ChatGPT, using diverse table tasks synthesized from real tables as training data, with the goal of enhancing language models' ability to understand tables and perform table tasks. We show that our resulting Table-GPT models demonstrate (1) better table-understanding capabilities, by consistently outperforming the vanilla GPT-3.5 and ChatGPT on a wide range of table tasks, including holdout unseen tasks, and (2) strong generalizability, in their ability to respond to diverse human instructions to perform new table tasks, in a manner similar to GPT-3.5 and ChatGPT.
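The core idea of table-tuning, as described in the abstract, is to synthesize training examples from real tables: a table is serialized into text, paired with a task instruction, and labeled with the expected completion. The sketch below is a minimal illustration of that pipeline for one hypothetical task (identifying an injected missing value); the task name, serialization format, and function names are assumptions for illustration, not the paper's exact implementation.

```python
# Hedged sketch: synthesize one (instruction, table, completion) training
# example from a real table, in the spirit of the "table-tuning" paradigm.
# The specific task and markdown serialization are illustrative assumptions.
import random


def serialize_table(header, rows):
    """Serialize a two-dimensional table into markdown text, one common way
    to present a table to a language model trained on one-dimensional text."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)


def synthesize_missing_value_task(header, rows, seed=0):
    """Create one training example for a hypothetical missing-value task:
    blank out a random cell, then ask for its row number and column name."""
    rng = random.Random(seed)
    r = rng.randrange(len(rows))
    c = rng.randrange(len(header))
    corrupted = [list(row) for row in rows]
    corrupted[r][c] = ""  # inject the missing value
    instruction = ("Identify the row number (1-based) and the column name "
                   "of the missing cell in the table below.")
    completion = f"row {r + 1}, column {header[c]}"
    return {"instruction": instruction,
            "input": serialize_table(header, corrupted),
            "completion": completion}


example = synthesize_missing_value_task(
    ["name", "city"], [["Ada", "London"], ["Alan", "Cambridge"]], seed=1)
```

Repeating this kind of synthesis across many real tables and many task types yields the diverse training mixture that the abstract says table-tuning uses to fine-tune GPT-3.5 and ChatGPT.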