

StructLM: Towards Building Generalist Models for Structured Knowledge Grounding

February 26, 2024
作者: Alex Zhuang, Ge Zhang, Tianyu Zheng, Xinrun Du, Junjie Wang, Weiming Ren, Stephen W. Huang, Jie Fu, Xiang Yue, Wenhu Chen
cs.AI

Abstract

Structured data sources, such as tables, graphs, and databases, are ubiquitous knowledge sources. Despite the demonstrated capabilities of large language models (LLMs) on plain text, their proficiency in interpreting and utilizing structured data remains limited. Our investigation reveals a notable deficiency in LLMs' ability to process structured data, e.g., ChatGPT lags behind state-of-the-art (SoTA) models by an average of 35%. To augment the Structured Knowledge Grounding (SKG) capabilities in LLMs, we have developed a comprehensive instruction-tuning dataset comprising 1.1 million examples. Utilizing this dataset, we train a series of models, referred to as StructLM, based on the Code-LLaMA architecture, ranging from 7B to 34B parameters. Our StructLM series surpasses task-specific models on 14 out of 18 evaluated datasets and establishes new SoTA achievements on 7 SKG tasks. Furthermore, StructLM demonstrates exceptional generalization across 6 novel SKG tasks. Contrary to expectations, we observe that scaling model size offers only marginal benefits, with StructLM-34B showing slight improvements over StructLM-7B. This suggests that structured knowledge grounding is still a challenging task and requires more innovative design to push it to a new level.
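For readers unfamiliar with SKG-style instruction tuning, the sketch below illustrates the general shape of such a training example: a table is linearized into plain text and paired with an instruction, a question, and a target answer. The serialization format, field names, and `linearize_table` helper here are illustrative assumptions, not the paper's exact preprocessing pipeline.

```python
# Minimal sketch of an SKG instruction-tuning example. The "col : ... | row 1 : ..."
# serialization is one common convention; StructLM's exact format is not
# specified in this abstract and may differ.

def linearize_table(header, rows):
    """Flatten a table into a single text string, one segment per row."""
    parts = ["col : " + " | ".join(header)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"row {i} : " + " | ".join(str(c) for c in row))
    return " ".join(parts)

header = ["Country", "Capital", "Population (M)"]
rows = [["France", "Paris", 67.8], ["Japan", "Tokyo", 125.7]]

# One example in an instruction-tuning dataset: instruction + linearized
# structured input + question, with the expected answer as the target.
example = {
    "instruction": "Answer the question using the table below.",
    "input": linearize_table(header, rows)
             + " question : Which country has the larger population?",
    "output": "Japan",
}
print(example["input"])
```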