GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction
May 30, 2023
Authors: Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, Ying Shan
cs.AI
Abstract
This paper aims to efficiently enable Large Language Models (LLMs) to use
multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have
shown great potential for tool usage through sophisticated prompt engineering.
Nevertheless, these models typically rely on prohibitive computational costs
and publicly inaccessible data. To address these challenges, we propose
GPT4Tools, a method based on self-instruction that enables open-source LLMs,
such as LLaMA and OPT, to use tools. It generates an instruction-following
dataset by prompting an advanced teacher with various multimodal contexts.
Using Low-Rank Adaptation (LoRA) optimization, our approach enables
open-source LLMs to solve a range of visual problems, including visual
comprehension and image generation. Moreover, we provide a benchmark for
evaluating the ability of LLMs to use tools, in both zero-shot and
fine-tuning settings. Extensive experiments demonstrate the effectiveness of
our method on various language models: it not only significantly improves the
accuracy of invoking seen tools but also enables zero-shot capability on
unseen tools. The code and demo are available at
https://github.com/StevenGrove/GPT4Tools.
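The abstract describes a two-step pipeline. The first step is self-instruct data generation: a proprietary teacher model is prompted with a multimodal context (represented in text, e.g., as an image caption) together with tool descriptions, and asked to produce tool-invocation instructions. The sketch below is a minimal illustration of that idea, not the authors' released code; the tool names, descriptions, and prompt wording are assumptions, and the actual prompts are in the linked repository.

```python
# Hedged sketch of self-instruct prompt construction for tool-use data.
# All tool names and prompt phrasing below are illustrative assumptions.

TOOLS = {
    "Image Captioning": "Useful for describing the content of an image.",
    "Text-to-Image": "Useful for generating an image from a text description.",
}

def build_teacher_prompt(image_caption: str) -> str:
    """Compose the prompt sent to the teacher LLM (e.g., ChatGPT)."""
    tool_list = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    return (
        "Given an image described as:\n"
        f"  {image_caption}\n"
        "and the following tools:\n"
        f"{tool_list}\n"
        "Generate an instruction a user might give about this image, and the "
        "ReAct-style tool call (Thought / Action / Action Input) that "
        "correctly fulfills it."
    )

# Example usage: one multimodal context yields one teacher query.
print(build_teacher_prompt("two dogs playing frisbee on a beach"))
```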
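The second step is adapting an open-source LLM on the generated dataset with LoRA, which trains small low-rank adapters while the base weights stay frozen. Below is a minimal sketch using the Hugging Face peft library; the checkpoint name, rank, and target modules are assumptions for illustration, not the paper's reported configuration.

```python
# Hedged sketch: applying LoRA to an open-source LLM with Hugging Face peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "decapoda-research/llama-7b-hf"  # assumed LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze the base model and train low-rank adapters on the attention
# projections, keeping the trainable parameter count small.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
# The wrapped model can then be fine-tuned on the instruction-following
# dataset with a standard causal-LM training loop or transformers.Trainer.
```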