

GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction

May 30, 2023
作者: Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, Ying Shan
cs.AI

Abstract

This paper aims to efficiently enable Large Language Models (LLMs) to use multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have shown great potential for tool usage through sophisticated prompt engineering. Nevertheless, these models typically rely on prohibitive computational costs and publicly inaccessible data. To address these challenges, we propose GPT4Tools, a self-instruction-based method that enables open-source LLMs, such as LLaMA and OPT, to use tools. It generates an instruction-following dataset by prompting an advanced teacher model with various multimodal contexts. Using Low-Rank Adaptation (LoRA) optimization, our approach enables open-source LLMs to solve a range of visual problems, including visual comprehension and image generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to use tools, in both zero-shot and fine-tuned settings. Extensive experiments demonstrate the effectiveness of our method across various language models: it not only significantly improves the accuracy of invoking seen tools, but also enables zero-shot capability on unseen tools. The code and demo are available at https://github.com/StevenGrove/GPT4Tools.
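The self-instruction step described in the abstract can be illustrated with a minimal sketch: prompt a teacher LLM with an image context (e.g., captions and object boxes) plus a tool inventory, and collect the instruction/tool-call pairs it produces. The prompt template, tool list, and `teacher` callable below are illustrative assumptions, not the paper's exact prompt or tool set.

```python
# A minimal sketch of self-instruct data generation in the spirit of GPT4Tools.
# The template, tool descriptions, and `teacher` callable are assumptions.
from typing import Callable

TOOLS = {
    "Segment Anything": "segments objects in an image",
    "Image Generation": "generates an image from a text description",
}

PROMPT_TEMPLATE = (
    "Given the image context below and the available tools, write an "
    "instruction a user might give, then the tool call that solves it.\n"
    "Image context: {context}\n"
    "Tools:\n{tools}\n"
    "Output format: Instruction: ... Tool: ... Input: ..."
)

def build_prompt(context: str) -> str:
    tool_desc = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    return PROMPT_TEMPLATE.format(context=context, tools=tool_desc)

def generate_sample(context: str, teacher: Callable[[str], str]) -> str:
    """Query the teacher LLM (e.g., ChatGPT) with one multimodal context."""
    return teacher(build_prompt(context))

if __name__ == "__main__":
    # Stub teacher for demonstration; replace with a real API call.
    print(generate_sample(
        "A dog sitting on a couch; boxes: dog [12, 40, 200, 310]",
        teacher=lambda p: "Instruction: segment the dog. Tool: Segment Anything",
    ))
```

Repeating this over many image contexts yields the instruction-following dataset used for fine-tuning.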
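The LoRA fine-tuning step can likewise be sketched with the Hugging Face PEFT library; the rank, scaling factor, target modules, and checkpoint name below are assumptions rather than the paper's reported configuration.

```python
# A minimal sketch of adapting an open-source LLM with LoRA for tool use,
# assuming the Hugging Face PEFT library. Hyperparameters and the checkpoint
# name are illustrative; the paper's exact configuration may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "huggyllama/llama-7b"  # assumed checkpoint; OPT works the same way
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                # low-rank dimension (assumed)
    lora_alpha=32,       # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
)

# Wrap the frozen base model; only the small LoRA matrices are trained.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, fine-tune on the generated instruction-following dataset with
# a standard causal-LM loss (e.g., via transformers.Trainer).
```

Because only the low-rank adapter weights are updated, this keeps the memory and compute footprint far below full fine-tuning, which is what makes tool-use adaptation of 7B-scale models practical.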