AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
June 14, 2023
作者: Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.AI
Abstract
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
For in-the-wild cases, input forms can be flexible, involving not only a
single image or video but also a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner uses natural language to
plan which tool in the Executor should be invoked next, based on the current
reasoning progress. The Inspector is an efficient memory manager that assists
the Planner in feeding the proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, case studies demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
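The Plan, Execute, Inspect, and Learn cycle described above can be illustrated with a minimal sketch. This is a hypothetical reading of the loop, not the paper's actual implementation: all class and function names here (`Inspector`, `peil_loop`, the dictionary-based planner protocol) are assumptions made for illustration.

```python
# Illustrative sketch of a PEIL-style reasoning loop (hypothetical names,
# not AssistGPT's actual code): the Planner decides the next tool from the
# current progress, the Executor runs it, and the Inspector records each
# multimodal intermediate result so later steps can reference it.

class Inspector:
    """Memory manager: keeps metadata about visual inputs and intermediate results."""
    def __init__(self):
        self.memory = []

    def record(self, name, modality, summary):
        # Store a lightweight description rather than raw pixels/frames,
        # so it can be surfaced to the language-based Planner.
        self.memory.append({"name": name, "modality": modality, "summary": summary})

    def describe(self):
        # Natural-language view of memory for the Planner's prompt.
        return "\n".join(
            f"{m['name']} ({m['modality']}): {m['summary']}" for m in self.memory
        )

def peil_loop(query, planner, executor, inspector, max_steps=8):
    """Run the interleaved plan-execute-inspect cycle until the planner
    emits a final answer or the step budget is exhausted."""
    history = []
    for _ in range(max_steps):
        # Planner sees the query, the Inspector's memory, and prior steps;
        # it returns either {"tool": ..., "args": ...} or {"answer": ...}.
        step = planner(query, inspector.describe(), history)
        if "answer" in step:
            return step["answer"]
        result = executor(step["tool"], step["args"])
        inspector.record(result["name"], result["modality"], result["summary"])
        history.append((step, result["summary"]))
    return None  # budget exhausted without a final answer
```

In this sketch the Learner would sit outside the loop, replaying successful traces (the `history` of a solved query) as in-context examples for future planning; that part is omitted here for brevity.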