

EcoAssistant: Using LLM Assistant More Affordably and Accurately

October 3, 2023
作者: Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang
cs.AI

Abstract

Today, users ask large language models (LLMs) as assistants to answer queries that require external knowledge; they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood. These queries require the LLM to produce code that invokes external APIs to answer the user's question, yet LLMs rarely produce correct code on the first try, requiring iterative code refinement based on execution results. In addition, using LLM assistants to support high query volumes can be expensive. In this work, we contribute a framework, EcoAssistant, that enables LLMs to answer code-driven queries more affordably and accurately. EcoAssistant contains three components. First, it allows the LLM assistants to converse with an automatic code executor to iteratively refine code or to produce answers based on the execution results. Second, we use a hierarchy of LLM assistants, which attempts to answer the query with weaker, cheaper LLMs before backing off to stronger, more expensive ones. Third, we retrieve solutions from past successful queries as in-context demonstrations to help subsequent queries. Empirically, we show that EcoAssistant offers distinct advantages for affordability and accuracy, surpassing GPT-4 by 10 points of success rate with less than 50% of GPT-4's cost.
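The three components described above can be sketched as a single control loop: try cheaper assistants first, let each one refine its code against an executor, and cache successes as demonstrations for later queries. The following is a minimal, self-contained toy sketch of that control flow only; the "models" and the executor here are illustrative stand-ins, not the paper's actual system or API.

```python
# Toy sketch of EcoAssistant's control flow:
# (1) iterative code refinement against an automatic executor,
# (2) a hierarchy that backs off from cheaper to stronger models,
# (3) reuse of past successful solutions as in-context demonstrations.

def execute(code):
    """Toy executor: run candidate code, report success or the error message."""
    try:
        env = {}
        exec(code, env)
        return True, env.get("answer")
    except Exception as e:
        return False, str(e)

def toy_model(name, query, demos, history):
    """Toy LLM: the 'cheap' model only succeeds when a past demo is available;
    the 'strong' model fixes its code after seeing one execution error."""
    if demos:
        return demos[0]                        # imitate a past successful solution
    if name == "cheap":
        return "answer = undefined_api()"      # always fails -> triggers backoff
    return ("answer = 1/0" if not history      # first attempt fails...
            else "answer = 42")                # ...refined after the error feedback

def eco_assistant(query, solution_db, hierarchy=("cheap", "strong"), max_turns=2):
    demos = solution_db.get(query, [])         # past solutions as demonstrations
    for model in hierarchy:                    # weaker, cheaper assistants first
        history = []
        for _ in range(max_turns):             # iterative code refinement loop
            code = toy_model(model, query, demos, history)
            ok, result = execute(code)
            if ok:
                solution_db.setdefault(query, []).append(code)  # cache success
                return result
            history.append((code, result))     # feed the error back to the model
    return None                                # every assistant failed
```

On the first call with an empty solution database, the cheap model fails, the strong model succeeds after one refinement turn, and the working code is cached; a repeated query is then answered by the cheap model imitating the cached demonstration, which mirrors how the hierarchy and retrieval together cut cost.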