Auto-SLURP: A Benchmark Dataset for Evaluating Multi-Agent Frameworks in Smart Personal Assistant

April 25, 2025
Authors: Lei Shen, Xiaoyu Shen
cs.AI

Abstract

In recent years, multi-agent frameworks powered by large language models (LLMs) have advanced rapidly. Despite this progress, there is still a notable absence of benchmark datasets specifically tailored to evaluate their performance. To bridge this gap, we introduce Auto-SLURP, a benchmark dataset aimed at evaluating LLM-based multi-agent frameworks in the context of intelligent personal assistants. Auto-SLURP extends the original SLURP dataset -- initially developed for natural language understanding tasks -- by relabeling the data and integrating simulated servers and external services. This enhancement enables a comprehensive end-to-end evaluation pipeline, covering language understanding, task execution, and response generation. Our experiments demonstrate that Auto-SLURP presents a significant challenge for current state-of-the-art frameworks, highlighting that truly reliable and intelligent multi-agent personal assistants remain a work in progress. The dataset and related code are available at https://github.com/lorashen/Auto-SLURP/.
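The abstract describes an end-to-end pipeline in which a multi-agent assistant must understand a user query, execute the task against simulated servers and external services, and produce a response that is then scored. The sketch below illustrates what such an evaluation loop could look like; it is a minimal, hypothetical example, and all names (load_queries, the assistant object, SIMULATED_SERVER_URL, the "text" and "gold_command" fields) are assumptions for illustration, not the actual API of the released code at https://github.com/lorashen/Auto-SLURP/.

```python
# Minimal sketch of an end-to-end evaluation loop in the spirit of Auto-SLURP.
# All identifiers below are hypothetical placeholders, not the actual API of
# the released code at https://github.com/lorashen/Auto-SLURP/.

import json

SIMULATED_SERVER_URL = "http://localhost:8000"  # assumed local mock of external services


def load_queries(path):
    """Read relabeled SLURP-style examples, one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def evaluate(assistant, examples):
    """Run each user query through a multi-agent assistant and score it.

    The assistant is expected to interpret the query, call the simulated
    server to execute the task, and report the command it issued together
    with a natural-language response.
    """
    correct = 0
    for ex in examples:
        result = assistant.run(ex["text"], server_url=SIMULATED_SERVER_URL)
        # Success = the executed command matches the gold annotation.
        if result.command == ex["gold_command"]:
            correct += 1
    return correct / len(examples)
```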
