
Auto-SLURP: A Benchmark Dataset for Evaluating Multi-Agent Frameworks in Smart Personal Assistant

April 25, 2025
Authors: Lei Shen, Xiaoyu Shen
cs.AI

Abstract

In recent years, multi-agent frameworks powered by large language models (LLMs) have advanced rapidly. Despite this progress, there is still a notable absence of benchmark datasets specifically tailored to evaluate their performance. To bridge this gap, we introduce Auto-SLURP, a benchmark dataset aimed at evaluating LLM-based multi-agent frameworks in the context of intelligent personal assistants. Auto-SLURP extends the original SLURP dataset -- initially developed for natural language understanding tasks -- by relabeling the data and integrating simulated servers and external services. This enhancement enables a comprehensive end-to-end evaluation pipeline, covering language understanding, task execution, and response generation. Our experiments demonstrate that Auto-SLURP presents a significant challenge for current state-of-the-art frameworks, highlighting that truly reliable and intelligent multi-agent personal assistants remain a work in progress. The dataset and related code are available at https://github.com/lorashen/Auto-SLURP/.
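The abstract describes an end-to-end evaluation pipeline spanning language understanding, task execution (against simulated servers and external services), and response generation, where a sample succeeds only if the whole chain succeeds. The sketch below illustrates that idea only; all function names, data, and the success metric are hypothetical and do not reflect the actual Auto-SLURP code or API.

```python
# Illustrative sketch of a three-stage assistant pipeline and an
# end-to-end success metric. Everything here is hypothetical stand-in
# code, not the Auto-SLURP implementation.

def understand(utterance):
    # Toy intent parser standing in for the LLM-based understanding stage.
    if "weather" in utterance:
        return {"intent": "weather_query", "slots": {"city": "Berlin"}}
    return {"intent": "unknown", "slots": {}}

def execute(parsed):
    # Stand-in for calling a simulated server or external service.
    if parsed["intent"] == "weather_query":
        return {"status": "ok", "forecast": "sunny"}
    return {"status": "error"}

def respond(parsed, result):
    # Response-generation stage: turn the service result into text.
    if result.get("status") == "ok":
        return f"The forecast for {parsed['slots']['city']} is {result['forecast']}."
    return "Sorry, I couldn't handle that request."

def evaluate(dataset):
    # End-to-end success rate: a sample counts as correct only if the
    # final response matches, so a failure at any stage fails the sample.
    hits = 0
    for utterance, expected in dataset:
        parsed = understand(utterance)
        result = execute(parsed)
        hits += int(respond(parsed, result) == expected)
    return hits / len(dataset)

toy_data = [
    ("what's the weather today", "The forecast for Berlin is sunny."),
    ("book me a flight", "Sorry, I couldn't handle that request."),
]
print(evaluate(toy_data))  # 1.0 on this toy data
```

Scoring only the final response is what makes such a benchmark end-to-end: intermediate stages are exercised implicitly, and any error propagates to the measured outcome.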

