MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants
March 10, 2026
Authors: Zuhao Zhang, Chengyue Yu, Yuante Li, Chenyi Zhuang, Linjian Mo, Shuai Li
cs.AI
Abstract
With the rapid advancement of Large Language Models (LLMs) in code generation, human-AI interaction is evolving from static text responses to dynamic, interactive HTML-based applications, which we term MiniApps. These applications require models not only to render visual interfaces but also to construct customized interaction logic that adheres to real-world principles. However, existing benchmarks primarily focus on algorithmic correctness or static layout reconstruction, failing to capture the capabilities required for this new paradigm. To address this gap, we introduce MiniAppBench, the first comprehensive benchmark designed to evaluate principle-driven, interactive application generation. Sourced from a real-world application with 10M+ generations, MiniAppBench distills 500 tasks across six domains (e.g., Games, Science, and Tools). Furthermore, to tackle the challenge of evaluating open-ended interactions where no single ground truth exists, we propose MiniAppEval, an agentic evaluation framework. Leveraging browser automation, it performs human-like exploratory testing to systematically assess applications across three dimensions: Intention, Static, and Dynamic. Our experiments reveal that current LLMs still face significant challenges in generating high-quality MiniApps, while MiniAppEval demonstrates high alignment with human judgment, establishing a reliable standard for future research. Our code is available at github.com/MiniAppBench.