AstroReason-Bench: Evaluating Unified Agentic Planning across Heterogeneous Space Planning Problems
January 16, 2026
Authors: Weiyi Wang, Xinchi Chen, Jingjing Gong, Xuanjing Huang, Xipeng Qiu
cs.AI
Abstract
Recent advances in agentic Large Language Models (LLMs) have positioned them as generalist planners capable of reasoning and acting across diverse tasks. However, existing agent benchmarks largely focus on symbolic or weakly grounded environments, leaving their performance in physics-constrained real-world domains underexplored. We introduce AstroReason-Bench, a comprehensive benchmark for evaluating agentic planning in Space Planning Problems (SPP), a family of high-stakes problems with heterogeneous objectives, strict physical constraints, and long-horizon decision-making. AstroReason-Bench integrates multiple scheduling regimes, including ground station communication and agile Earth observation, and provides a unified agent-oriented interaction protocol. Evaluating a range of state-of-the-art open- and closed-source agentic LLM systems, we find that current agents substantially underperform specialized solvers, highlighting key limitations of generalist planning under realistic constraints. AstroReason-Bench offers a challenging and diagnostic testbed for future agentic research.