TREX: Automating LLM Fine-tuning via Agent-Driven Tree-based Exploration
April 15, 2026
Authors: Zerun Ma, Guoqiang Wang, Xinchen Xie, Yicheng Chen, He Du, Bowen Li, Yanan Sun, Wenran Liu, Kai Chen, Yining Li
cs.AI
Abstract
While Large Language Models (LLMs) have empowered AI research agents to perform isolated scientific tasks, automating complex, real-world workflows such as LLM training remains a significant challenge. In this paper, we introduce TREX, a multi-agent system that automates the entire LLM training lifecycle. By orchestrating collaboration between two core modules, the Researcher and the Executor, the system seamlessly performs requirement analysis, open-domain literature and data research, formulation of training strategies, preparation of data recipes, and model training and evaluation. The multi-round experimental process is modeled as a search tree, enabling the system to efficiently plan exploration paths, reuse historical results, and distill high-level insights from iterative trials. To evaluate the capability of automated LLM training, we construct FT-Bench, a benchmark comprising 10 tasks derived from real-world scenarios, ranging from optimizing fundamental model capabilities to enhancing performance on domain-specific tasks. Experimental results demonstrate that the TREX agent consistently optimizes model performance on target tasks.
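The abstract describes modeling multi-round experiments as a search tree whose nodes hold past trials that can be branched from and reused. The paper does not specify the implementation; the following is a minimal sketch of that idea under assumed names (`ExperimentNode`, `ExperimentTree`, and the example configs are all hypothetical, not from TREX itself): each node stores a training configuration and its evaluation score, and the next trial branches off the best historical result.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExperimentNode:
    """One fine-tuning trial: a training config and its evaluation score."""
    config: dict
    score: Optional[float] = None        # filled in after training + evaluation
    children: list = field(default_factory=list)

class ExperimentTree:
    """Multi-round experiments organized as a search tree: each child
    branches off a historical result, so earlier work is reused rather
    than repeated."""

    def __init__(self, root_config: dict):
        self.root = ExperimentNode(config=root_config)

    def best_node(self) -> ExperimentNode:
        """Return the evaluated node with the highest score (exploration target)."""
        best, stack = None, [self.root]
        while stack:
            node = stack.pop()
            if node.score is not None and (best is None or node.score > best.score):
                best = node
            stack.extend(node.children)
        return best

    def expand(self, parent: ExperimentNode, variation: dict) -> ExperimentNode:
        """Branch a new trial off a promising past result, inheriting its config."""
        child = ExperimentNode(config={**parent.config, **variation})
        parent.children.append(child)
        return child

# Two illustrative rounds of exploration (scores are made up):
tree = ExperimentTree({"lr": 1e-4, "data_mix": "base"})
tree.root.score = 0.62                               # baseline run result
n1 = tree.expand(tree.best_node(), {"lr": 5e-5})     # round 1: vary learning rate
n1.score = 0.68
n2 = tree.expand(tree.best_node(), {"data_mix": "base+domain"})  # round 2: branch off n1
```

In this sketch, round 2 branches from `n1` because it holds the best score so far, so its configuration (`lr=5e-5`) is inherited automatically; this is one simple way historical results can steer later exploration.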