
TREX: Automating LLM Fine-tuning via Agent-Driven Tree-based Exploration

April 15, 2026
Authors: Zerun Ma, Guoqiang Wang, Xinchen Xie, Yicheng Chen, He Du, Bowen Li, Yanan Sun, Wenran Liu, Kai Chen, Yining Li
cs.AI

Abstract

While Large Language Models (LLMs) have empowered AI research agents to perform isolated scientific tasks, automating complex, real-world workflows such as LLM training remains a significant challenge. In this paper, we introduce TREX, a multi-agent system that automates the entire LLM training lifecycle. By orchestrating collaboration between two core modules, the Researcher and the Executor, the system seamlessly performs requirement analysis, open-domain literature and data research, formulation of training strategies, preparation of data recipes, and model training and evaluation. The multi-round experimental process is modeled as a search tree, enabling the system to efficiently plan exploration paths, reuse historical results, and distill high-level insights from iterative trials. To evaluate the capability of automated LLM training, we construct FT-Bench, a benchmark comprising 10 tasks derived from real-world scenarios, ranging from optimizing fundamental model capabilities to enhancing performance on domain-specific tasks. Experimental results demonstrate that the TREX agent consistently optimizes model performance on target tasks.
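The search-tree view of multi-round experimentation described in the abstract can be sketched minimally as follows. All names (`ExperimentNode`, `best_leaf`), configurations, and scores here are illustrative assumptions, not details from the paper: each node records one fine-tuning trial, branches represent alternative strategies derived from a parent trial, and reusing historical results corresponds to resuming exploration from the highest-scoring node found so far.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentNode:
    """One fine-tuning trial in the exploration tree (hypothetical structure)."""
    config: dict        # training strategy / data recipe tried in this round
    score: float = 0.0  # evaluation result on the target task
    children: list = field(default_factory=list)

    def expand(self, new_config: dict, score: float) -> "ExperimentNode":
        """Add a follow-up trial derived from this node's results."""
        child = ExperimentNode(config=new_config, score=score)
        self.children.append(child)
        return child

def best_leaf(node: ExperimentNode) -> ExperimentNode:
    """Return the highest-scoring trial in the subtree, i.e. the most
    promising historical result to build the next round on."""
    best = node
    for child in node.children:
        cand = best_leaf(child)
        if cand.score > best.score:
            best = cand
    return best

# Usage: root = baseline run; branches = alternative training strategies
root = ExperimentNode(config={"strategy": "baseline"}, score=0.50)
a = root.expand({"strategy": "more-domain-data"}, 0.62)
root.expand({"strategy": "longer-training"}, 0.55)
a.expand({"strategy": "more-domain-data+rl"}, 0.68)
print(best_leaf(root).config)  # → {'strategy': 'more-domain-data+rl'}
```

In this sketch, planning an exploration path amounts to choosing which node to `expand` next, and distilling insights would correspond to comparing configurations along the path from the root to the best leaf.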