

DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning

February 11, 2026
Authors: Yicheng Chen, Zerun Ma, Xinchen Xie, Yining Li, Kai Chen
cs.AI

Abstract

In the current landscape of Large Language Models (LLMs), the curation of large-scale, high-quality training data is a primary driver of model performance. A key lever is the data recipe, which comprises a data processing pipeline to transform raw sources into training corpora. Despite the growing use of LLMs to automate individual data processing steps, such as data synthesis and filtering, the overall design of data recipes remains largely manual and labor-intensive, requiring substantial human expertise and iteration. To bridge this gap, we formulate end-to-end data recipe generation for LLM adaptation. Given a target benchmark and a pool of available data sources, a model is required to output a complete data recipe that adapts a base LLM to the target task. We present DataChef-32B, which performs online reinforcement learning using a proxy reward that predicts downstream performance for candidate recipes. Across six held-out tasks, DataChef-32B produces practical recipes that reach downstream performance comparable to recipes curated by human experts. Notably, the recipe from DataChef-32B adapts Qwen3-1.7B-Base to the math domain, achieving 66.7 on AIME'25 and surpassing Qwen3-1.7B. This work sheds new light on automating LLM training and developing self-evolving AI systems.
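
The abstract describes a loop in which a policy model proposes candidate data recipes, a proxy reward predicts their downstream performance, and the policy is updated by online reinforcement learning. The sketch below only illustrates that control flow under invented assumptions; the recipe schema, source names, operation names, proxy scoring, and the toy policy update are all hypothetical stand-ins, not the actual DataChef-32B implementation.

```python
# Minimal sketch of the propose -> proxy-reward -> update loop described above.
# All names (SOURCES, OPS, scoring rule) are hypothetical; the real system uses
# an LLM policy and a learned proxy reward, neither of which is reproduced here.
import random
from dataclasses import dataclass, field

SOURCES = ["web_math", "textbooks", "code_corpus", "synthetic_cot"]   # assumed pool
OPS = ["dedup", "quality_filter", "difficulty_filter", "synthesize"]  # assumed ops

@dataclass
class Recipe:
    """A candidate data recipe: chosen sources plus an ordered processing pipeline."""
    sources: list = field(default_factory=list)
    pipeline: list = field(default_factory=list)

def propose_recipe(policy_weights: dict) -> Recipe:
    """Stand-in for sampling a recipe from the policy LLM (here: weighted coin flips)."""
    sources = [s for s in SOURCES if random.random() < policy_weights.get(s, 0.5)]
    pipeline = [op for op in OPS if random.random() < policy_weights.get(op, 0.5)]
    return Recipe(sources=sources or [random.choice(SOURCES)], pipeline=pipeline)

def proxy_reward(recipe: Recipe) -> float:
    """Stand-in for a proxy that predicts downstream benchmark performance
    without running full training; this scoring rule is entirely invented."""
    score = 0.2 * len(recipe.sources) + 0.1 * len(recipe.pipeline)
    if "quality_filter" in recipe.pipeline:
        score += 0.3
    return score + random.gauss(0, 0.05)

def update_policy(policy_weights: dict, recipe: Recipe, advantage: float, lr: float = 0.1):
    """Toy policy update: nudge sampling probabilities toward rewarded choices."""
    for key in recipe.sources + recipe.pipeline:
        new_w = policy_weights.get(key, 0.5) + lr * advantage
        policy_weights[key] = min(1.0, max(0.0, new_w))

def train(steps: int = 200) -> dict:
    policy_weights, baseline = {}, 0.0
    for _ in range(steps):
        recipe = propose_recipe(policy_weights)
        reward = proxy_reward(recipe)
        advantage = reward - baseline            # simple moving-average baseline
        baseline = 0.9 * baseline + 0.1 * reward
        update_policy(policy_weights, recipe, advantage)
    return policy_weights

if __name__ == "__main__":
    print(train())
```

In the paper's setting, the expensive step this loop avoids is training the base LLM on every candidate recipe: the proxy reward substitutes a cheap prediction of downstream performance, which is what makes online RL over whole recipes tractable.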