
Didactic to Constructive: Turning Expert Solutions into Learnable Reasoning

February 2, 2026
作者: Ethan Mendes, Jungsoo Park, Alan Ritter
cs.AI

Abstract

Improving the reasoning capabilities of large language models (LLMs) typically relies either on the model's ability to sample a correct solution to be reinforced or on the existence of a stronger model able to solve the problem. However, many difficult problems remain intractable for even current frontier models, preventing the extraction of valid training signals. A promising alternative is to leverage high-quality expert human solutions, yet naive imitation of this data fails because it is fundamentally out of distribution: expert solutions are typically didactic, containing implicit reasoning gaps intended for human readers rather than computational models. Furthermore, high-quality expert solutions are expensive, necessitating generalizable sample-efficient training methods. We propose Distribution Aligned Imitation Learning (DAIL), a two-step method that bridges the distributional gap by first transforming expert solutions into detailed, in-distribution reasoning traces and then applying a contrastive objective to focus learning on expert insights and methodologies. We find that DAIL can leverage fewer than 1000 high-quality expert solutions to achieve 10-25% pass@k gains on Qwen2.5-Instruct and Qwen3 models, improve reasoning efficiency by 2x to 4x, and enable out-of-domain generalization.
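The abstract describes DAIL's second step as a contrastive objective that focuses learning on expert insights rather than surface imitation. The abstract does not give the objective's exact form; the following is a minimal sketch assuming a DPO-style pairwise loss, where the function name, argument names, and `beta` temperature are all illustrative, not from the paper.

```python
import math

def dail_contrastive_loss(logp_expert, logp_model,
                          ref_logp_expert, ref_logp_model,
                          beta=0.1):
    """Pairwise contrastive loss (a DPO-style sketch, not necessarily the
    paper's exact objective): push the policy to prefer the expert-derived,
    in-distribution reasoning trace over its own sampled trace, with
    log-probabilities measured relative to a frozen reference policy.

    logp_* are sequence log-probs under the policy being trained;
    ref_logp_* are the same sequences scored by the frozen reference.
    """
    margin = beta * ((logp_expert - ref_logp_expert)
                     - (logp_model - ref_logp_model))
    # -log(sigmoid(margin)): small when the policy already prefers the
    # expert trace, large when it still prefers its own trace.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Training would minimize this loss over pairs consisting of an expert-derived trace and a self-sampled trace for the same problem, so that gradient signal concentrates on what distinguishes the expert's approach.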
PDF (February 5, 2026)