Entropy-Based Adaptive Weighting for Self-Training
March 31, 2025
Authors: Xiaoxuan Wang, Yihe Deng, Mingyu Derek Ma, Wei Wang
cs.AI
Abstract
The mathematical problem-solving capabilities of large language models have
become a focal point of research, with growing interest in leveraging
self-generated reasoning paths as a promising way to refine and enhance these
models. These paths capture step-by-step logical processes while requiring only
the correct answer for supervision. Self-training has been shown to be
effective on reasoning tasks while eliminating the need for external models
and manual annotations. However, optimizing the use of self-generated data for
model training remains an open challenge. In this work, we propose
Entropy-Based Adaptive Weighting for Self-Training (EAST), an adaptive
weighting strategy designed to prioritize uncertain data during self-training.
Specifically, EAST employs a mapping function with a tunable parameter that
controls the sharpness of the weighting, assigning higher weights to data on
which the model exhibits greater uncertainty. This approach guides the model to
focus on more informative and challenging examples, thereby enhancing its
reasoning ability. We evaluate our approach on the GSM8K and MATH benchmarks.
Empirical results show that, while the vanilla method yields virtually no
improvement (0%) on MATH, EAST achieves around a 1% gain over the backbone
model. On GSM8K, EAST attains a further 1-2% performance boost compared to the
vanilla method.
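To make the weighting mechanism concrete, the following is a minimal Python sketch. It assumes uncertainty is estimated as the Shannon entropy of the empirical distribution of final answers across several self-generated reasoning paths per question, and it uses a power mapping as a hypothetical stand-in for the tunable-sharpness mapping function; the abstract does not specify EAST's exact form, so both choices are illustrative assumptions rather than the authors' implementation.

```python
import math
from collections import Counter

def answer_entropy(sampled_answers):
    """Shannon entropy of the empirical distribution of final answers
    produced by several self-generated reasoning paths for one question.
    Zero entropy means every sampled path agrees on the answer."""
    counts = Counter(sampled_answers)
    total = len(sampled_answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def east_weights(entropies, sharpness=1.0):
    """Map per-question entropies to normalized training weights.

    Hypothetical mapping: raise each entropy to a tunable power
    (`sharpness`) and normalize so the weights sum to the batch size.
    Larger `sharpness` concentrates weight on high-uncertainty questions;
    `sharpness` near 0 recovers uniform (vanilla) weighting."""
    raw = [e ** sharpness for e in entropies]
    z = sum(raw) or 1.0  # guard against an all-certain batch
    n = len(entropies)
    return [n * r / z for r in raw]

# Example: three questions, four sampled answers each.
samples = [
    ["7", "7", "7", "7"],     # model is certain   -> low weight
    ["3", "5", "3", "8"],     # model is uncertain -> high weight
    ["12", "12", "9", "12"],  # mildly uncertain
]
ents = [answer_entropy(s) for s in samples]
print(east_weights(ents, sharpness=2.0))
```

In a weighted self-training loop, each question's loss (or sampling probability) would be scaled by its weight, so gradient updates concentrate on high-entropy examples; setting `sharpness` close to 0 flattens the weights back toward the uniformly weighted vanilla baseline.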