Entropy-Based Adaptive Weighting for Self-Training
March 31, 2025
作者: Xiaoxuan Wang, Yihe Deng, Mingyu Derek Ma, Wei Wang
cs.AI
Abstract
The mathematical problem-solving capabilities of large language models have
become a focal point of research, with growing interest in leveraging
self-generated reasoning paths as a promising way to refine and enhance these
models. These paths capture step-by-step logical processes while requiring only
the correct answer for supervision. Self-training has been shown to be
effective on reasoning tasks while eliminating the need for external models
and manual annotations. However, how best to use self-generated data for
model training remains an open challenge. In this work, we propose
Entropy-Based Adaptive Weighting for Self-Training (EAST), an adaptive
weighting strategy designed to prioritize uncertain data during self-training.
Specifically, EAST employs a mapping function with a tunable parameter that
controls the sharpness of the weighting, assigning higher weights to data on
which the model exhibits greater uncertainty. This approach guides the model to
focus on more informative and challenging examples, thereby enhancing its
reasoning ability. We evaluate our approach on the GSM8K and MATH benchmarks.
Empirical results show that, while the vanilla method yields virtually no
improvement (0%) on MATH, EAST achieves around a 1% gain over the backbone
model. On GSM8K, EAST attains a further 1-2% performance boost compared to the
vanilla method.
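The abstract does not give EAST's exact mapping function, so the following is only a minimal Python sketch of the general idea: per-question uncertainty is measured as the entropy of the model's sampled final answers, and a softmax-style mapping with a sharpness parameter converts entropies into training weights. The names (answer_entropy, entropy_to_weights, sharpness) and the softmax choice are illustrative assumptions, not the paper's actual formulation.

```python
import math
from collections import Counter

def answer_entropy(sampled_answers):
    """Shannon entropy of the model's sampled final answers for one question.

    High entropy = the model is uncertain (its samples disagree).
    """
    counts = Counter(sampled_answers)
    total = len(sampled_answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def entropy_to_weights(entropies, sharpness=1.0):
    """Map per-example entropies to normalized training weights.

    Assumed softmax-style mapping: larger `sharpness` concentrates weight on
    the most uncertain examples; sharpness -> 0 recovers uniform weighting,
    i.e., vanilla self-training.
    """
    exps = [math.exp(sharpness * h) for h in entropies]
    z = sum(exps)
    return [e / z for e in exps]

# Usage sketch: weight per-example losses in the self-training objective.
# batch_entropies = [answer_entropy(samples) for samples in batch_samples]
# weights = entropy_to_weights(batch_entropies, sharpness=2.0)
# loss = sum(w * l for w, l in zip(weights, per_example_losses))
```

Under this reading, the sharpness parameter interpolates between standard uniform self-training and training concentrated on the examples where the model's self-generated answers disagree most.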