Self-Taught Self-Correction for Small Language Models
March 11, 2025
Authors: Viktor Moskvoretskii, Chris Biemann, Irina Nikishina
cs.AI
Abstract
Although large language models (LLMs) have achieved remarkable performance
across various tasks, they remain prone to errors. A key challenge is enabling
them to self-correct. While prior research has relied on external tools or
large proprietary models, this work explores self-correction in small language
models (SLMs) through iterative fine-tuning using solely self-generated data.
We introduce the Self-Taught Self-Correction (STaSC) algorithm, which
incorporates multiple algorithmic design choices. Experimental results on a
question-answering task demonstrate that STaSC effectively learns
self-correction, leading to significant performance improvements. Our analysis
further provides insights into the mechanisms of self-correction and the impact
of different design choices on learning dynamics and overall performance. To
support future research, we release our user-friendly codebase and lightweight
models.
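To make the described training scheme concrete, here is a minimal sketch of one iteration of a self-taught self-correction loop: generate an initial answer, generate a self-correction, keep only corrections that turn a wrong answer into a right one, and fine-tune on the filtered data. This is a hypothetical toy illustration, not the authors' implementation: `generate`, `correct`, and the dict-based "model" are stand-ins for sampling from and fine-tuning a real SLM.

```python
# Hypothetical sketch of one STaSC-style iteration. The "model" is a toy
# dict stand-in; a real run would sample from and fine-tune an actual SLM.

def generate(model, question):
    # Stand-in for sampling an initial answer from the model.
    return model.get(question, "?")

def correct(model, question, draft):
    # Stand-in for sampling a self-correction of the model's own draft.
    return model.get(("fix", question), draft)

def stasc_iteration(model, train_set, is_correct):
    """Collect self-generated corrections that turn a wrong initial answer
    into a correct one, then 'fine-tune' (here: memorize) on them."""
    filtered = []
    for question, gold in train_set:
        draft = generate(model, question)
        revision = correct(model, question, draft)
        # Keep only improving corrections (wrong draft -> correct revision).
        if not is_correct(draft, gold) and is_correct(revision, gold):
            filtered.append((question, revision))
    for question, revision in filtered:
        model[question] = revision  # toy substitute for a fine-tuning step
    return filtered
```

Iterating this loop lets the model's initial answers absorb its own filtered corrections, which is the basic dynamic the abstract describes.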