TinyGSM: achieving >80% on GSM8k with small language models

December 14, 2023
Authors: Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen, Rachel Ward, Yi Zhang
cs.AI

Abstract

Small-scale models offer various computational advantages, yet the extent to which size is critical for problem-solving ability remains an open question. For solving grade-school math in particular, the smallest model size that has so far broken the 80% barrier on the GSM8K benchmark is 34B. Our work studies how high-quality datasets may be the key for small language models to acquire mathematical reasoning. We introduce TinyGSM, a synthetic dataset of 12.3M grade-school math problems paired with Python solutions, generated entirely by GPT-3.5. After finetuning on TinyGSM, we find that a duo of a 1.3B generation model and a 1.3B verifier model can achieve 81.5% accuracy, outperforming existing models that are orders of magnitude larger. This also rivals the performance of the GPT-3.5 "teacher" model (77.4%), from which our model's training data is generated. Our approach is simple and has two key components: (1) the high-quality dataset TinyGSM, and (2) the use of a verifier, which selects the final output from multiple candidate generations.
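To make the dataset concrete: the abstract states that each TinyGSM item pairs a grade-school math problem with a Python solution. The sketch below shows what such a pair might look like; the question text, field names, and function name are illustrative assumptions, not the dataset's actual schema.

```python
# A minimal sketch of a TinyGSM-style problem/solution pair. The exact
# schema and formatting here are assumptions for illustration only.
example = {
    "question": (
        "Tom has 3 boxes of pencils. Each box holds 12 pencils. "
        "He gives 8 pencils to his friend. How many pencils does Tom have left?"
    ),
    "solution": '''
def solution():
    boxes = 3                        # number of boxes
    pencils_per_box = 12             # pencils in each box
    total = boxes * pencils_per_box  # 36 pencils in total
    given_away = 8                   # pencils given to the friend
    return total - given_away       # pencils remaining
''',
}

# Because the solution is executable Python, the numeric answer can be
# obtained (and checked) automatically by running the generated code.
namespace = {}
exec(example["solution"], namespace)
print(namespace["solution"]())  # -> 28
```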
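The second component, verifier-based selection, amounts to best-of-N sampling: the generation model proposes several candidate solutions and the verifier scores each one, with the highest-scoring candidate returned as the final output. The sketch below illustrates this selection loop under stated assumptions; `generate_candidates`, `verifier_score`, and the default candidate count are hypothetical stand-ins, not the paper's actual interfaces or hyperparameters.

```python
# A minimal sketch of verifier-based selection (best-of-N), assuming
# hypothetical callables standing in for the finetuned 1.3B models.
from typing import Callable, List

def select_with_verifier(
    question: str,
    generate_candidates: Callable[[str, int], List[str]],  # generation model
    verifier_score: Callable[[str, str], float],           # verifier model
    num_candidates: int = 48,                              # assumed default
) -> str:
    # Sample several candidate solutions for the same question.
    candidates = generate_candidates(question, num_candidates)
    # The verifier assigns each (question, solution) pair a correctness
    # score; the final output is the candidate it rates highest.
    return max(candidates, key=lambda sol: verifier_score(question, sol))
```

One design point this makes visible: the verifier only needs to rank candidates, so even when any single generation is unreliable, selection over many samples can lift end-to-end accuracy above the generator's single-shot rate.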