

Learning the Latent Rules of a Game from Data: A Chess Story

October 3, 2024
Author: Ben Fauber
cs.AI

Abstract

We demonstrate that small pretrained foundational generative language models with millions of parameters can learn the latent rules of a process from data associated with the process. Inspired by Stefan Zweig's novella "Schachnovelle," also known as "The Royal Game" in English, we show that 28M and 125M parameter pretrained foundational small language models (SLMs) can be instruction fine-tuned with 1,000-to-1,000,000 examples to learn the rules of chess, propose legal moves, and accurately solve chess problems. We also explore the impact of successive language model fine-tuning epochs on improved outcomes and demonstrate reductions in model hallucinations by increasing the number of instruction fine-tuning examples.
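To make the data side of this setup concrete, below is a minimal sketch (not the authors' released code) of how instruction fine-tuning pairs for proposing legal chess moves could be generated with the python-chess library. The instruction/FEN/SAN schema and the file name are assumptions for illustration; the abstract does not specify the exact data format used in the paper.

```python
# Minimal sketch: generate instruction fine-tuning examples for legal chess
# moves with python-chess. The prompt/response schema below is an assumption
# for illustration, not the paper's published format.
import json
import random

import chess  # pip install python-chess


def random_position(max_plies: int = 40) -> chess.Board:
    """Play a short random game to reach a varied but legal position."""
    board = chess.Board()
    for _ in range(random.randint(0, max_plies)):
        moves = list(board.legal_moves)
        if not moves:
            break
        board.push(random.choice(moves))
    return board


def make_example(board: chess.Board) -> dict:
    """Build one instruction/response pair proposing a legal move."""
    move = random.choice(list(board.legal_moves))
    return {
        "instruction": "Propose a legal move for the side to play.",
        "input": board.fen(),       # position encoded as FEN
        "output": board.san(move),  # a legal move in SAN, e.g. "Nf3"
    }


if __name__ == "__main__":
    examples = []
    while len(examples) < 1_000:  # scale toward 1,000,000 as in the paper
        board = random_position()
        if not board.is_game_over():
            examples.append(make_example(board))
    with open("chess_sft_examples.jsonl", "w") as f:  # hypothetical file name
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

Examples in this instruction/input/output form could then be fed to any standard instruction fine-tuning pipeline for the 28M and 125M parameter SLMs described above.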

