

Reparameterized LLM Training via Orthogonal Equivalence Transformation

June 9, 2025
Authors: Zeju Qiu, Simon Buchholz, Tim Z. Xiao, Maximilian Dax, Bernhard Schölkopf, Weiyang Liu
cs.AI

Abstract

While large language models (LLMs) are driving the rapid advancement of artificial intelligence, effectively and reliably training these large models remains one of the field's most significant challenges. To address this challenge, we propose POET, a novel reParameterized training algorithm that uses Orthogonal Equivalence Transformation to optimize neurons. Specifically, POET reparameterizes each neuron with two learnable orthogonal matrices and a fixed random weight matrix. Because of its provable preservation of spectral properties of weight matrices, POET can stably optimize the objective function with improved generalization. We further develop efficient approximations that make POET flexible and scalable for training large-scale neural networks. Extensive experiments validate the effectiveness and scalability of POET in training LLMs.
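To make the reparameterization concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a fixed random weight matrix W0 is wrapped by two learnable orthogonal matrices R and P, so the effective weight is W = R · W0 · P and its singular values stay those of W0. The class name `POETLinear` and the use of PyTorch's built-in orthogonal parametrization are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class POETLinear(nn.Module):
    """Illustrative sketch: orthogonal equivalence transformation of a fixed random weight."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random weight matrix; registered as a buffer so it is never updated.
        self.register_buffer(
            "W0", torch.randn(out_features, in_features) / in_features ** 0.5
        )
        # Learnable matrices constrained to stay orthogonal during training.
        self.R = orthogonal(nn.Linear(out_features, out_features, bias=False))
        self.P = orthogonal(nn.Linear(in_features, in_features, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W = R @ W0 @ P: multiplying by orthogonal matrices on both sides
        # leaves the singular values (spectral properties) of W0 unchanged.
        W = self.R.weight @ self.W0 @ self.P.weight
        return nn.functional.linear(x, W)

# Usage: only R and P receive gradients; W0 stays fixed.
layer = POETLinear(128, 64)
x = torch.randn(8, 128)
y = layer(x)
```

The paper's efficient approximations for scaling this scheme to large networks are not reflected in this sketch.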