

Artificial Entanglement in the Fine-Tuning of Large Language Models

January 11, 2026
Authors: Min Chen, Zihan Wang, Canyu Chen, Zeguan Wu, Manling Li, Junyu Liu
cs.AI

Abstract

Large language models (LLMs) can be adapted to new tasks using parameter-efficient fine-tuning (PEFT) methods that modify only a small number of trainable parameters, often through low-rank updates. In this work, we adopt a quantum-information-inspired perspective to understand their effectiveness. From this perspective, low-rank parameterizations naturally correspond to low-dimensional Matrix Product State (MPS) representations, which enable entanglement-based characterizations of parameter structure. We thereby define and measure "Artificial Entanglement", the entanglement entropy of the parameters of artificial neural networks (in particular, LLMs). We first study the representative low-rank adaptation (LoRA) PEFT method, alongside full fine-tuning (FFT), using LLaMA models at the 1B and 8B scales trained on the Tulu3 and OpenThoughts3 datasets, and uncover: (i) internal artificial entanglement in the updates of the query and value projection matrices in LoRA follows a volume law with a central suppression (termed the "Entanglement Valley"), which is sensitive to hyper-parameters and is distinct from that in FFT; (ii) external artificial entanglement in attention matrices, corresponding to token-token correlations in representation space, follows an area law with logarithmic corrections and remains robust to LoRA hyper-parameters and training steps. Drawing a parallel to the No-Hair Theorem in black hole physics, we propose that although LoRA and FFT induce distinct internal entanglement signatures, these differences do not manifest in the attention outputs, suggesting a "no-hair" property that underlies the effectiveness of low-rank updates. We further provide theoretical support based on random matrix theory, and extend our analysis to an MPS Adaptation PEFT method, which exhibits qualitatively similar behavior.
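
As a rough illustration of the quantity being measured, the sketch below computes one plausible notion of "artificial entanglement" for a single parameter matrix: the von Neumann entropy of its normalized singular-value (Schmidt) spectrum under a rows-versus-columns bipartition. The paper's exact definition, MPS bipartition, and normalization are not given in the abstract, so the function name, the LoRA shapes (d = 2048, r = 16), and the scaling below are illustrative assumptions, not the authors' implementation.

import numpy as np

def entanglement_entropy(W):
    # Von Neumann entropy of the Schmidt spectrum of W under a
    # rows-vs-columns bipartition; an illustrative proxy, not the paper's metric.
    s = np.linalg.svd(W, compute_uv=False)   # singular values of W
    p = s**2 / np.sum(s**2)                  # normalized Schmidt probabilities
    p = p[p > 1e-12]                         # drop numerically zero modes
    return float(-np.sum(p * np.log(p)))

# Hypothetical LoRA-style update Delta W = B @ A with rank r << d.
rng = np.random.default_rng(0)
d, r = 2048, 16
B = rng.normal(size=(d, r)) / np.sqrt(d)
A = rng.normal(size=(r, d)) / np.sqrt(d)
delta_W = B @ A

# A rank-r update can carry at most log(r) nats of this entropy,
# while a full-rank (FFT-style) update can reach up to log(d).
print("LoRA-style update entropy:", entanglement_entropy(delta_W))
print("log(r) upper bound:       ", float(np.log(r)))

Under this toy measure, a rank-r LoRA update is capped at log(r) nats while a full-rank update can reach log(d); that kind of internal difference between LoRA and FFT is what the abstract contrasts with the external (attention-level) entanglement, which it reports as robust across both.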