

Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training

November 1, 2025
Authors: Ahmet Erdem Pamuk, Emir Kaan Özdemir, Şuayp Talha Kocabay
cs.AI

Abstract

Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent (SGD), a novel optimizer linking gradient updates with quantum superposition by injecting quantum circuit perturbations. We present a mathematical framework and implement hybrid quantum-classical circuits in PyTorch and Qiskit. On synthetic sequence classification and large-scale LLM fine-tuning, SGD converges faster and yields lower final loss than AdamW. Despite promising results, scalability and hardware constraints limit adoption. Overall, this work provides new insights into the intersection of quantum computing and deep learning, suggesting practical pathways for leveraging quantum principles to control and enhance model behavior.
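The sketch below illustrates the general idea described in the abstract, not the authors' released implementation: an AdamW variant that perturbs each gradient with a scalar derived from a small Qiskit superposition circuit before applying the usual update. The class name SuperpositionalSGD, the hyperparameter epsilon, the circuit shape, and the multiplicative form of the perturbation are all assumptions made for illustration.

import torch
from torch.optim import AdamW
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector


def quantum_perturbation(n_qubits: int = 2, theta: float = 0.1) -> float:
    """Return a scalar in [-1, 1] derived from a small superposition circuit.
    Illustrative stand-in for the paper's quantum circuit perturbation."""
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.h(q)          # place each qubit in superposition
        qc.ry(theta, q)  # small parameterized rotation
    if n_qubits > 1:
        qc.cx(0, n_qubits - 1)  # entangle first and last qubit
    state = Statevector.from_instruction(qc)
    p0 = state.probabilities([0])[0]  # marginal P(|0>) on qubit 0
    return 2.0 * p0 - 1.0             # map [0, 1] -> [-1, 1]


class SuperpositionalSGD(AdamW):
    """AdamW whose gradients are perturbed by a quantum-circuit-derived scalar.
    Hypothetical sketch; names and update form are assumptions."""

    def __init__(self, params, epsilon: float = 1e-3, **kwargs):
        super().__init__(params, **kwargs)
        self.epsilon = epsilon  # assumed perturbation-strength hyperparameter

    @torch.no_grad()
    def step(self, closure=None):
        scale = quantum_perturbation()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    # Assumed form of the injection: rescale the gradient
                    # by (1 + epsilon * circuit output) before AdamW's update.
                    p.grad.mul_(1.0 + self.epsilon * scale)
        return super().step(closure)

Used as a drop-in replacement for AdamW, e.g. optimizer = SuperpositionalSGD(model.parameters(), lr=1e-4, epsilon=1e-3); a time-varying circuit angle theta could make the perturbation change across steps.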