
Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning

July 28, 2025
Authors: Sebastián Andrés Cajas Ordóñez, Luis Fernando Torres Torres, Mario Bifulco, Carlos Andrés Durán, Cristian Bosch, Ricardo Simón Carbajo
cs.AI

Abstract

Quantum Support Vector Machines face scalability challenges due to high-dimensional quantum states and hardware limitations. We propose an embedding-aware quantum-classical pipeline combining class-balanced k-means distillation with pretrained Vision Transformer embeddings. Our key finding is that ViT embeddings uniquely enable quantum advantage, achieving accuracy improvements over classical SVMs of up to 8.02% on Fashion-MNIST and 4.42% on MNIST, while CNN features show performance degradation. Using 16-qubit tensor network simulation via cuTensorNet, we provide the first systematic evidence that quantum kernel advantage depends critically on embedding choice, revealing a fundamental synergy between transformer attention and quantum feature spaces. This provides a practical pathway for scalable quantum machine learning that leverages modern neural architectures.
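
The pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: it assumes precomputed ViT embeddings, reduces them with PCA to one feature per qubit (16 here), distills the training set with per-class k-means, and stands in for the 16-qubit quantum kernel with the closed-form fidelity of a simple per-qubit RY angle encoding, k(x, y) = ∏_i cos²((x_i − y_i)/2), rather than the paper's circuits and cuTensorNet simulation. All function names (class_balanced_distill, angle_encoding_kernel, fit_quantum_kernel_svm) and parameter choices are hypothetical.

```python
# Illustrative sketch of an embedding-aware quantum-classical SVM pipeline.
# Assumptions (not from the paper): ViT embeddings are precomputed, the quantum
# kernel is replaced by the closed-form fidelity of a product-state RY angle
# encoding, and all hyperparameters are placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC


def class_balanced_distill(X, y, per_class):
    """Distill the training set with k-means run separately per class,
    keeping the real sample closest to each centroid (class-balanced)."""
    Xd, yd = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        km = KMeans(n_clusters=min(per_class, len(Xc)), n_init=10,
                    random_state=0).fit(Xc)
        for centroid in km.cluster_centers_:
            idx = np.argmin(np.linalg.norm(Xc - centroid, axis=1))
            Xd.append(Xc[idx])
            yd.append(c)
    return np.array(Xd), np.array(yd)


def angle_encoding_kernel(A, B):
    """Fidelity kernel for a per-qubit RY angle encoding:
    k(x, y) = prod_i cos^2((x_i - y_i) / 2)."""
    diff = A[:, None, :] - B[None, :, :]
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)


def fit_quantum_kernel_svm(X_emb, y, n_qubits=16, per_class=50):
    """Reduce pretrained ViT embeddings to one feature per qubit, rescale each
    feature to a rotation angle in [0, pi], distill, and fit a kernel SVM."""
    pca = PCA(n_components=n_qubits).fit(X_emb)
    scaler = MinMaxScaler(feature_range=(0.0, np.pi)).fit(pca.transform(X_emb))
    Z = scaler.transform(pca.transform(X_emb))
    Zd, yd = class_balanced_distill(Z, y, per_class)
    svm = SVC(kernel="precomputed").fit(angle_encoding_kernel(Zd, Zd), yd)
    return svm, pca, scaler, Zd
```

At prediction time the same transforms and kernel are applied against the distilled training set, e.g. Z_test = scaler.transform(pca.transform(X_test_emb)) followed by svm.predict(angle_encoding_kernel(Z_test, Zd)).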