CPCANet: Deep Unfolding Common Principal Component Analysis for Domain Generalization
May 7, 2026
Authors: Yu-Hsi Chen, Abd-Krim Seghouane
cs.AI
Abstract
Domain Generalization (DG) aims to learn representations that remain robust under out-of-distribution (OOD) shifts and generalize effectively to unseen target domains. While recent invariant learning strategies and architectural advances have achieved strong performance, explicitly discovering a structured domain-invariant subspace through second-order statistics remains underexplored. In this work, we propose CPCANet, a novel framework grounded in Common Principal Component Analysis (CPCA), which unrolls the iterative Flury-Gautschi (FG) algorithm into fully differentiable neural layers. This approach integrates the statistical properties of CPCA into an end-to-end trainable framework, enforcing the discovery of a shared subspace across diverse domains while preserving interpretability. Experiments on four standard DG benchmarks demonstrate that CPCANet achieves state-of-the-art (SOTA) performance in zero-shot transfer. Moreover, CPCANet is architecture-agnostic and requires no dataset-specific tuning, providing a simple and efficient approach to learning robust representations under distribution shift. Code is available at https://github.com/wish44165/CPCANet.
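The core idea above is deep unfolding: a fixed number of iterations of a classical algorithm are rewritten as differentiable layers, so the whole pipeline can be trained end to end while retaining the original algorithm's structure. As a toy analogy (not the paper's FG layers), the sketch below unrolls power iteration on pooled per-domain covariance matrices to recover a dominant shared principal direction; every step is a plain matrix-vector product, hence differentiable. The function names and the averaging of covariances are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def domain_covariances(features_per_domain):
    # one covariance matrix per domain (second-order statistics)
    return [np.cov(X, rowvar=False) for X in features_per_domain]

def unrolled_power_iteration(covs, n_steps=50):
    # "unfold" power iteration into a fixed number of steps; each step
    # is a differentiable matrix-vector product followed by a
    # normalization, i.e. one "layer" of the unrolled network.
    # NOTE: pooling covariances is a crude stand-in for the FG
    # algorithm's simultaneous-diagonalization updates.
    S = sum(covs) / len(covs)
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(n_steps):
        v = S @ v
        v = v / np.linalg.norm(v)  # keep the iterate on the unit sphere
    return v
```

In CPCANet, the analogous unrolled FG steps become trainable layers, so the shared-subspace estimate is shaped jointly by the data and the downstream DG objective rather than fixed in advance.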