CPCANet: Deep Unfolding Common Principal Component Analysis for Domain Generalization
May 7, 2026
Authors: Yu-Hsi Chen, Abd-Krim Seghouane
cs.AI
Abstract
Domain Generalization (DG) aims to learn representations that remain robust under out-of-distribution (OOD) shifts and generalize effectively to unseen target domains. While recent invariant learning strategies and architectural advances have achieved strong performance, explicitly discovering a structured domain-invariant subspace through second-order statistics remains underexplored. In this work, we propose CPCANet, a novel framework grounded in Common Principal Component Analysis (CPCA), which unrolls the iterative Flury-Gautschi (FG) algorithm into fully differentiable neural layers. This approach integrates the statistical properties of CPCA into an end-to-end trainable framework, enforcing the discovery of a shared subspace across diverse domains while preserving interpretability. Experiments on four standard DG benchmarks demonstrate that CPCANet achieves state-of-the-art (SOTA) performance in zero-shot transfer. Moreover, CPCANet is architecture-agnostic and requires no dataset-specific tuning, providing a simple and efficient approach to learning robust representations under distribution shift. Code is available at https://github.com/wish44165/CPCANet.
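The Flury-Gautschi (FG) procedure that CPCANet unrolls is, in its classical form, a sequence of Jacobi-style pairwise rotations that drive a single orthogonal basis toward simultaneously diagonalizing the per-domain covariance matrices. The following is a minimal NumPy sketch of that classical batch algorithm, not the paper's differentiable layers; the function name, default iteration counts, and convergence check are my own illustrative choices.

```python
import numpy as np

def fg_common_pcs(covs, weights=None, sweeps=300, tol=1e-12):
    """Estimate a common eigenvector basis B for several covariance matrices
    via Flury-Gautschi pairwise rotations (illustrative sketch).

    covs : array-like of shape (k, d, d), symmetric positive-definite.
    Returns an orthogonal (d, d) matrix B such that B.T @ covs[i] @ B is
    approximately diagonal for every i when common components exist.
    """
    covs = np.asarray(covs, dtype=float)            # (k, d, d)
    k, d, _ = covs.shape
    w = np.ones(k) if weights is None else np.asarray(weights, dtype=float)
    B = np.eye(d)                                   # start from the identity basis
    for _ in range(sweeps):
        max_off = 0.0
        for p in range(d - 1):
            for q in range(p + 1, d):
                Bpq = B[:, [p, q]]                  # current column pair, d x 2
                T = np.zeros((2, 2))
                for i in range(k):
                    Ti = Bpq.T @ covs[i] @ Bpq      # 2x2 projected covariance
                    d1, d2 = Ti[0, 0], Ti[1, 1]
                    T += w[i] * (d1 - d2) / (d1 * d2) * Ti
                # Rotate (b_p, b_q) onto the eigenvectors of the 2x2 matrix T,
                # ordering/signing them so the rotation is closest to identity.
                _, V = np.linalg.eigh(T)
                if abs(V[0, 0]) < abs(V[0, 1]):
                    V = V[:, ::-1]
                V = V * np.sign(np.diag(V))         # make diagonal entries positive
                B[:, [p, q]] = Bpq @ V
                max_off = max(max_off, abs(V[0, 1]))
        if max_off < tol:                           # rotations became negligible
            break
    return B
```

CPCANet's contribution, per the abstract, is to unfold these iterative rotation updates into fully differentiable neural layers trained end to end; the sketch above only illustrates the underlying statistical procedure being unrolled.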