RPCANet++: Deep Interpretable Robust PCA for Sparse Object Segmentation
August 6, 2025
Authors: Fengyi Wu, Yimian Dai, Tianfang Zhang, Yixuan Ding, Jian Yang, Ming-Ming Cheng, Zhenming Peng
cs.AI
Abstract
Robust principal component analysis (RPCA) decomposes an observation matrix
into low-rank background and sparse object components. This capability has
enabled its application in tasks ranging from image restoration to
segmentation. However, traditional RPCA models suffer from computational
burdens caused by matrix operations, reliance on finely tuned hyperparameters,
and rigid priors that limit adaptability in dynamic scenarios. To address these
limitations, we propose RPCANet++, a sparse object segmentation framework that
fuses the interpretability of RPCA with efficient deep architectures. Our
approach unfolds a relaxed RPCA model into a structured network comprising a
Background Approximation Module (BAM), an Object Extraction Module (OEM), and
an Image Restoration Module (IRM). To mitigate inter-stage transmission loss in
the BAM, we introduce a Memory-Augmented Module (MAM) to enhance background
feature preservation, while a Deep Contrast Prior Module (DCPM) leverages
saliency cues to expedite object extraction. Extensive experiments on diverse
datasets demonstrate that RPCANet++ achieves state-of-the-art performance under
various imaging scenarios. We further improve interpretability via visual and
numerical low-rankness and sparsity measurements. By combining the theoretical
strengths of RPCA with the efficiency of deep networks, our approach sets a new
baseline for reliable and interpretable sparse object segmentation. Code is
available on our project webpage: https://fengyiwu98.github.io/rpcanetx.