RPCANet++: Deep Interpretable Robust PCA for Sparse Object Segmentation
August 6, 2025
Authors: Fengyi Wu, Yimian Dai, Tianfang Zhang, Yixuan Ding, Jian Yang, Ming-Ming Cheng, Zhenming Peng
cs.AI
Abstract
Robust principal component analysis (RPCA) decomposes an observation matrix
into low-rank background and sparse object components. This capability has
enabled its application in tasks ranging from image restoration to
segmentation. However, traditional RPCA models suffer from computational
burdens caused by matrix operations, reliance on finely tuned hyperparameters,
and rigid priors that limit adaptability in dynamic scenarios. To address these
limitations, we propose RPCANet++, a sparse object segmentation framework that
fuses the interpretability of RPCA with efficient deep architectures. Our
approach unfolds a relaxed RPCA model into a structured network comprising a
Background Approximation Module (BAM), an Object Extraction Module (OEM), and
an Image Restoration Module (IRM). To mitigate inter-stage transmission loss in
the BAM, we introduce a Memory-Augmented Module (MAM) to enhance background
feature preservation, while a Deep Contrast Prior Module (DCPM) leverages
saliency cues to expedite object extraction. Extensive experiments on diverse
datasets demonstrate that RPCANet++ achieves state-of-the-art performance under
various imaging scenarios. We further improve interpretability via visual and
numerical low-rankness and sparsity measurements. By combining the theoretical
strengths of RPCA with the efficiency of deep networks, our approach sets a new
baseline for reliable and interpretable sparse object segmentation. Code is
available at our project webpage: https://fengyiwu98.github.io/rpcanetx.
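The abstract builds on the classical RPCA decomposition that RPCANet++ unfolds into a network: an observation matrix M is split into a low-rank background L and a sparse object component S, typically via Principal Component Pursuit, min ||L||_* + λ||S||_1 subject to M = L + S. The sketch below illustrates that classical baseline (not the authors' RPCANet++ network) with a minimal ADMM solver; the function names `svt`, `shrink`, and `rpca_pcp` and the toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Decompose M ~ L + S by ADMM on Principal Component Pursuit."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    if mu is None:
        mu = m * n / (4.0 * np.sum(np.abs(M)))  # common penalty heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        R = M - L - S                           # primal residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy example: rank-2 "background" plus a few sparse "objects".
rng = np.random.default_rng(0)
bg = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
obj = np.zeros((60, 60))
obj.flat[rng.choice(60 * 60, size=30, replace=False)] = 10.0
L, S = rpca_pcp(bg + obj)
```

Iterative solvers like this are exactly what deep unfolding replaces: each ADMM iteration becomes a learned network stage, so the fixed thresholds and penalty parameters above are replaced by trainable modules.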