Pruning Overparameterized Multi-Task Networks for Degraded Web Image Restoration
October 16, 2025
Authors: Thomas Katraouras, Dimitrios Rafailidis
cs.AI
Abstract
Image quality is a critical factor in delivering visually appealing content
on web platforms. However, images often suffer from degradation due to lossy
operations applied by online social networks (OSNs), negatively affecting user
experience. Image restoration is the process of recovering a clean high-quality
image from a given degraded input. Recently, multi-task (all-in-one) image
restoration models have gained significant attention, due to their ability to
simultaneously handle different types of image degradations. However, these
models often come with an excessively high number of trainable parameters,
making them computationally inefficient. In this paper, we propose a strategy
for compressing multi-task image restoration models. We aim to discover highly
sparse subnetworks within overparameterized deep models that can match or even
surpass the performance of their dense counterparts. The proposed model, namely
MIR-L, utilizes an iterative pruning strategy that removes low-magnitude
weights across multiple rounds, while resetting the remaining weights to their
original initialization. This iterative process is important for the multi-task
image restoration model's optimization, effectively uncovering "winning
tickets" that maintain or exceed state-of-the-art performance at high sparsity
levels. Experimental evaluation on benchmark datasets for the deraining,
dehazing, and denoising tasks shows that MIR-L retains only 10% of the
trainable parameters while maintaining high image restoration performance. Our
code, datasets, and pre-trained models are publicly available at
https://github.com/Thomkat/MIR-L.
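The iterative pruning-and-rewinding loop described in the abstract can be sketched as follows. This is a minimal illustration of lottery-ticket-style iterative magnitude pruning, not MIR-L's actual implementation: `train_fn`, `prune_frac`, and `rounds` are hypothetical names, weights are modeled as a flat list of floats, and a real model would operate on framework tensors per layer.

```python
def prune_iteratively(init_weights, train_fn, prune_frac=0.2, rounds=5):
    """Lottery-ticket-style iterative magnitude pruning (illustrative sketch).

    init_weights: flat list of floats (the network's initial weights)
    train_fn:     callable (weights, mask) -> trained weights; stands in
                  for a full training run of the sparse network
    prune_frac:   fraction of *surviving* weights pruned each round
    rounds:       number of prune/rewind iterations
    """
    n = len(init_weights)
    mask = [1] * n                      # 1 = weight kept, 0 = pruned
    weights = list(init_weights)
    for _ in range(rounds):
        # 1. Train the current sparse subnetwork.
        weights = train_fn(weights, mask)
        # 2. Rank surviving weights by magnitude; prune the lowest fraction.
        survivors = [i for i in range(n) if mask[i]]
        survivors.sort(key=lambda i: abs(weights[i]))
        for i in survivors[: int(len(survivors) * prune_frac)]:
            mask[i] = 0
        # 3. Rewind the remaining weights to their original initialization.
        weights = [w if m else 0.0 for w, m in zip(init_weights, mask)]
    return weights, mask
```

With `prune_frac=0.2`, each round removes 20% of the weights that are still alive, so sparsity compounds geometrically across rounds; the rewind step in (3) is what distinguishes this from one-shot pruning and, per the abstract, is important for recovering "winning tickets" at high sparsity.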