Pruning Overparameterized Multi-Task Networks for Degraded Web Image Restoration
October 16, 2025
Authors: Thomas Katraouras, Dimitrios Rafailidis
cs.AI
Abstract
Image quality is a critical factor in delivering visually appealing content
on web platforms. However, images often suffer from degradation due to lossy
operations applied by online social networks (OSNs), negatively affecting user
experience. Image restoration is the process of recovering a clean,
high-quality image from a given degraded input. Recently, multi-task
(all-in-one) image restoration models have gained significant attention due to
their ability to handle different types of image degradation simultaneously.
However, these
models often come with an excessively high number of trainable parameters,
making them computationally inefficient. In this paper, we propose a strategy
for compressing multi-task image restoration models, aiming to discover highly
sparse subnetworks within overparameterized deep models that can match or even
surpass the performance of their dense counterparts. The proposed model, namely
MIR-L, utilizes an iterative pruning strategy that removes low-magnitude
weights across multiple rounds, while resetting the remaining weights to their
original initialization. This iterative process is essential to the multi-task
image restoration model's optimization, effectively uncovering "winning
tickets" that maintain or exceed state-of-the-art performance at high sparsity
levels. Experimental evaluation on benchmark datasets for the deraining,
dehazing, and denoising tasks shows that MIR-L retains only 10% of the
trainable parameters while maintaining high image restoration performance. Our
code, datasets, and pre-trained models are made publicly available at
https://github.com/Thomkat/MIR-L.
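
The pruning strategy summarized above follows the lottery-ticket recipe: train, remove the lowest-magnitude weights, rewind the survivors to their original initialization, and repeat. The sketch below illustrates this loop, assuming a PyTorch model; build_model, train_one_round, and the round and pruning-fraction settings are illustrative placeholders rather than the actual MIR-L implementation, which is available at the repository linked above.

```python
# Minimal sketch of iterative magnitude pruning with weight rewinding
# (lottery-ticket style). Assumes a PyTorch model; build_model and
# train_one_round are hypothetical callables, not MIR-L code.
import copy
import torch


def iterative_magnitude_pruning(build_model, train_one_round,
                                rounds=5, prune_frac=0.37):
    model = build_model()
    init_state = copy.deepcopy(model.state_dict())  # original initialization
    # One binary mask per weight tensor (biases and norm layers stay dense).
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        # Train the current subnetwork; the trainer is expected to keep
        # masked weights at zero (e.g. by re-applying masks after each step).
        train_one_round(model, masks)

        # Rank surviving weights by magnitude and drop the lowest prune_frac.
        surviving = torch.cat([(p.detach().abs() * masks[name]).flatten()
                               for name, p in model.named_parameters()
                               if name in masks])
        surviving = surviving[surviving > 0]
        k = int(prune_frac * surviving.numel())
        threshold = surviving.sort().values[k]
        for name, p in model.named_parameters():
            if name in masks:
                masks[name] = ((p.detach().abs() > threshold)
                               & masks[name].bool()).float()

        # Rewind the remaining weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    return model, masks
```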