
Routing the Lottery: Adaptive Subnetworks for Heterogeneous Data

January 29, 2026
Authors: Grzegorz Stefanski, Alberto Presta, Michal Byra
cs.AI

Abstract

In pruning, the Lottery Ticket Hypothesis posits that large networks contain sparse subnetworks, or winning tickets, that can be trained in isolation to match the performance of their dense counterparts. However, most existing approaches assume a single universal winning ticket shared across all inputs, ignoring the inherent heterogeneity of real-world data. In this work, we propose Routing the Lottery (RTL), an adaptive pruning framework that discovers multiple specialized subnetworks, called adaptive tickets, each tailored to a class, semantic cluster, or environmental condition. Across diverse datasets and tasks, RTL consistently outperforms single- and multi-model baselines in balanced accuracy and recall, while using up to 10 times fewer parameters than independent models and exhibiting semantically aligned specialization. Furthermore, we identify subnetwork collapse, a performance drop under aggressive pruning, and introduce a subnetwork similarity score that enables label-free diagnosis of oversparsification. Overall, our results recast pruning as a mechanism for aligning model structure with data heterogeneity, paving the way toward more modular and context-aware deep learning.
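The abstract does not specify how RTL discovers or routes its adaptive tickets, nor how the subnetwork similarity score is defined. The sketch below is therefore only an illustration of the general idea: a shared dense backbone with one binary pruning mask per cluster selected at inference time, plus a Jaccard overlap between masks as a stand-in diagnostic for subnetwork collapse. All names (`AdaptiveTickets`, `mask_jaccard`, the random mask generation) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; RTL's actual routing, mask discovery, and similarity
# score are not described in the abstract. Masks here are random placeholders.
import torch
import torch.nn as nn


class AdaptiveTickets(nn.Module):
    """Shared dense backbone with one binary mask ("adaptive ticket") per cluster."""

    def __init__(self, backbone: nn.Module, num_clusters: int, sparsity: float = 0.9):
        super().__init__()
        self.backbone = backbone
        # One mask per cluster and per weight matrix (biases stay dense). In practice
        # these would come from cluster-specific pruning, not random sampling.
        self.masks = [
            {
                name: (torch.rand_like(p) > sparsity).float()
                for name, p in backbone.named_parameters()
                if p.dim() > 1
            }
            for _ in range(num_clusters)
        ]

    def forward(self, x: torch.Tensor, cluster_id: int) -> torch.Tensor:
        # Temporarily apply the selected cluster's mask to the shared weights.
        mask = self.masks[cluster_id]
        originals = {}
        for name, p in self.backbone.named_parameters():
            if name in mask:
                originals[name] = p.data.clone()
                p.data.mul_(mask[name])
        out = self.backbone(x)
        # Restore the dense weights so other tickets remain usable.
        for name, p in self.backbone.named_parameters():
            if name in originals:
                p.data.copy_(originals[name])
        return out


def mask_jaccard(mask_a: dict, mask_b: dict) -> float:
    """Jaccard (IoU) overlap between the retained weights of two tickets.

    If aggressive pruning drives all tickets toward the same tiny subnetwork
    (subnetwork collapse), pairwise overlap rises sharply, giving a label-free
    warning sign of oversparsification.
    """
    inter, union = 0.0, 0.0
    for name in mask_a:
        a, b = mask_a[name].bool(), mask_b[name].bool()
        inter += (a & b).sum().item()
        union += (a | b).sum().item()
    return inter / max(union, 1.0)


if __name__ == "__main__":
    backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    model = AdaptiveTickets(backbone, num_clusters=3, sparsity=0.9)
    x = torch.randn(4, 32)
    print(model(x, cluster_id=1).shape)                 # torch.Size([4, 10])
    print(mask_jaccard(model.masks[0], model.masks[1])) # low overlap => specialized tickets
```

With random independent masks the overlap stays low; the diagnostic becomes informative when masks are learned, since collapsing tickets converge on the same surviving weights.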