

FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning

November 27, 2025
Authors: Yuan Yao, Lixu Wang, Jiaqi Wu, Jin Song, Simin Chen, Zehua Wang, Zijian Tian, Wei Chen, Huixia Li, Xiaoxiao Li
cs.AI

Abstract

Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into the entangled-label encoding. These are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, each client uploads only a single cross-category entangled representation along with its entangled-label encoding, mitigating the risk of representation inversion attacks and reducing communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
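The client-side entanglement step described above can be sketched in a few lines. This is a minimal illustration under assumed shapes and names, not the authors' implementation (see the linked FedRE repository for the actual code): fresh random weights are sampled each round, normalized, and applied identically to the local representations and their one-hot labels.

```python
import numpy as np

def entangle(representations, labels, num_classes, rng):
    """Aggregate local representations into a single entangled
    representation and the matching entangled-label encoding.

    representations: (n, d) array of local feature vectors
    labels:          (n,) array of integer class labels
    """
    n = representations.shape[0]
    # Sample fresh random weights each round and normalize them to sum to 1.
    w = rng.random(n)
    w = w / w.sum()
    # Weighted sum of representations -> one cross-category entangled vector.
    entangled_rep = w @ representations                  # shape (d,)
    # Apply the SAME weights to the one-hot label encodings.
    one_hot = np.eye(num_classes)[labels]                # shape (n, C)
    entangled_label = w @ one_hot                        # shape (C,)
    return entangled_rep, entangled_label

rng = np.random.default_rng(0)
reps = rng.standard_normal((5, 8))        # 5 local samples, 8-dim features
labels = np.array([0, 2, 2, 1, 0])
e_rep, e_label = entangle(reps, labels, num_classes=3, rng=rng)
# e_label is a soft, cross-category target whose entries sum to 1,
# which is what supervises the global classifier on the server.
```

Because the weights are resampled every round, the pair uploaded by a client differs from round to round, which is what introduces the diversity the abstract credits with smoothing the global classifier's decision boundaries.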