ZJUKLAB at SemEval-2025 Task 4: Unlearning via Model Merging

March 27, 2025
作者: Haoming Xu, Shuxun Wang, Yanqiu Zhao, Yi Zhong, Ziyan Jiang, Ningyuan Zhao, Shumin Deng, Huajun Chen, Ningyu Zhang
cs.AI

Abstract

This paper presents the ZJUKLAB team's submission for SemEval-2025 Task 4: Unlearning Sensitive Content from Large Language Models. This task aims to selectively erase sensitive knowledge from large language models, avoiding both over-forgetting and under-forgetting issues. We propose an unlearning system that leverages Model Merging (specifically TIES-Merging), combining two specialized models into a more balanced unlearned model. Our system achieves competitive results, ranking second among 26 teams, with an online score of 0.944 for Task Aggregate and 0.487 for overall Aggregate. In this paper, we also conduct local experiments and perform a comprehensive analysis of the unlearning process, examining performance trajectories, loss dynamics, and weight perspectives, along with several supplementary experiments, to understand the effectiveness of our method. Furthermore, we analyze the shortcomings of our method and evaluation metrics, emphasizing that MIA scores and ROUGE-based metrics alone are insufficient to fully evaluate successful unlearning. Finally, we emphasize the need for more comprehensive evaluation methodologies and rethinking of unlearning objectives in future research. Code is available at https://github.com/zjunlp/unlearn/tree/main/semeval25.
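
The abstract names TIES-Merging as the mechanism for combining the two specialized models. Below is a minimal sketch of the TIES-Merging idea (trim task vectors, elect a per-parameter sign, then merge only the agreeing entries), assuming PyTorch state dicts for a shared base model and two unlearned checkpoints. The function name and the density and lam hyperparameters are illustrative assumptions, not the team's released implementation; see the linked repository for the actual code.

```python
# Illustrative sketch of TIES-Merging for two specialized unlearned models.
# Not the authors' code; hyperparameter names (density, lam) are assumptions.
import torch

def ties_merge(base_state, finetuned_states, density=0.2, lam=1.0):
    """Merge task vectors via Trim, Elect Sign, and Disjoint Merge."""
    merged = {}
    for name, base_w in base_state.items():
        # Task vectors: parameter deltas of each specialized model vs. the base.
        deltas = [ft[name] - base_w for ft in finetuned_states]

        # Trim: keep only the top-`density` fraction of entries by magnitude.
        trimmed = []
        for d in deltas:
            k = max(1, int(density * d.numel()))
            threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
            trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

        # Elect sign: sign of the summed trimmed deltas, per parameter entry.
        stacked = torch.stack(trimmed)
        elected_sign = torch.sign(stacked.sum(dim=0))

        # Disjoint merge: average only the entries that agree with the elected sign.
        agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
        counts = agree.sum(dim=0).clamp(min=1)
        merged_delta = (stacked * agree).sum(dim=0) / counts

        # Add the scaled merged task vector back onto the base weights.
        merged[name] = base_w + lam * merged_delta
    return merged

# Usage sketch (hypothetical checkpoints): merge two unlearned models into one.
# merged_state = ties_merge(base.state_dict(), [model_a.state_dict(), model_b.state_dict()])
# base.load_state_dict(merged_state)
```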
