

Domain Incremental Lifelong Learning in an Open World

May 11, 2023
Authors: Yi Dai, Hao Lang, Yinhe Zheng, Bowen Yu, Fei Huang, Yongbin Li
cs.AI

Abstract

Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain-incremental LL scenarios, since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong learning model that learns a sequence of tasks with a prompt-enhanced language model. Diana uses four types of hierarchically organized prompts to capture knowledge at different granularities. Specifically, task-level prompts capture task-specific knowledge to retain high LL performance, while instance-level prompts learn knowledge shared across input samples to improve the model's generalization. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art LL models, especially in handling unseen tasks. We release the code and data at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana.
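To make the key-vector idea concrete, below is a minimal sketch of how key-based prompt retrieval in a hierarchical prompt pool might look. This is an illustration under stated assumptions, not the authors' implementation (see the linked repository for the actual code); all class names, shapes, and the hard argmax selection rule are hypothetical simplifications.

```python
import torch
import torch.nn.functional as F

class HierarchicalPromptPool(torch.nn.Module):
    """Illustrative prompt pool: task-level prompts retrieved via learned
    key vectors, plus instance-level prompts shared across all samples.
    Names and dimensions are hypothetical, not from the Diana codebase."""

    def __init__(self, num_tasks, prompt_len, dim, num_instance_prompts=4):
        super().__init__()
        # One learnable prompt and one key vector per observed task.
        self.task_prompts = torch.nn.Parameter(torch.randn(num_tasks, prompt_len, dim))
        self.task_keys = torch.nn.Parameter(torch.randn(num_tasks, dim))
        # Instance-level prompts capture knowledge shared across input samples.
        self.instance_prompts = torch.nn.Parameter(
            torch.randn(num_instance_prompts, prompt_len, dim))

    def forward(self, query):
        # query: (batch, dim) encoded representation of the input sample.
        # Match each query against every task key; the most similar key
        # selects the task-level prompt (no task identity needed at test time).
        sims = F.cosine_similarity(query.unsqueeze(1),
                                   self.task_keys.unsqueeze(0), dim=-1)
        task_idx = sims.argmax(dim=-1)              # (batch,)
        task_prompt = self.task_prompts[task_idx]   # (batch, prompt_len, dim)
        # Instance-level prompts are always prepended, independent of the task.
        inst = self.instance_prompts.reshape(1, -1, self.instance_prompts.size(-1))
        inst = inst.expand(query.size(0), -1, -1)
        # Concatenated prompts are prepended to the language model's input embeddings.
        return torch.cat([inst, task_prompt], dim=1)
```

In this sketch the argmax over key similarities stands in for the paper's retrieval mechanism: because selection depends only on the input's similarity to learned keys, the model can route a test sample without being told its task identity, and a sample from an unseen task can fall back to whichever prompts its key similarities suggest.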