Domain Incremental Lifelong Learning in an Open World
May 11, 2023
Authors: Yi Dai, Hao Lang, Yinhe Zheng, Bowen Yu, Fei Huang, Yongbin Li
cs.AI
Abstract
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain-incremental LL scenarios, since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong learning model that learns a sequence of tasks with a prompt-enhanced language model. Diana uses four types of hierarchically organized prompts to capture knowledge at different granularities. Specifically, we dedicate task-level prompts to capture task-specific knowledge, which retains high LL performance, and maintain instance-level prompts to learn knowledge shared across input samples, which improves the model's generalization. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art LL models, especially in handling unseen tasks. We release the code and data at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana.
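The abstract describes prompt key vectors that route each input to instance-level prompts within a hierarchy of prompt types. The minimal PyTorch sketch below illustrates one plausible form of such a mechanism; it is not the released implementation, and the class name, dimensions, key-matching rule, and the choice of a single shared prompt plus one reserved unseen-task slot are all illustrative assumptions.

    import torch
    import torch.nn.functional as F

    class HierarchicalPromptPool(torch.nn.Module):
        """Hypothetical sketch of hierarchically organized soft prompts:
        one prompt shared by all tasks, a per-task prompt table with an
        extra slot for unseen tasks, and an instance-level prompt pool
        retrieved by key-vector similarity."""

        def __init__(self, d_model=768, prompt_len=8, n_tasks=4,
                     pool_size=20, top_k=3):
            super().__init__()
            self.top_k = top_k
            # Coarsest granularity: a prompt shared across all tasks.
            self.general_prompt = torch.nn.Parameter(
                torch.randn(prompt_len, d_model))
            # Task-level prompts; the last row is reserved for unseen tasks.
            self.task_prompts = torch.nn.Parameter(
                torch.randn(n_tasks + 1, prompt_len, d_model))
            # Instance-level prompt pool and its learnable key vectors.
            self.pool_prompts = torch.nn.Parameter(
                torch.randn(pool_size, prompt_len, d_model))
            self.prompt_keys = torch.nn.Parameter(
                torch.randn(pool_size, d_model))

        def forward(self, query, task_id):
            """query: (d_model,) encoding of the input instance;
            task_id: int index (the reserved last index marks an
            unrecognized task)."""
            # Retrieve the top-k instance-level prompts whose keys are
            # most similar to the query vector.
            sims = F.cosine_similarity(query.unsqueeze(0),
                                       self.prompt_keys, dim=-1)
            top = sims.topk(self.top_k).indices
            instance_prompt = self.pool_prompts[top].reshape(
                -1, self.general_prompt.size(-1))
            # Concatenate coarse-to-fine; the result would be prepended
            # to the language model's input embeddings.
            return torch.cat([self.general_prompt,
                              self.task_prompts[task_id],
                              instance_prompt], dim=0)

    pool = HierarchicalPromptPool()
    query = torch.randn(768)          # e.g., a frozen encoder's pooled output
    prompt = pool(query, task_id=-1)  # -1 selects the reserved unseen-task slot
    print(prompt.shape)               # torch.Size([40, 768]) = (1 + 1 + top_k) * prompt_len

Because pool entries are chosen by key similarity rather than by task identity, inputs from different tasks that are close in the query space can reuse the same instance-level prompts, which is one way the key vectors can facilitate knowledge sharing across tasks.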