Dual-Alignment Pre-training for Cross-lingual Sentence Embedding
May 16, 2023
Authors: Ziheng Li, Shaohan Huang, Zihan Zhang, Zhi-Hong Deng, Qiang Lou, Haizhen Huang, Jian Jiao, Furu Wei, Weiwei Deng, Qi Zhang
cs.AI
Abstract
Recent studies have shown that dual encoder models trained with the
sentence-level translation ranking task are an effective approach to
cross-lingual sentence embedding. However, our research indicates that token-level alignment
is also crucial in multilingual scenarios, which has not been fully explored
previously. Based on our findings, we propose a dual-alignment pre-training
(DAP) framework for cross-lingual sentence embedding that incorporates both
sentence-level and token-level alignment. To achieve this, we introduce a novel
representation translation learning (RTL) task, in which the model learns to use
the contextualized token representations of one side to reconstruct their
translation counterparts. This reconstruction objective encourages the model to
embed translation information into the token representations. Compared to other
token-level alignment methods such as translation language modeling, RTL is
more suitable for dual encoder architectures and more computationally efficient.
Extensive experiments on three sentence-level cross-lingual benchmarks
demonstrate that our approach can significantly improve sentence embedding. Our
code is available at https://github.com/ChillingDream/DAP.
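The abstract describes the dual-alignment objective only at a high level. The sketch below illustrates, under stated assumptions, how such an objective might be wired up: an in-batch translation ranking (contrastive) loss over pooled sentence embeddings, plus an RTL-style loss that reconstructs target-side token representations from source-side ones. The class and attribute names (`DualAlignmentSketch`, `rtl_head`), the mean-pooling choice, the MSE reconstruction loss, and the small transformer reconstruction head are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch only (assumed names, shapes, and losses); not the DAP authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAlignmentSketch(nn.Module):
    """Sentence-level translation ranking loss + token-level RTL-style loss."""

    def __init__(self, encoder, hidden_size=768, rtl_layers=2, temperature=0.05):
        super().__init__()
        self.encoder = encoder  # assumed: a HuggingFace-style multilingual encoder
        self.temperature = temperature
        # Small transformer head that reconstructs target token representations
        # from source token representations; a stand-in for the paper's RTL head.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8, batch_first=True
        )
        self.rtl_head = nn.TransformerEncoder(layer, num_layers=rtl_layers)

    def forward(self, src_inputs, tgt_inputs):
        # Contextualized token representations for both sides of a translation pair.
        src_tok = self.encoder(**src_inputs).last_hidden_state  # (B, Ls, H)
        tgt_tok = self.encoder(**tgt_inputs).last_hidden_state  # (B, Lt, H)

        # Sentence embeddings via mean pooling (one common pooling choice).
        src_sent = src_tok.mean(dim=1)
        tgt_sent = tgt_tok.mean(dim=1)

        # Sentence-level alignment: in-batch translation ranking (contrastive) loss.
        sim = F.normalize(src_sent, dim=-1) @ F.normalize(tgt_sent, dim=-1).T
        labels = torch.arange(sim.size(0), device=sim.device)
        ranking_loss = F.cross_entropy(sim / self.temperature, labels)

        # Token-level alignment: reconstruct target token representations using
        # only source-side token representations. Sequences are truncated to a
        # common length here purely to keep the sketch simple.
        length = min(src_tok.size(1), tgt_tok.size(1))
        recon = self.rtl_head(src_tok[:, :length])
        rtl_loss = F.mse_loss(recon, tgt_tok[:, :length].detach())

        return ranking_loss + rtl_loss
```

Because the reconstruction head sees only source-side representations, driving the RTL-style loss down forces translation-relevant information into those token representations, which is the intuition the abstract states.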