Transformers meet Neural Algorithmic Reasoners
June 13, 2024
Authors: Wilfried Bounsi, Borja Ibarz, Andrew Dudzik, Jessica B. Hamrick, Larisa Markeeva, Alex Vitvitskyi, Razvan Pascanu, Petar Veličković
cs.AI
Abstract
Transformers have revolutionized machine learning with their simple yet
effective architecture. Pre-training Transformers on massive text datasets from
the Internet has led to unmatched generalization for natural language
understanding (NLU) tasks. However, such language models remain fragile when
tasked with algorithmic forms of reasoning, where computations must be precise
and robust. To address this limitation, we propose a novel approach that
combines the Transformer's language understanding with the robustness of graph
neural network (GNN)-based neural algorithmic reasoners (NARs). Such NARs
proved effective as generic solvers for algorithmic tasks, when specified in
graph form. To make their embeddings accessible to a Transformer, we propose a
hybrid architecture with a two-phase training procedure, allowing the tokens in
the language model to cross-attend to the node embeddings from the NAR. We
evaluate our resulting TransNAR model on CLRS-Text, the text-based version of
the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only
models for algorithmic reasoning, both in and out of distribution.
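To make the hybrid mechanism concrete, the snippet below is a minimal sketch of how tokens from a language model could cross-attend to node embeddings produced by a GNN-based NAR, as the abstract describes. It assumes a PyTorch-style setup; the module name, dimensions, and the residual-plus-normalization details are illustrative assumptions, not the authors' TransNAR implementation.

# Illustrative sketch only: Transformer token embeddings act as queries
# against NAR node embeddings (keys/values). Names and shapes are assumed.
import torch
import torch.nn as nn

class TokenToNodeCrossAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        # Standard multi-head attention used here as cross-attention.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_emb: torch.Tensor, node_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, num_tokens, d_model) from the language model.
        # node_emb:  (batch, num_nodes,  d_model) from the NAR run on the graph form.
        attended, _ = self.cross_attn(query=token_emb, key=node_emb, value=node_emb)
        # Residual connection and layer norm, as in a typical Transformer block.
        return self.norm(token_emb + attended)

# Usage: text tokens attend to node embeddings; output keeps the token shape.
tokens = torch.randn(2, 16, 256)  # e.g. 16 text tokens per example
nodes = torch.randn(2, 8, 256)    # e.g. 8 graph nodes processed by the NAR
out = TokenToNodeCrossAttention(256)(tokens, nodes)
print(out.shape)  # torch.Size([2, 16, 256])

In this sketch, only the token stream is updated by the cross-attention, which matches the abstract's description of tokens reading from (rather than writing to) the NAR's node embeddings.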