

X-Node: Self-Explanation is All We Need

August 14, 2025
Authors: Prajit Sengupta, Islem Rekik
cs.AI

Abstract

Graph neural networks (GNNs) have achieved state-of-the-art results in computer vision and medical image classification tasks by capturing structural dependencies across data instances. However, their decision-making remains largely opaque, limiting their trustworthiness in high-stakes clinical applications where interpretability is essential. Existing explainability techniques for GNNs are typically post-hoc and global, offering limited insight into individual node decisions or local reasoning. We introduce X-Node, a self-explaining GNN framework in which each node generates its own explanation as part of the prediction process. For every node, we construct a structured context vector encoding interpretable cues such as degree, centrality, clustering, feature saliency, and label agreement within its local topology. A lightweight Reasoner module maps this context into a compact explanation vector, which serves three purposes: (1) reconstructing the node's latent embedding via a decoder to enforce faithfulness, (2) generating a natural language explanation using a pre-trained LLM (e.g., Grok or Gemini), and (3) guiding the GNN itself via a "text-injection" mechanism that feeds explanations back into the message-passing pipeline. We evaluate X-Node on two graph datasets derived from MedMNIST and MorphoMNIST, integrating it with GCN, GAT, and GIN backbones. Our results show that X-Node maintains competitive classification accuracy while producing faithful, per-node explanations. Repository: https://github.com/basiralab/X-Node.
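Below is a minimal sketch of the per-node pipeline the abstract describes: a structured context vector (degree, centrality, clustering, feature saliency, label agreement) is mapped by a lightweight Reasoner to a compact explanation vector, and a decoder reconstructs the node's latent embedding to enforce faithfulness. The module names follow the abstract, but the exact feature set, dimensions, and loss are assumptions rather than the authors' implementation; the LLM verbalisation and "text-injection" steps are omitted.

```python
# Hedged sketch of X-Node's context -> Reasoner -> explanation step.
# Feature choices, dimensions, and the MSE faithfulness loss are assumptions.
import networkx as nx
import torch
import torch.nn as nn


def node_context_vector(G: nx.Graph, node, feat_saliency: float, label_agreement: float):
    """Structured context: degree, centrality, clustering, saliency, label agreement."""
    return torch.tensor([
        G.degree[node],
        nx.degree_centrality(G)[node],
        nx.clustering(G, node),
        feat_saliency,      # e.g. gradient-based saliency of the node's features (assumed)
        label_agreement,    # e.g. fraction of neighbours sharing the node's label (assumed)
    ], dtype=torch.float)


class Reasoner(nn.Module):
    """Lightweight module mapping a context vector to a compact explanation vector."""

    def __init__(self, ctx_dim=5, expl_dim=16, hidden_dim=64, latent_dim=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(ctx_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, expl_dim),
        )
        # Decoder reconstructs the node's GNN embedding to keep explanations faithful.
        self.decode = nn.Linear(expl_dim, latent_dim)

    def forward(self, ctx, node_embedding):
        expl = self.encode(ctx)
        recon = self.decode(expl)
        faithfulness_loss = nn.functional.mse_loss(recon, node_embedding)
        return expl, faithfulness_loss
```

In the full framework, the explanation vector would additionally be verbalised by a pre-trained LLM and fed back into message passing via text injection; see the repository above for the authors' implementation.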