
X-Node: Self-Explanation is All We Need

August 14, 2025
Authors: Prajit Sengupta, Islem Rekik
cs.AI

Abstract

Graph neural networks (GNNs) have achieved state-of-the-art results in computer vision and medical image classification tasks by capturing structural dependencies across data instances. However, their decision-making remains largely opaque, limiting their trustworthiness in high-stakes clinical applications where interpretability is essential. Existing explainability techniques for GNNs are typically post-hoc and global, offering limited insight into individual node decisions or local reasoning. We introduce X-Node, a self-explaining GNN framework in which each node generates its own explanation as part of the prediction process. For every node, we construct a structured context vector encoding interpretable cues such as degree, centrality, clustering, feature saliency, and label agreement within its local topology. A lightweight Reasoner module maps this context into a compact explanation vector, which serves three purposes: (1) reconstructing the node's latent embedding via a decoder to enforce faithfulness, (2) generating a natural language explanation using a pre-trained LLM (e.g., Grok or Gemini), and (3) guiding the GNN itself via a "text-injection" mechanism that feeds explanations back into the message-passing pipeline. We evaluate X-Node on two graph datasets derived from MedMNIST and MorphoMNIST, integrating it with GCN, GAT, and GIN backbones. Our results show that X-Node maintains competitive classification accuracy while producing faithful, per-node explanations. Repository: https://github.com/basiralab/X-Node.
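
To make the pipeline described above concrete, here is a minimal sketch of the per-node context vector and the Reasoner module, based only on the abstract. The module names, dimensions, the feature-saliency proxy, and the injection-by-concatenation step are illustrative assumptions, not the authors' implementation (see the repository for the actual code).

```python
# Sketch of X-Node's structured context vector and Reasoner module.
# Assumptions: nodes are indexed 0..N-1, features x are a (N, F) tensor,
# labels is a (N,) tensor; dimensions and the saliency proxy are illustrative.
import networkx as nx
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_context_vectors(g: nx.Graph, x: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Encode interpretable cues per node: degree, centrality, clustering,
    a feature-saliency proxy, and label agreement within the local topology."""
    deg = nx.degree_centrality(g)
    cen = nx.betweenness_centrality(g)
    clu = nx.clustering(g)
    ctx = []
    for v in g.nodes():
        nbrs = list(g.neighbors(v))
        # Label agreement: fraction of neighbors sharing this node's label.
        agree = sum(int(labels[u] == labels[v]) for u in nbrs) / len(nbrs) if nbrs else 0.0
        # Feature-saliency proxy (assumption): L2 norm of the node's raw features.
        sal = x[v].norm().item()
        ctx.append([deg[v], cen[v], clu[v], sal, agree])
    return torch.tensor(ctx, dtype=torch.float32)


class Reasoner(nn.Module):
    """Maps a context vector to a compact explanation vector, and reconstructs
    the node's latent GNN embedding from it as a faithfulness signal."""

    def __init__(self, ctx_dim: int = 5, expl_dim: int = 16, latent_dim: int = 64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(ctx_dim, 32), nn.ReLU(), nn.Linear(32, expl_dim))
        self.decode = nn.Linear(expl_dim, latent_dim)  # decoder for reconstruction

    def forward(self, context: torch.Tensor, node_embedding: torch.Tensor):
        explanation = self.encode(context)
        recon_loss = F.mse_loss(self.decode(explanation), node_embedding)
        # In this sketch, the explanation vector would be concatenated back onto
        # node features before the next message-passing layer (the "text-injection"
        # step in the paper feeds LLM-generated explanations instead).
        return explanation, recon_loss
```

In use, the explanation vector can additionally be passed to a pre-trained LLM prompt to produce the natural-language rationale, while the reconstruction loss is added to the classification objective to keep explanations faithful to the node embeddings.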