Graphon-Explainer: Generating Model-Level Explanations for Graph Neural Networks using Graphons
Article Type
Research Article
Publication Title
Transactions on Machine Learning Research
Abstract
Graph Neural Networks (GNNs) form the backbone of several state-of-the-art methods for performing machine learning tasks on graphs. As GNNs find application across diverse real-world scenarios, ensuring their interpretability and reliability becomes imperative. In this paper, we propose Graphon-Explainer, a model-level explanation method that elucidates the high-level decision-making process of a GNN. Graphon-Explainer learns a graphon (a symmetric, continuous function that can be viewed as the weighted adjacency matrix of an infinitely large graph) to approximate the distribution of a target class as learned by the GNN. The learned graphon then acts as a generative model, yielding distinct graph motifs that the GNN deems significant for the target class. Existing model-level explanation methods for GNNs are limited to explaining individual target classes; in contrast, Graphon-Explainer can also generate synthetic graphs close to the decision boundary between two target classes by interpolating the graphons of both classes, thereby helping characterize the GNN model's decision boundary. Furthermore, Graphon-Explainer is model-agnostic, does not rely on additional black-box models, and does not require manually specified, handcrafted constraints for explanation generation. The effectiveness of our method is validated through thorough theoretical analysis and extensive experimentation on both synthetic and real-world datasets for the task of graph classification. The results demonstrate its capability to effectively learn and generate diverse graph patterns identified by a trained GNN, thus enhancing interpretability for end-users.
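The two graphon operations the abstract relies on — sampling graphs from a graphon and interpolating between the graphons of two classes — can be illustrated with the standard W-random graph model. The sketch below is purely illustrative: the toy graphons, function names, and parameters are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample an n-node graph from graphon W via the W-random model:
    draw latent positions u_i ~ Uniform(0,1) and connect nodes i, j
    independently with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])           # pairwise edge-probability matrix
    draws = rng.uniform(size=(n, n)) < P    # Bernoulli edge draws
    A = np.triu(draws, k=1)                 # upper triangle only, no self-loops
    return (A | A.T).astype(int)            # symmetrize into an adjacency matrix

def interpolate(W1, W2, t):
    """Pointwise linear interpolation between two graphons (0 <= t <= 1)."""
    return lambda x, y: (1 - t) * W1(x, y) + t * W2(x, y)

# Two toy graphons (illustrative, not from the paper):
# a two-block community structure and a uniform Erdos-Renyi-like density.
W_block = lambda x, y: np.where((x < 0.5) == (y < 0.5), 0.9, 0.1)
W_flat = lambda x, y: np.full(np.broadcast(x, y).shape, 0.5)

rng = np.random.default_rng(0)
# A graph "between" the two classes, sampled from the midpoint graphon.
A = sample_graph(interpolate(W_block, W_flat, 0.5), n=50, rng=rng)
```

Sampling from the midpoint graphon yields graphs that blend the structural patterns of both endpoint graphons, which is the mechanism the abstract describes for probing a GNN's decision boundary.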
Publication Date
1-1-2024
Recommended Citation
Saha, Sayan and Bandyopadhyay, Sanghamitra, "Graphon-Explainer: Generating Model-Level Explanations for Graph Neural Networks using Graphons" (2024). Journal Articles. 4815.
https://digitalcommons.isical.ac.in/journal-articles/4815