Can Graph Neural Networks Go Deeper Without Over-Smoothing? Yes, With a Randomized Path Exploration!

Article Type

Research Article

Publication Title

IEEE Transactions on Emerging Topics in Computational Intelligence

Abstract

Graph Neural Networks (GNNs) have emerged as one of the most powerful approaches for learning on graph-structured data, yet they are mostly restricted to shallow architectures because node features tend to become indistinguishable when multiple layers are stacked, a phenomenon known as over-smoothing. This paper identifies two core properties of existing aggregation approaches that may act as primary causes of over-smoothing: recursiveness and aggregation from higher- to lower-order neighborhoods. We therefore address the over-smoothing issue by proposing a novel aggregation strategy that is orthogonal to existing approaches. In essence, the proposed strategy combines features from lower- to higher-order neighborhoods in a non-recursive way by employing a randomized path exploration approach. The efficacy of our aggregation method is verified through an extensive comparative study on benchmark datasets against state-of-the-art techniques on semi-supervised and fully-supervised learning tasks.
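The abstract describes a non-recursive aggregation that combines features from lower- to higher-order neighborhoods via randomized path exploration. The sketch below is a minimal, hypothetical illustration of that general idea in plain NumPy; the function names (`sample_paths`, `random_path_aggregate`), the 1/(k+1) hop weighting, and the sampling scheme are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: sample random paths of bounded length and combine
# the features encountered along them in a single, non-recursive pass, instead
# of stacking recursive message-passing layers. Not the paper's algorithm.
import numpy as np

def sample_paths(adj_list, start, length, rng):
    """Sample one random path of at most `length` hops starting at `start`."""
    path = [start]
    node = start
    for _ in range(length):
        nbrs = adj_list[node]
        if not nbrs:                      # dead end: stop early
            break
        node = rng.choice(nbrs)
        path.append(node)
    return path

def random_path_aggregate(X, adj_list, max_hop=4, num_paths=10, seed=0):
    """
    Non-recursive aggregation: for each node, average the features collected
    along randomly explored paths, with hop k weighted by 1/(k+1) so that
    lower-order neighborhoods contribute more than higher-order ones
    (an assumed weighting, chosen for illustration).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    out = np.zeros_like(X)
    for v in range(n):
        acc = np.zeros(d)
        norm = 0.0
        for _ in range(num_paths):
            path = sample_paths(adj_list, v, max_hop, rng)
            for k, u in enumerate(path):  # k = hop distance along the path
                w = 1.0 / (k + 1)
                acc += w * X[u]
                norm += w
        out[v] = acc / max(norm, 1e-12)
    return out

# Tiny usage example on a 4-node path graph with one-hot features.
if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    X = np.eye(4)
    H = random_path_aggregate(X, adj, max_hop=3, num_paths=20)
    print(H.round(2))
```

Because the aggregation is performed once per node rather than by repeated layer-wise averaging, each node's representation retains a dominant contribution from its own and nearby features, which is the intuition behind avoiding over-smoothing in deeper receptive fields.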

First Page

1595

Last Page

1604

DOI

https://doi.org/10.1109/TETCI.2023.3249255

Publication Date

10-1-2023
