Abstract
Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs has been their struggle to propagate information between distant nodes in the graph. We propose a new explanation for this problem: GNNs are susceptible to a bottleneck when aggregating messages across a long path. This bottleneck causes the over-squashing of exponentially growing information into fixed-size vectors. As a result, GNNs fail to propagate messages originating from distant nodes and perform poorly when the prediction task depends on long-range interaction. In this paper, we highlight the inherent problem of over-squashing in GNNs: we demonstrate that the bottleneck hinders popular GNNs from fitting long-range signals in the training data; we further show that GNNs that absorb incoming edges equally, such as GCN and GIN, are more susceptible to over-squashing than GAT and GGNN; finally, we show that prior work, which extensively tuned GNN models for long-range problems, suffers from over-squashing, and that breaking the bottleneck improves their state-of-the-art results without any tuning or additional weights. Our code is available at https://github.com/tech-srl/bottleneck/.
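To make the "breaking the bottleneck" idea concrete, the following is a minimal sketch, not the authors' implementation (which is in the linked repository), assuming a PyTorch Geometric-style GCN. The helper `fully_adjacent_edges`, the layer sizes, and the choice of `GCNConv` are illustrative assumptions; the point is that only the graph topology seen by the last layer changes, so no weights are added.

```python
# Illustrative sketch: a GCN whose last layer operates over a fully-adjacent graph,
# letting distant nodes exchange information directly instead of squashing it
# through long sparse message-passing paths. Assumes PyTorch Geometric.
import torch
from torch_geometric.nn import GCNConv


def fully_adjacent_edges(num_nodes: int) -> torch.Tensor:
    # Edge index connecting every node to every other node (including self-loops),
    # shaped [2, num_nodes ** 2] as PyTorch Geometric layers expect.
    row = torch.arange(num_nodes).repeat_interleave(num_nodes)
    col = torch.arange(num_nodes).repeat(num_nodes)
    return torch.stack([row, col], dim=0)


class GCNWithFullyAdjacentLastLayer(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)    # regular sparse layers
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.last = GCNConv(hidden_dim, out_dim)    # same layer type, denser graph

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        # Only the topology changes here: no additional parameters are introduced,
        # in line with "without any tuning or additional weights".
        full_edges = fully_adjacent_edges(x.size(0))
        return self.last(x, full_edges)
```

In this sketch, messages from distant nodes reach each other in a single hop through the final layer, which is one way to avoid compressing an exponentially growing receptive field into fixed-size node vectors.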
Original language | English |
---|---|
State | Published - 2021 |
Externally published | Yes |
Event | 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online (Duration: 3 May 2021 → 7 May 2021) |
Conference
Conference | 9th International Conference on Learning Representations, ICLR 2021 |
---|---|
City | Virtual, Online |
Period | 3/05/21 → 7/05/21 |
Bibliographical note
Publisher Copyright: © 2021 ICLR 2021 - 9th International Conference on Learning Representations. All rights reserved.
ASJC Scopus subject areas
- Language and Linguistics
- Computer Science Applications
- Education
- Linguistics and Language