Towards Deeper GCNs: Alleviating Over-smoothing via Iterative Training and Fine-tuning

Source: arXiv
Abstract

Graph Convolutional Networks (GCNs) suffer from severe performance degradation in deep architectures due to over-smoothing. While existing studies primarily attribute over-smoothing to repeated applications of graph Laplacian operators, our empirical analysis reveals a critical yet overlooked factor: trainable linear transformations in GCNs significantly exacerbate feature collapse, even at moderate depths (e.g., 8 layers). In contrast, Simplified Graph Convolution (SGC), which removes these transformations, maintains stable feature diversity up to 32 layers, highlighting the dual role of linear transformations in facilitating expressive power and inducing over-smoothing. However, completely removing linear transformations weakens the model's expressive capacity. To address this trade-off, we propose Layer-wise Gradual Training (LGT), a novel training strategy that progressively builds deep GCNs while preserving their expressiveness. LGT integrates three complementary components: (1) layer-wise training to stabilize optimization from shallow to deep layers, (2) low-rank adaptation to fine-tune shallow layers and accelerate training, and (3) identity initialization to ensure smooth integration of new layers and accelerate convergence. Extensive experiments on benchmark datasets demonstrate that LGT achieves state-of-the-art performance on vanilla GCN, significantly improving accuracy even in 32-layer settings. Moreover, as a training method, LGT can be seamlessly combined with existing methods such as PairNorm and ContraNorm, further enhancing their performance in deeper networks. LGT offers a general, architecture-agnostic training framework for scalable deep GCNs. The code is available at https://github.com/jfklasdfj/LGT_GCN.
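The abstract describes LGT only at a high level. The following is a minimal PyTorch sketch of how its three stated components (layer-wise growth, low-rank adaptation of shallow layers, and identity initialization of new layers) might fit together. All names (SimpleGCNLayer, LowRankAdapter, grow_and_train) and hyperparameters are hypothetical illustrations inferred from the abstract, not the authors' released implementation; it assumes node features are already projected to the hidden width and that a_hat is the symmetrically normalized adjacency with self-loops.

```python
# Hypothetical sketch of Layer-wise Gradual Training (LGT), based only on the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), with W initialized near identity."""

    def __init__(self, dim, identity_init=True):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        if identity_init:
            # (3) identity initialization: a new layer starts close to a no-op,
            # so inserting it barely perturbs the already-trained network.
            with torch.no_grad():
                self.weight.copy_(torch.eye(dim) + 1e-3 * torch.randn(dim, dim))
        else:
            nn.init.xavier_uniform_(self.weight)

    def forward(self, a_hat, h):
        return F.relu(a_hat @ h @ self.weight)


class LowRankAdapter(nn.Module):
    """(2) low-rank adaptation: a frozen shallow layer plus a trainable rank-r update."""

    def __init__(self, layer, rank=4):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():
            p.requires_grad_(False)  # freeze the original weights of the shallow layer
        dim = layer.weight.shape[0]
        self.lora_a = nn.Parameter(torch.randn(dim, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, dim))

    def forward(self, a_hat, h):
        w = self.layer.weight + self.lora_a @ self.lora_b
        return F.relu(a_hat @ h @ w)


def grow_and_train(a_hat, x, y, train_mask, target_depth=32, dim=64,
                   epochs_per_stage=50, lr=1e-2):
    """(1) layer-wise training: add one layer per stage, then fine-tune the stack."""
    layers = nn.ModuleList()
    head = nn.Linear(dim, int(y.max()) + 1)  # classification head over node labels
    for depth in range(1, target_depth + 1):
        # wrap existing (shallow) layers with low-rank adapters for cheap fine-tuning
        layers = nn.ModuleList(
            [l if isinstance(l, LowRankAdapter) else LowRankAdapter(l) for l in layers]
        )
        layers.append(SimpleGCNLayer(dim))  # new deepest layer, identity-initialized
        params = [p for p in layers.parameters() if p.requires_grad] + list(head.parameters())
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(epochs_per_stage):
            h = x
            for layer in layers:
                h = layer(a_hat, h)
            loss = F.cross_entropy(head(h)[train_mask], y[train_mask])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return layers, head
```

In this reading, each stage trains the newly added layer fully while older layers are touched only through low-rank updates, which is one plausible way to realize the trade-off between expressiveness and stability that the abstract describes.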

Furong Peng, Jinzhen Gao, Xuan Lu, Kang Liu, Yifan Huo, Sheng Wang

Subject: Computing Technology, Computer Technology

Furong Peng, Jinzhen Gao, Xuan Lu, Kang Liu, Yifan Huo, Sheng Wang. Towards Deeper GCNs: Alleviating Over-smoothing via Iterative Training and Fine-tuning [EB/OL]. (2025-06-21) [2025-07-16]. https://arxiv.org/abs/2506.17576.
