Never Skip a Batch: Continuous Training of Temporal GNNs via Adaptive Pseudo-Supervision
Temporal Graph Networks (TGNs), while accurate, suffer significant training inefficiencies due to irregular supervision signals in dynamic graphs, which induce sparse gradient updates. We first establish theoretically that aggregating historical node interactions into pseudo-labels reduces gradient variance and thereby accelerates convergence. Building on this analysis, we propose History-Averaged Labels (HAL), a method that dynamically enriches training batches with pseudo-targets derived from historical label distributions. HAL ensures continuous parameter updates without architectural modifications by converting idle computation into productive learning steps. Experiments on the Temporal Graph Benchmark (TGB) validate our findings and the assumption that user preferences change slowly: HAL accelerates TGNv2 training by up to 15x while maintaining competitive performance. This work thus offers an efficient, lightweight, architecture-agnostic, and theoretically motivated solution to label sparsity in temporal graph learning.
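The abstract describes HAL at a high level: maintain a running aggregate of each node's past label observations and use it as a pseudo-target whenever a batch lacks ground-truth supervision. A minimal sketch of that idea follows; the class and parameter names (HistoryAveragedLabels, decay, pseudo_targets) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a history-averaged pseudo-label store, assuming an
# exponential-decay aggregation of past label distributions per node.
import torch


class HistoryAveragedLabels:
    """Keeps a running average of observed label distributions per node and
    serves them as pseudo-targets for batches without ground-truth labels."""

    def __init__(self, num_nodes: int, num_classes: int, decay: float = 0.9):
        self.decay = decay  # assumed decay weight; the paper may aggregate differently
        self.history = torch.zeros(num_nodes, num_classes)

    def update(self, node_ids: torch.Tensor, labels: torch.Tensor) -> None:
        # Blend newly observed (one-hot or soft) labels into the stored history.
        self.history[node_ids] = (
            self.decay * self.history[node_ids] + (1.0 - self.decay) * labels
        )

    def pseudo_targets(self, node_ids: torch.Tensor) -> torch.Tensor:
        # Normalize histories to valid distributions; unseen nodes fall back to uniform.
        h = self.history[node_ids]
        mass = h.sum(dim=-1, keepdim=True)
        uniform = torch.full_like(h, 1.0 / h.shape[-1])
        return torch.where(mass > 0, h / mass.clamp(min=1e-8), uniform)
```

Under this reading, a training loop would call `pseudo_targets` for batches with no labeled interactions so that every batch still produces a gradient step, and `update` whenever true labels arrive.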
Alexander Panyshev, Dmitry Vinichenko, Oleg Travkin, Roman Alferov, Alexey Zaytsev
Computing Technology, Computer Technology
Alexander Panyshev, Dmitry Vinichenko, Oleg Travkin, Roman Alferov, Alexey Zaytsev. Never Skip a Batch: Continuous Training of Temporal GNNs via Adaptive Pseudo-Supervision [EB/OL]. (2025-05-18) [2025-06-09]. https://arxiv.org/abs/2505.12526.