National Preprint Platform

Understanding the Robustness of Graph Neural Networks against Adversarial Attacks

Source: arXiv

Abstract

Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial attacks, posing significant challenges to their deployment in safety-critical scenarios. This vulnerability has spurred a growing focus on designing robust GNNs. Despite this interest, current advancements have predominantly relied on empirical trial and error, resulting in a limited understanding of the robustness of GNNs against adversarial attacks. To address this issue, we conduct the first large-scale systematic study on the adversarial robustness of GNNs by considering the patterns of input graphs, the architecture of GNNs, and their model capacity, along with discussions on sensitive neurons and adversarial transferability. This work proposes a comprehensive empirical framework for analyzing the adversarial robustness of GNNs. To support the analysis of adversarial robustness in GNNs, we introduce two evaluation metrics: the confidence-based decision surface and the accuracy-based adversarial transferability rate. Through experimental analysis, we derive 11 actionable guidelines for designing robust GNNs, enabling model developers to gain deeper insights. The code of this study is available at https://github.com/star4455/GraphRE.
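The abstract names an accuracy-based adversarial transferability rate as one of the two evaluation metrics. A minimal sketch of one plausible reading of that metric follows — the fraction of adversarial examples that fool a surrogate model and also flip the target model's prediction away from the true label. This is not the authors' implementation; the function name and the exact definition are assumptions for illustration.

```python
# Hedged sketch (not the paper's code): an accuracy-based adversarial
# transferability rate, read here as: among adversarial examples that
# fooled the surrogate model, the fraction that also fool the target
# model (target prediction != true label). All names are illustrative.

def transferability_rate(target_preds_adv, true_labels, fooled_on_surrogate):
    """target_preds_adv: target model's predictions on adversarial inputs.
    true_labels: ground-truth labels for those inputs.
    fooled_on_surrogate: whether each adversarial input fooled the surrogate.
    Returns the transfer rate over surrogate-fooling examples (0.0 if none)."""
    transferred = 0
    total = 0
    for pred, label, fooled in zip(target_preds_adv, true_labels, fooled_on_surrogate):
        if fooled:
            total += 1
            if pred != label:
                transferred += 1
    return transferred / total if total else 0.0
```

For example, with four adversarial inputs of which three fooled the surrogate, the rate is the share of those three that the target also misclassifies. A higher rate indicates stronger attack transferability between the two architectures.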

Chao Wang, Lin Yuan, Shui Yu, Tao Wu, Canyixing Cui, Xingping Xian, Shaojie Qiao

Subject: Computing technology, computer technology

Chao Wang, Lin Yuan, Shui Yu, Tao Wu, Canyixing Cui, Xingping Xian, Shaojie Qiao. Understanding the Robustness of Graph Neural Networks against Adversarial Attacks [EB/OL]. (2024-06-19) [2025-07-23]. https://arxiv.org/abs/2406.13920.
