National Preprint Platform

Unsupervised Prompting for Graph Neural Networks


Source: arXiv
Abstract

Prompt tuning methods for Graph Neural Networks (GNNs) have become popular to address the semantic gap between pre-training and fine-tuning steps. However, existing GNN prompting methods rely on labeled data and involve lightweight fine-tuning for downstream tasks. Meanwhile, in-context learning methods for Large Language Models (LLMs) have shown promising performance with no parameter updating and no or minimal labeled data. Inspired by these approaches, in this work, we first introduce a challenging problem setup to evaluate GNN prompting methods. This setup encourages a prompting function to enhance a pre-trained GNN's generalization to a target dataset under covariate shift without updating the GNN's parameters and with no labeled data. Next, we propose a fully unsupervised prompting method based on consistency regularization through pseudo-labeling. We use two regularization techniques to align the prompted graphs' distribution with the original data and reduce biased predictions. Through extensive experiments under our problem setting, we demonstrate that our unsupervised approach outperforms the state-of-the-art prompting methods that have access to labels.
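The core idea of the abstract, consistency regularization through pseudo-labeling combined with a regularizer against biased predictions, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's actual implementation: the function names, the confidence threshold, and the uniform-prior anti-bias term are all hypothetical stand-ins for the techniques the abstract names.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_label_consistency_loss(logits, threshold=0.9):
    """Cross-entropy of each prediction against its own hard pseudo-label,
    counted only where the model is confident (max prob >= threshold).
    The threshold value is a hypothetical choice for illustration."""
    probs = softmax(logits)
    conf = probs.max(axis=1)          # per-node confidence
    pseudo = probs.argmax(axis=1)     # hard pseudo-labels
    mask = conf >= threshold
    if not mask.any():
        return 0.0                    # no confident nodes: no signal
    ce = -np.log(probs[mask, pseudo[mask]] + 1e-12)
    return float(ce.mean())

def anti_bias_regularizer(logits):
    """KL divergence between the batch-average prediction and a uniform
    prior, penalizing collapse onto a few classes (one plausible way to
    'reduce biased predictions' as the abstract puts it)."""
    probs = softmax(logits)
    avg = probs.mean(axis=0)
    k = probs.shape[1]
    uniform = np.full(k, 1.0 / k)
    return float(np.sum(avg * np.log((avg + 1e-12) / uniform)))
```

In a full pipeline these losses would be backpropagated through a learnable prompting function applied to the input graphs, while the pre-trained GNN's parameters stay frozen, matching the problem setup described above.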

Peyman Baghershahi, Sourav Medya

Subject: Computing Technology; Computer Technology

Peyman Baghershahi, Sourav Medya. Unsupervised Prompting for Graph Neural Networks [EB/OL]. (2025-05-22) [2025-06-13]. https://arxiv.org/abs/2505.16903.
