Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex
Text-attributed graphs (TAGs) present unique challenges in representation learning by requiring models to capture both the semantic richness of node-associated texts and the structural dependencies of the graph. While graph neural networks (GNNs) excel at modeling topological information, they lack the capacity to process unstructured text. Conversely, large language models (LLMs) are proficient in text understanding but are typically unaware of graph structure. In this work, we propose BiGTex (Bidirectional Graph Text), a novel architecture that tightly integrates GNNs and LLMs through stacked Graph-Text Fusion Units. Each unit allows mutual attention between textual and structural representations, enabling information to flow in both directions: text influences structure, and structure guides textual interpretation. The architecture is trained with parameter-efficient fine-tuning (LoRA), keeping the LLM frozen while adapting to task-specific signals. Extensive experiments on five benchmark datasets demonstrate that BiGTex achieves state-of-the-art performance in node classification and generalizes effectively to link prediction. An ablation study further highlights the importance of soft prompting and bidirectional attention in the model's success.
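The abstract describes each Graph-Text Fusion Unit as applying mutual attention between textual and structural representations. The following is a minimal, hypothetical PyTorch sketch of such a unit, assuming standard multi-head cross-attention with residual connections; the module names, dimensions, and layer choices are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of one Graph-Text Fusion Unit: bidirectional
# cross-attention between text-token states and GNN node states.
import torch
import torch.nn as nn

class GraphTextFusionUnit(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Text attends to structure: queries from text states, keys/values from node states.
        self.text_to_graph = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Structure attends to text: queries from node states, keys/values from text states.
        self.graph_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_graph = nn.LayerNorm(dim)

    def forward(self, text_states: torch.Tensor, node_states: torch.Tensor):
        # text_states: (batch, num_tokens, dim)  e.g., embeddings from a frozen LLM
        # node_states: (batch, num_nodes, dim)   e.g., embeddings from a GNN
        t2g, _ = self.text_to_graph(text_states, node_states, node_states)
        g2t, _ = self.graph_to_text(node_states, text_states, text_states)
        # Residual connections preserve each stream's original signal.
        text_states = self.norm_text(text_states + t2g)
        node_states = self.norm_graph(node_states + g2t)
        return text_states, node_states

# Usage sketch: stack several fusion units, as the abstract suggests.
if __name__ == "__main__":
    units = nn.ModuleList(GraphTextFusionUnit() for _ in range(3))
    text = torch.randn(2, 32, 256)   # 2 graphs, 32 text tokens each
    nodes = torch.randn(2, 10, 256)  # 2 graphs, 10 nodes each
    for unit in units:
        text, nodes = unit(text, nodes)
    print(text.shape, nodes.shape)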
Azadeh Beiranvand, Seyed Mehdi Vahidipour
Computing technology; computer technology
Azadeh Beiranvand, Seyed Mehdi Vahidipour. Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex [EB/OL]. (2025-04-16) [2025-04-27]. https://arxiv.org/abs/2504.12474.