Towards Multi-modal Graph Large Language Model
Multi-modal graphs, which integrate diverse multi-modal features and relations, are ubiquitous in real-world applications. However, existing multi-modal graph learning methods are typically trained from scratch for specific graph data and tasks, failing to generalize across various multi-modal graph data and tasks. To bridge this gap, we explore the potential of Multi-modal Graph Large Language Models (MG-LLM) to unify and generalize across diverse multi-modal graph data and tasks. We propose a unified framework of multi-modal graph data, task, and model, discovering the inherent multi-granularity and multi-scale characteristics in multi-modal graphs. Specifically, we present five key desired characteristics for MG-LLM: 1) unified space for multi-modal structures and attributes, 2) capability of handling diverse multi-modal graph tasks, 3) multi-modal graph in-context learning, 4) multi-modal graph interaction with natural language, and 5) multi-modal graph reasoning. We then elaborate on the key challenges, review related works, and highlight promising future research directions towards realizing these ambitious characteristics. Finally, we summarize existing multi-modal graph datasets pertinent for model training. We believe this paper can contribute to the ongoing advancement of the research towards MG-LLM for generalization across multi-modal graph data and tasks.
Xin Wang, Zeyang Zhang, Linxin Xiao, Haibo Chen, Chendi Ge, Wenwu Zhu
Computing Technology, Computer Technology
Xin Wang, Zeyang Zhang, Linxin Xiao, Haibo Chen, Chendi Ge, Wenwu Zhu. Towards Multi-modal Graph Large Language Model [EB/OL]. (2025-06-11) [2025-06-21]. https://arxiv.org/abs/2506.09738.