Towards LLM-Centric Multimodal Fusion: A Survey on Integration Strategies and Techniques

Source: arXiv
Abstract

The rapid progress of Multimodal Large Language Models (MLLMs) has transformed the AI landscape. These models combine pre-trained LLMs with various modality encoders, an integration that requires a systematic understanding of how different modalities connect to the language backbone. This survey presents an LLM-centric analysis of current approaches, examining methods for transforming and aligning diverse modal inputs into the language embedding space and thereby addressing a significant gap in the existing literature. We propose a classification framework for MLLMs based on three key dimensions. First, we examine architectural strategies for modality integration, covering both the specific integration mechanisms and the level at which fusion occurs. Second, we categorize representation learning techniques as producing either joint or coordinated representations. Third, we analyze training paradigms, including training strategies and objective functions. By examining 125 MLLMs developed between 2021 and 2025, we identify emerging patterns in the field. Our taxonomy provides researchers with a structured overview of current integration techniques, and these insights aim to guide the development of more robust multimodal integration strategies for future models built on pre-trained foundations.
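
As a concrete illustration of the LLM-centric integration pattern the abstract describes, below is a minimal PyTorch sketch (not code from the paper; names such as ModalityConnector are illustrative placeholders) of one common integration mechanism: projecting features from a frozen modality encoder into the LLM's token-embedding space and fusing them with text embeddings at the input level.

    # Minimal sketch of input-level fusion via a projection connector.
    # Assumes a frozen modality encoder upstream and a pre-trained LLM
    # downstream; dimensions below are arbitrary examples.
    import torch
    import torch.nn as nn

    class ModalityConnector(nn.Module):
        """Maps encoder features into the LLM's token-embedding space."""
        def __init__(self, encoder_dim: int, llm_dim: int):
            super().__init__()
            # A linear or shallow-MLP projector is one common
            # integration mechanism among those the survey classifies.
            self.proj = nn.Sequential(
                nn.Linear(encoder_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, modality_feats: torch.Tensor) -> torch.Tensor:
            # modality_feats: (batch, num_patches, encoder_dim)
            # from a frozen modality encoder
            return self.proj(modality_feats)  # (batch, num_patches, llm_dim)

    # Input-level fusion: projected "modality tokens" are concatenated
    # with text embeddings before the LLM's transformer layers.
    batch, patches, enc_dim, llm_dim = 2, 16, 1024, 4096
    connector = ModalityConnector(enc_dim, llm_dim)
    modality_tokens = connector(torch.randn(batch, patches, enc_dim))
    text_embeds = torch.randn(batch, 8, llm_dim)  # from the LLM's embedding table
    llm_inputs = torch.cat([modality_tokens, text_embeds], dim=1)
    print(llm_inputs.shape)  # torch.Size([2, 24, 4096])

Variants of this pattern differ along exactly the dimensions the survey classifies: the connector architecture (the integration mechanism), where in the LLM the fused tokens enter (the fusion level), and how the connector is trained.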

Jisu An, Junseok Lee, Jeoungeun Lee, Yongseok Son

Subject: Computing Technology; Computer Technology

Jisu An, Junseok Lee, Jeoungeun Lee, Yongseok Son. Towards LLM-Centric Multimodal Fusion: A Survey on Integration Strategies and Techniques [EB/OL]. (2025-06-05) [2025-07-18]. https://arxiv.org/abs/2506.04788.