
A Survey of Generative Categories and Techniques in Multimodal Large Language Models

Source: arXiv
Abstract

Multimodal Large Language Models (MLLMs) have rapidly evolved beyond text generation and now span diverse output modalities, including images, music, video, human motion, and 3D objects, by integrating language with other sensory modalities under unified architectures. This survey categorizes six primary generative modalities and examines how foundational techniques, namely Self-Supervised Learning (SSL), Mixture of Experts (MoE), Reinforcement Learning from Human Feedback (RLHF), and Chain-of-Thought (CoT) prompting, enable cross-modal capabilities. We analyze key models, architectural trends, and emergent cross-modal synergies, highlighting transferable techniques alongside unresolved challenges. Architectural innovations such as transformers and diffusion models underpin this convergence, enabling cross-modal transfer and modular specialization. We identify emerging patterns of synergy and open challenges in evaluation, modularity, and structured reasoning. The survey offers a unified perspective on MLLM development and identifies critical paths toward more general-purpose, adaptive, and interpretable multimodal systems.
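To make the "modular specialization" that the abstract attributes to MoE concrete, the following is a minimal illustrative sketch of a top-k gated Mixture-of-Experts layer, assuming PyTorch. The class name, dimensions, and naive routing loop are our own simplifications for exposition, not an implementation from the paper or from any surveyed model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Illustrative top-k gated MoE layer: a learned gate routes each
    token to k of n experts, so capacity grows without every input
    activating every expert."""

    def __init__(self, d_model: int = 64, n_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # The gate scores every expert for every token.
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.gate(x)                                # (B, S, E)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)  # (B, S, k)
        weights = F.softmax(topk_scores, dim=-1)             # renormalize over the chosen k
        out = torch.zeros_like(x)
        # Dense dispatch loop: fine for a sketch, not for production.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = (topk_idx[..., slot] == e)            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = TopKMoE()
    tokens = torch.randn(2, 8, 64)
    print(layer(tokens).shape)  # torch.Size([2, 8, 64])

In a real sparse implementation only the k selected experts run per token; the dense loop above trades that efficiency for readability.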

Longzhen Han, Awes Mubarak, Almas Baimagambetov, Nikolaos Polatidis, Thar Baker

Computing Technology, Computer Technology

Longzhen Han, Awes Mubarak, Almas Baimagambetov, Nikolaos Polatidis, Thar Baker. A Survey of Generative Categories and Techniques in Multimodal Large Language Models [EB/OL]. (2025-05-29) [2025-07-16]. https://arxiv.org/abs/2506.10016.
