A Unified Multi-Agent Framework for Universal Multimodal Understanding and Generation
Real-world multimodal applications often require any-to-any capabilities, enabling both understanding and generation across modalities including text, image, audio, and video. However, integrating the strengths of autoregressive large language models (LLMs) for reasoning and diffusion models for high-fidelity generation remains challenging. Existing approaches rely on rigid pipelines or tightly coupled architectures, limiting flexibility and scalability. We propose MAGUS (Multi-Agent Guided Unified Multimodal System), a modular framework that unifies multimodal understanding and generation via two decoupled phases: Cognition and Deliberation. MAGUS enables symbolic multi-agent collaboration within a shared textual workspace. In the Cognition phase, three role-conditioned multimodal LLM agents (Perceiver, Planner, and Reflector) engage in collaborative dialogue to perform structured understanding and planning. The Deliberation phase incorporates a Growth-Aware Search mechanism that orchestrates LLM-based reasoning and diffusion-based generation in a mutually reinforcing manner. MAGUS supports plug-and-play extensibility, scalable any-to-any modality conversion, and semantic alignment, all without the need for joint training. Experiments across multiple benchmarks, including image, video, and audio generation, as well as cross-modal instruction following, demonstrate that MAGUS outperforms strong baselines and state-of-the-art systems. Notably, on the MME benchmark, MAGUS surpasses the powerful closed-source model GPT-4o.
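To make the Cognition phase concrete, the following minimal sketch (not the authors' code) illustrates how three role-conditioned agents might exchange messages in a shared textual workspace until a plan is approved, as described in the abstract. The call_llm helper, the role prompts, and the APPROVE convention are hypothetical placeholders for whatever multimodal LLM backend and protocol MAGUS actually uses.

from dataclasses import dataclass, field


def call_llm(role_prompt: str, workspace: str) -> str:
    """Hypothetical stand-in for a role-conditioned multimodal LLM call."""
    return f"[{role_prompt}] response to: {workspace[-80:]}"


@dataclass
class SharedWorkspace:
    """Symbolic textual workspace that all agents read from and append to."""
    messages: list = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(self.messages)

    def post(self, author: str, content: str) -> None:
        self.messages.append(f"{author}: {content}")


def cognition_phase(user_request: str, max_rounds: int = 3) -> str:
    """Run a Perceiver -> Planner -> Reflector dialogue and return the final plan."""
    ws = SharedWorkspace()
    ws.post("User", user_request)
    plan = ""
    for _ in range(max_rounds):
        ws.post("Perceiver", call_llm("describe inputs and constraints", ws.render()))
        plan = call_llm("draft a step-by-step generation plan", ws.render())
        ws.post("Planner", plan)
        critique = call_llm("check the plan; answer APPROVE or revise", ws.render())
        ws.post("Reflector", critique)
        if "APPROVE" in critique:
            break
    return plan  # handed to the Deliberation phase for search-guided generation


if __name__ == "__main__":
    print(cognition_phase("Generate a short video of a sunrise with matching audio."))

In this reading, the Deliberation phase would consume the returned plan and interleave LLM reasoning with diffusion-based generation; the sketch above only covers the collaborative planning loop.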
Jiulin Li, Ping Huang, Yexin Li, Shuo Chen, Juewen Hu, Ye Tian
Computing Technology; Computer Technology
Jiulin Li, Ping Huang, Yexin Li, Shuo Chen, Juewen Hu, Ye Tian. A Unified Multi-Agent Framework for Universal Multimodal Understanding and Generation [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2508.10494.