
Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces

Source: arXiv
Abstract

Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. In contrast, the joint generation of multimodal data with diffusion models is still in the early stages of exploration. Existing approaches rely heavily on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This places heavy demands on the accuracy of the encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing an innovative decoupled noise schedule for each modality, we enable unconditional and modality-conditioned generation simultaneously within a single model. We empirically validate our approach on text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance.
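To make the decoupled-noise-schedule idea concrete, the following is a minimal sketch, not taken from the paper: it assumes one continuous modality (e.g. image latents) and one discrete modality (e.g. token ids), and uses a cosine Gaussian schedule for the former and token masking for the latter. The function names, schedule forms, and the `mask_id` parameter are illustrative assumptions; the point is only that each modality carries its own diffusion time, and setting one modality's time to zero turns joint training into modality-conditioned generation.

```python
import torch

def sample_times(batch_size, conditional_on=None):
    """Draw an independent diffusion time for each modality (illustrative sketch).

    Independent times train the unconditional joint model; pinning one
    modality's time to 0 keeps that modality clean, so the same network
    can be queried for modality-conditioned generation.
    """
    t_x = torch.rand(batch_size)  # time for the continuous modality
    t_y = torch.rand(batch_size)  # time for the discrete modality
    if conditional_on == "x":     # condition on clean x, denoise y
        t_x = torch.zeros(batch_size)
    elif conditional_on == "y":   # condition on clean y, denoise x
        t_y = torch.zeros(batch_size)
    return t_x, t_y

def corrupt(x, y, t_x, t_y, mask_id):
    """Apply each modality's own forward process at its own time (assumed forms)."""
    # Continuous modality: cosine Gaussian corruption at time t_x.
    alpha = torch.cos(0.5 * torch.pi * t_x).view(-1, 1)
    sigma = torch.sin(0.5 * torch.pi * t_x).view(-1, 1)
    x_t = alpha * x + sigma * torch.randn_like(x)
    # Discrete modality: mask each token independently with probability t_y.
    drop = torch.rand_like(y, dtype=torch.float) < t_y.view(-1, 1)
    y_t = torch.where(drop, torch.full_like(y, mask_id), y)
    return x_t, y_t
```

Under these assumptions, a single denoiser trained on `(x_t, y_t, t_x, t_y)` pairs with independently sampled times covers both the unconditional joint task and, at sampling time, text-to-image or image-to-text style conditioning by fixing one of the two times to zero.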

Kevin Rojas, Yuchen Zhu, Sichen Zhu, Felix X.-F. Ye, Molei Tao

Subject: Computing Technology; Computer Technology

Kevin Rojas, Yuchen Zhu, Sichen Zhu, Felix X.-F. Ye, Molei Tao. Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces [EB/OL]. (2025-06-09) [2025-07-09]. https://arxiv.org/abs/2506.07903.