WeGen: A Unified Model for Interactive Multimodal Generation as We Chat
Existing multimodal generative models fall short as qualified design copilots: they often struggle to produce imaginative outputs when instructions are underspecified, or fail to maintain consistency with the provided references. In this work, we introduce WeGen, a model that unifies multimodal generation and understanding and promotes their interplay in iterative generation. It generates diverse, highly creative results for less detailed instructions, and it can progressively refine prior generations or integrate specific content from references by following the instructions in its chat with users. Throughout this process, it preserves consistency in the parts the user is already satisfied with. To this end, we curate a large-scale dataset extracted from Internet videos, containing rich object dynamics together with dynamics descriptions auto-labeled by state-of-the-art foundation models. These two kinds of information are interleaved into a single sequence, enabling WeGen to learn consistency-aware generation: the specified dynamics are generated while the consistency of unspecified content is preserved, in line with the instructions. In addition, we introduce a prompt self-rewriting mechanism to enhance generation diversity. Extensive experiments demonstrate the effectiveness of unifying multimodal understanding and generation in WeGen and show that it achieves state-of-the-art performance across various visual generation benchmarks. They also demonstrate WeGen's potential as a user-friendly design copilot. The code and models will be available at https://github.com/hzphzp/WeGen.
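To make the interleaving idea concrete, below is a minimal sketch (hypothetical code, not the authors' implementation) of how video frames and auto-labeled dynamics descriptions might be woven into a single training sequence, so a model can learn which content changes (the described dynamics) and which should stay consistent. All names here (Clip, interleave, the frame placeholders) are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    frames: List[str]    # stand-in for per-frame visual tokens/embeddings
    dynamics: List[str]  # one auto-labeled dynamics caption per frame transition

def interleave(clip: Clip) -> List[str]:
    """Interleave frames and dynamics captions: f0, d(0->1), f1, d(1->2), f2, ..."""
    assert len(clip.dynamics) == len(clip.frames) - 1
    seq: List[str] = []
    for frame, caption in zip(clip.frames, clip.dynamics):
        seq.append(frame)
        seq.append(caption)
    seq.append(clip.frames[-1])  # last frame has no outgoing transition
    return seq

if __name__ == "__main__":
    clip = Clip(
        frames=["<frame_0>", "<frame_1>", "<frame_2>"],
        dynamics=["the dog starts running", "the dog jumps over the log"],
    )
    print(interleave(clip))

Training on such sequences pairs each visual change with an explicit textual description of it, which is one plausible way the abstract's "consistency-aware generation" objective could be set up.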
Yali Wang, Chong Sun, Chen Li, Binxin Yang, Zheng-Jun Zha, Ying Zhang, Zhizheng Zhang, Shaobin Zhuang, Zhipeng Huang, Canmiao Fu
Subject: Computing Technology, Computer Technology
Yali Wang, Chong Sun, Chen Li, Binxin Yang, Zheng-Jun Zha, Ying Zhang, Zhizheng Zhang, Shaobin Zhuang, Zhipeng Huang, Canmiao Fu. WeGen: A Unified Model for Interactive Multimodal Generation as We Chat [EB/OL]. (2025-03-02) [2025-04-24]. https://arxiv.org/abs/2503.01115.