
VINCIE: Unlocking In-context Image Editing from Video

Source: arXiv
Abstract

In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on videos, our model also shows promising abilities in multi-concept composition, story generation, and chain-of-editing applications.
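The abstract describes a block-causal diffusion transformer operating over interleaved multimodal sequences (text, images, segmentations). The paper record does not include implementation details, so the following is only a minimal sketch of what a block-causal attention mask could look like: tokens attend bidirectionally within their own turn's block and causally to all earlier blocks. Block sizes and the example sequence layout are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a block-causal attention
# mask over an interleaved multimodal sequence. Each block (e.g. a turn's
# text, image, or segmentation tokens) attends fully within itself and
# causally to every preceding block.

import torch


def block_causal_mask(block_sizes: list[int]) -> torch.Tensor:
    """Return a boolean attention mask of shape (T, T); True = may attend."""
    total = sum(block_sizes)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for size in block_sizes:
        end = start + size
        # Tokens in this block see everything up to the end of their own
        # block: full attention inside the block, causal attention to all
        # earlier blocks, nothing from later blocks.
        mask[start:end, :end] = True
        start = end
    return mask


if __name__ == "__main__":
    # Hypothetical interleaved sequence [text_1, image_1, text_2, image_2]
    # with made-up token counts per block.
    mask = block_causal_mask([8, 64, 8, 64])
    print(mask.shape)           # torch.Size([144, 144])
    print(mask[0, 70].item())   # False: turn-1 text cannot see turn-2 content
    print(mask[100, 5].item())  # True: turn-2 tokens can see turn-1 text
```

Under this masking scheme, the three proxy tasks mentioned in the abstract (next-image, current-segmentation, and next-segmentation prediction) would amount to choosing which block's tokens are treated as the denoising target while earlier blocks serve as clean context.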

Leigang Qu, Feng Cheng, Ziyan Yang, Qi Zhao, Shanchuan Lin, Yichun Shi, Yicong Li, Wenjie Wang, Tat-Seng Chua, Lu Jiang

Subject: Computing technology, computer technology

Leigang Qu, Feng Cheng, Ziyan Yang, Qi Zhao, Shanchuan Lin, Yichun Shi, Yicong Li, Wenjie Wang, Tat-Seng Chua, Lu Jiang. VINCIE: Unlocking In-context Image Editing from Video [EB/OL]. (2025-06-12) [2025-07-22]. https://arxiv.org/abs/2506.10941.