Towards Language-Driven Video Inpainting via Multimodal Large Language Models
We introduce a new task -- language-driven video inpainting, which uses natural language instructions to guide the inpainting process. This approach overcomes the limitations of traditional video inpainting methods that depend on manually labeled binary masks, a process often tedious and labor-intensive. We present the Remove Objects from Videos by Instructions (ROVI) dataset, containing 5,650 videos and 9,091 inpainting results, to support training and evaluation for this task. We also propose a novel diffusion-based language-driven video inpainting framework, the first end-to-end baseline for this task, integrating Multimodal Large Language Models to understand and execute complex language-based inpainting requests effectively. Our comprehensive results showcase the dataset's versatility and the model's effectiveness in various language-instructed inpainting scenarios. We will make datasets, code, and models publicly available.
Yunhai Tong, Yining Li, Ziwei Liu, Jingkang Yang, Chenyang Si, Jiangning Zhang, Shangchen Zhou, Xiangtai Li, Kai Chen, Jianzong Wu, Chen Change Loy
Subject: Computing Technology; Computer Science
Yunhai Tong, Yining Li, Ziwei Liu, Jingkang Yang, Chenyang Si, Jiangning Zhang, Shangchen Zhou, Xiangtai Li, Kai Chen, Jianzong Wu, Chen Change Loy. Towards Language-Driven Video Inpainting via Multimodal Large Language Models [EB/OL]. (2024-01-18) [2025-05-01]. https://arxiv.org/abs/2401.10226.