Addressing Text Embedding Leakage in Diffusion-based Image Editing
Text-based image editing, powered by generative diffusion models, lets users modify images through natural-language prompts and has dramatically simplified traditional editing workflows. Despite these advances, current methods still suffer from a critical problem: attribute leakage, where edits intended for specific objects unintentionally affect unrelated regions or other target objects. Our analysis identifies the root cause as the semantic entanglement inherent in End-of-Sequence (EOS) embeddings produced by autoregressive text encoders, which indiscriminately aggregate attributes across prompts. To address this issue, we introduce Attribute-Leakage-free Editing (ALE), a framework that tackles attribute leakage at its source. ALE combines Object-Restricted Embeddings (ORE) to disentangle text embeddings, Region-Guided Blending for Cross-Attention Masking (RGB-CAM) for spatially precise attention, and Background Blending (BB) to preserve non-edited content. To quantitatively evaluate attribute leakage across editing methods, we propose the Attribute-Leakage Evaluation Benchmark (ALE-Bench), featuring comprehensive editing scenarios and new metrics. Extensive experiments show that ALE reduces attribute leakage by large margins, enabling accurate, multi-object, text-driven image editing while faithfully preserving non-target content.
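To make the two spatial components concrete, below is a minimal PyTorch-style sketch of the ideas behind RGB-CAM and BB, as described in the abstract. All function names, tensor shapes, and arguments are hypothetical illustrations, not the paper's implementation: RGB-CAM is approximated by confining each object token's cross-attention to its region mask, and BB by blending edited and source latents with a foreground mask at each denoising step.

```python
import torch

def masked_cross_attention(attn_logits, token_masks, object_token_ids):
    """RGB-CAM-style masking (sketch): confine each object token's
    cross-attention to that object's spatial region.

    attn_logits:      (batch, n_pixels, n_tokens) pre-softmax scores
    token_masks:      dict mapping token index -> (n_pixels,) binary mask
    object_token_ids: token indices whose attention should be restricted
    """
    for tok in object_token_ids:
        region = token_masks[tok]                        # (n_pixels,)
        # Suppress attention to this token at pixels outside its region,
        # so its attributes cannot leak into unrelated areas.
        attn_logits[:, region == 0, tok] = float("-inf")
    return attn_logits.softmax(dim=-1)

def background_blend(z_edit, z_src, fg_mask):
    """BB-style latent blending (sketch): keep source latents in
    non-edited regions at each denoising step.

    z_edit, z_src: (batch, C, H, W) latents
    fg_mask:       (1, 1, H, W) binary mask of edited (foreground) regions
    """
    return fg_mask * z_edit + (1.0 - fg_mask) * z_src

# Toy usage: 2 tokens over 4 pixels; token 1 is confined to pixels 0-1.
logits = torch.randn(1, 4, 2)
masks = {1: torch.tensor([1.0, 1.0, 0.0, 0.0])}
probs = masked_cross_attention(logits, masks, object_token_ids=[1])
```

In a real editing pipeline, the masking would hook into the denoiser's cross-attention layers and the blending into the sampling loop; this sketch only conveys the shape of the two operations.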
Sunung Mun, Sunghyun Cho, Jungseul Ok, Jinhwan Nam
Computing Technology, Computer Technology
Sunung Mun, Sunghyun Cho, Jungseul Ok, Jinhwan Nam. Addressing Text Embedding Leakage in Diffusion-based Image Editing [EB/OL]. (2025-08-25) [2025-09-05]. https://arxiv.org/abs/2412.04715.