S$^2$Edit: Text-Guided Image Editing with Precise Semantic and Spatial Control
Recent advances in diffusion models have enabled high-quality text-guided generation and manipulation of images, as well as concept learning from images. However, naive applications of existing methods to editing tasks that require fine-grained control, e.g., face editing, often lead to suboptimal results: identity information and high-frequency details are lost during the editing process, or irrelevant image regions are altered due to entangled concepts. In this work, we propose S$^2$Edit, a novel method based on a pre-trained text-to-image diffusion model that enables personalized editing with precise semantic and spatial control. We first fine-tune our model to embed the identity information into a learnable text token. During fine-tuning, we disentangle the learned identity token from the attributes to be edited by enforcing an orthogonality constraint in the textual feature space. To ensure that the identity token only affects regions of interest, we apply object masks to guide the cross-attention maps. At inference time, the semantically disentangled and spatially focused identity token enables localized editing while faithfully preserving the original identity. Extensive experiments demonstrate the superiority of S$^2$Edit over state-of-the-art methods both quantitatively and qualitatively. Additionally, we showcase several compositional image editing applications of S$^2$Edit, such as makeup transfer.
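The orthogonality constraint described above can be illustrated with a minimal sketch. The function below is a hypothetical construction, not the authors' implementation (the paper enforces the constraint during fine-tuning): it projects a learnable identity token embedding onto the orthogonal complement of the subspace spanned by the text embeddings of the attributes to be edited.

```python
import torch

def orthogonalize(identity_emb: torch.Tensor, attr_embs: torch.Tensor) -> torch.Tensor:
    """Project the identity token embedding onto the orthogonal complement
    of the attribute subspace (illustrative sketch, not the paper's code).

    identity_emb: (d,)   learnable identity token embedding
    attr_embs:    (k, d) text embeddings of the attributes to be edited
    """
    # Orthonormal basis of the attribute subspace via QR decomposition
    q, _ = torch.linalg.qr(attr_embs.T)            # q: (d, k), orthonormal columns
    # Subtract the projection of the identity embedding onto that subspace
    return identity_emb - q @ (q.T @ identity_emb)
```

After this projection, the identity embedding has zero inner product with every attribute embedding, so optimizing the identity token cannot drift along the edited attributes' directions.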
Xudong Liu, Zikun Chen, Ruowei Jiang, Ziyi Wu, Kejia Yin, Han Zhao, Parham Aarabi, Igor Gilitschenski
Computing Technology; Computer Technology
Xudong Liu, Zikun Chen, Ruowei Jiang, Ziyi Wu, Kejia Yin, Han Zhao, Parham Aarabi, Igor Gilitschenski. S$^2$Edit: Text-Guided Image Editing with Precise Semantic and Spatial Control [EB/OL]. (2025-07-07) [2025-07-21]. https://arxiv.org/abs/2507.04584.