Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection
As AI advances, copyrighted content faces a growing risk of unauthorized use, whether through model training or direct misuse. Building upon invisible adversarial perturbations, recent works have developed copyright protections against the misuse of specific AI techniques, such as unauthorized personalization through DreamBooth. However, these methods offer only short-term security, as they require retraining whenever the underlying model architecture changes. To establish long-term protection with better robustness, we go beyond invisible perturbations and propose a universal approach that embeds \textit{visible} watermarks that are \textit{hard-to-remove} into images. Grounded in a new probabilistic and inverse problem-based formulation, our framework maximizes the discrepancy between the \textit{optimal} reconstruction and the original content. We develop an effective and efficient approximation algorithm to circumvent an intractable bi-level optimization. Experimental results demonstrate the superiority of our approach across diverse scenarios.
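The core idea of maximizing the discrepancy between the attacker's optimal reconstruction and the original content can be illustrated with a minimal sketch. This is not the paper's algorithm: the reconstruction operator `R` (a stand-in for the attacker's removal model), the step size, the watermark bound, and the toy image size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 original image x, flattened to a 64-vector for linear algebra.
x = rng.random((8, 8))

# Hypothetical *fixed linear* reconstruction operator R that an attacker
# might apply to remove the watermark (the paper treats the optimal
# reconstruction; here we freeze a simple surrogate for illustration).
n = 64
R = np.eye(n) - 0.5 * np.ones((n, n)) / n

def discrepancy(w):
    """Squared error between the reconstruction of the watermarked image
    and the original content: ||R(x + w) - x||^2."""
    xw = (x + w).ravel()
    return float(np.sum((R @ xw - x.ravel()) ** 2))

# Learn a bounded visible watermark w by projected gradient *ascent*
# on the discrepancy, so the attacker's reconstruction stays far from x.
w = np.zeros((8, 8))
lr = 0.1        # assumed step size
bound = 0.3     # assumed visibility/magnitude constraint on w
for _ in range(100):
    xw = (x + w).ravel()
    # Analytic gradient of the quadratic objective w.r.t. w:
    # d/dw ||R(x+w) - x||^2 = 2 R^T (R(x+w) - x)
    g = (2.0 * R.T @ (R @ xw - x.ravel())).reshape(8, 8)
    w = np.clip(w + lr * g, -bound, bound)  # ascend, then project onto the box
```

Because the surrogate objective is quadratic in `w`, the gradient is available in closed form; the paper instead tackles the harder bi-level case where the inner reconstruction is itself an optimization.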
Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
Computing Technology, Computer Technology
Tianci Liu, Tong Yang, Quan Zhang, Qi Lei. Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection [EB/OL]. (2025-06-03) [2025-07-02]. https://arxiv.org/abs/2506.02665