
Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution

Source: arXiv
Abstract

Existing diffusion-based video super-resolution (VSR) methods are prone to introducing complex degradations and noticeable artifacts into the reconstructed high-resolution videos due to their inherent randomness. In this paper, we propose a noise-robust real-world VSR framework by incorporating self-supervised learning and Mamba into pre-trained latent diffusion models. To ensure content consistency across adjacent frames, we enhance the diffusion model with a global spatio-temporal attention mechanism using the Video State-Space block with a 3D Selective Scan module, which reinforces coherence at an affordable computational cost. To further reduce artifacts in the generated details, we introduce a self-supervised ControlNet that leverages HR features as guidance and employs contrastive learning to extract degradation-insensitive features from LR videos. Finally, a three-stage training strategy based on a mixture of HR-LR videos is proposed to stabilize VSR training. The proposed Self-supervised ControlNet with Spatio-Temporal Continuous Mamba-based VSR algorithm achieves superior perceptual quality compared with state-of-the-art methods on real-world VSR benchmark datasets, validating the effectiveness of the proposed model design and training strategies.
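As a rough illustration of the contrastive objective mentioned in the abstract, the minimal PyTorch sketch below uses an InfoNCE loss to pull the pooled features of a degraded LR clip toward the HR features of the same clip while pushing them away from other clips in the batch. This is one standard way to learn degradation-insensitive features; the encoder names, shapes, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(lr_feats: torch.Tensor, hr_feats: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Contrastive (InfoNCE) loss between LR and HR clip features.

    lr_feats, hr_feats: (B, D) pooled feature vectors, one per video clip.
    The LR feature of clip i is treated as a positive pair with the HR
    feature of clip i, and as a negative pair with all other clips.
    """
    lr = F.normalize(lr_feats, dim=-1)
    hr = F.normalize(hr_feats, dim=-1)
    logits = lr @ hr.t() / temperature                      # (B, B) cosine-similarity matrix
    targets = torch.arange(lr.size(0), device=lr.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical usage with two encoder branches (names assumed for illustration):
# loss = info_nce_loss(lr_encoder(lr_clip), hr_encoder(hr_clip))
```

Minimizing this loss encourages the LR branch to produce features that match the clean HR features regardless of the degradation applied to the LR input.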

Shijun Shi, Jing Xu, Lijing Lu, Zhihang Li, Kai Hu

Computing Technology; Computer Technology

Shijun Shi, Jing Xu, Lijing Lu, Zhihang Li, Kai Hu. Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution [EB/OL]. (2025-06-01) [2025-07-01]. https://arxiv.org/abs/2506.01037.
