Grounding Degradations in Natural Language for All-In-One Video Restoration
In this work, we propose an all-in-one video restoration framework that grounds the degradation-aware semantic context of video frames in natural language via foundation models, offering interpretable and flexible guidance. Unlike prior art, our method assumes no degradation knowledge at train or test time and learns an approximation to the grounded knowledge, so that the foundation model can be safely disentangled during inference at no extra cost. Further, we call for standardization of benchmarks in all-in-one video restoration and propose two benchmarks in the multi-degradation setting, three-task (3D) and four-task (4D), as well as two time-varying composite degradation benchmarks; one of the latter is our proposed dataset with varying snow intensity, simulating how weather degradations naturally affect videos. We compare our method with prior works and report state-of-the-art performance on all benchmarks.
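The abstract does not specify the training objective or architecture, so the following PyTorch snippet is only a minimal sketch of the general idea it describes: a frozen foundation model supplies degradation-aware embeddings during training, while a lightweight head is trained to approximate them, letting the foundation model be dropped at inference. All module names (`FrozenFoundationEncoder`, `ApproximationHead`, `RestorationNet`), shapes, and loss weights are our own illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class FrozenFoundationEncoder(nn.Module):
    """Hypothetical stand-in for a frozen foundation model that yields
    degradation-aware semantic embeddings for a frame (training-time only)."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim),
        )
        for p in self.parameters():
            p.requires_grad = False  # frozen: provides guidance, is never updated

    def forward(self, frames):
        return self.net(frames)

class ApproximationHead(nn.Module):
    """Lightweight head trained to mimic the foundation embedding, so the
    foundation model is not needed at inference."""
    def __init__(self, feat_dim=64, emb_dim=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU(),
                                  nn.Linear(emb_dim, emb_dim))

    def forward(self, feats):
        return self.proj(feats.mean(dim=(2, 3)))  # pool spatial dims, then project

class RestorationNet(nn.Module):
    """Toy restoration backbone conditioned on the approximated context."""
    def __init__(self, feat_dim=64, emb_dim=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, 3, padding=1)
        self.approx = ApproximationHead(feat_dim, emb_dim)
        self.modulate = nn.Linear(emb_dim, feat_dim)
        self.decoder = nn.Conv2d(feat_dim, 3, 3, padding=1)

    def forward(self, degraded):
        feats = torch.relu(self.encoder(degraded))
        ctx = self.approx(feats)                        # approximated grounded context
        gate = torch.sigmoid(self.modulate(ctx))[..., None, None]
        return self.decoder(feats * gate), ctx

# Training step sketch: restoration loss plus alignment to the foundation embedding.
foundation = FrozenFoundationEncoder()
model = RestorationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

degraded = torch.rand(2, 3, 64, 64)   # dummy degraded frames
clean = torch.rand(2, 3, 64, 64)      # dummy clean targets

restored, ctx = model(degraded)
with torch.no_grad():
    target_ctx = foundation(degraded)                   # grounded guidance (train only)

loss = nn.functional.l1_loss(restored, clean) \
     + 0.1 * nn.functional.mse_loss(ctx, target_ctx)    # distill the grounded context
loss.backward()
opt.step()

# At inference, the foundation model is disentangled: model(degraded) alone suffices.
```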
Muhammad Kamran Janjua, Amirhosein Ghasemabadi, Kunlin Zhang, Mohammad Salameh, Chao Gao, Di Niu
Computing Technology, Computer Technology
Muhammad Kamran Janjua, Amirhosein Ghasemabadi, Kunlin Zhang, Mohammad Salameh, Chao Gao, Di Niu. Grounding Degradations in Natural Language for All-In-One Video Restoration [EB/OL]. (2025-07-20) [2025-08-10]. https://arxiv.org/abs/2507.14851.