
Structured Sparsity Learning for Efficient Video Super-Resolution

Source: arXiv
English Abstract

The high computational costs of video super-resolution (VSR) models hinder their deployment on resource-limited devices (e.g., smartphones and drones). Existing VSR models contain considerable redundant filters, which drag down the inference efficiency. To prune these unimportant filters, we develop a structured pruning scheme called Structured Sparsity Learning (SSL) according to the properties of VSR. In SSL, we design pruning schemes for several key components in VSR models, including residual blocks, recurrent networks, and upsampling networks. Specifically, we develop a Residual Sparsity Connection (RSC) scheme for residual blocks of recurrent networks to liberate pruning restrictions and preserve the restoration information. For upsampling networks, we design a pixel-shuffle pruning scheme to guarantee the accuracy of feature channel-space conversion. In addition, we observe that pruning error is amplified as the hidden states propagate along the recurrent network. To alleviate this issue, we design Temporal Finetuning (TF). Extensive experiments show that SSL can significantly outperform recent methods quantitatively and qualitatively.
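The abstract above describes structured filter pruning and a pixel-shuffle-aware pruning scheme for the upsampling network. Below is a minimal, hypothetical PyTorch sketch of these two ideas (L1-norm filter importance and pruning the convolution before pixel-shuffle in groups of r^2 channels); it is not the paper's SSL implementation, and all function names, thresholds, and layer sizes are assumptions for illustration.

```python
# Illustrative sketch only (not the paper's exact SSL algorithm): structured
# filter pruning by L1 importance, plus pixel-shuffle-aware grouping so that
# pruning the conv feeding nn.PixelShuffle(r) removes whole groups of r*r
# channels and keeps the channel-to-space conversion consistent.
import torch
import torch.nn as nn


def filter_importance(conv: nn.Conv2d) -> torch.Tensor:
    # L1 norm of each output filter's weights as an importance score.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    # Keep the top-k filters (structured pruning along the output dimension).
    scores = filter_importance(conv)
    k = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.topk(scores, k).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned


def prune_pixelshuffle_conv(conv: nn.Conv2d, scale: int,
                            keep_ratio: float) -> nn.Conv2d:
    # The conv before nn.PixelShuffle(scale) outputs C * scale^2 channels;
    # each consecutive block of scale^2 channels becomes one upsampled
    # feature map. Prune whole blocks so the mapping stays consistent.
    r2 = scale * scale
    c = conv.out_channels // r2
    # Score each block by the summed importance of its scale^2 filters.
    scores = filter_importance(conv).view(c, r2).sum(dim=1)
    k = max(1, int(c * keep_ratio))
    keep_blocks = torch.topk(scores, k).indices.sort().values
    keep = (keep_blocks.unsqueeze(1) * r2 + torch.arange(r2)).flatten()
    pruned = nn.Conv2d(conv.in_channels, k * r2, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned


# Usage: prune a hypothetical upsampling head (conv + pixel-shuffle) to half
# of its feature maps, then verify the shuffled output shape.
up_conv = nn.Conv2d(64, 64 * 4, 3, padding=1)      # feeds nn.PixelShuffle(2)
pruned_conv = prune_pixelshuffle_conv(up_conv, scale=2, keep_ratio=0.5)
x = torch.randn(1, 64, 32, 32)
y = nn.PixelShuffle(2)(pruned_conv(x))             # -> (1, 32, 64, 64)
```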

Yulun Zhang, Jingwen He, Yitong Wang, Luc Van Gool, Yapeng Tian, Bin Xia, Wenming Yang

Subjects: Computing Technology, Computer Technology; Applications of Electronic Technology

Yulun Zhang, Jingwen He, Yitong Wang, Luc Van Gool, Yapeng Tian, Bin Xia, Wenming Yang. Structured Sparsity Learning for Efficient Video Super-Resolution [EB/OL]. (2022-06-15) [2025-06-24]. https://arxiv.org/abs/2206.07687.
