Interpreting CNN for Low Complexity Learned Sub-pixel Motion Compensation in Video Coding

Source: arXiv
Abstract

Deep learning has shown great potential in image and video compression tasks. However, it brings bit savings at the cost of significant increases in coding complexity, which limits its potential for implementation within practical applications. In this paper, a novel neural network-based tool is presented which improves the interpolation of reference samples needed for fractional-precision motion compensation. Contrary to previous efforts, the proposed approach focuses on complexity reduction achieved by interpreting the interpolation filters learned by the networks. When the approach is implemented in the Versatile Video Coding (VVC) test model, up to 4.5% BD-rate saving for individual sequences is achieved compared with the baseline VVC, while the complexity of the learned interpolation is significantly reduced compared to applying the full neural network.
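
The complexity reduction described above rests on interpreting the filters the network has learned rather than running the network at decode time. Below is a minimal NumPy sketch of that general idea, under the assumption that the learned interpolation network is purely linear (convolutions without nonlinear activations), so its layers can be collapsed offline into one small FIR filter per fractional sample position. The kernel sizes and coefficients are illustrative stand-ins, not the trained filters or the implementation from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for learned 1-D kernels of a small linear "network" for one
    # fractional sample position; the real trained coefficients differ.
    layer1 = rng.normal(size=5)
    layer2 = rng.normal(size=5)

    # A row of integer-position reference samples (luma).
    ref = rng.integers(0, 256, size=64).astype(np.float64)

    # Network-style inference: run the two convolutional layers in sequence.
    net_out = np.convolve(np.convolve(ref, layer1, mode="same"), layer2, mode="same")

    # "Interpretation" step, done once offline: collapse the layers into a
    # single equivalent FIR interpolation filter.
    collapsed = np.convolve(layer1, layer2)          # 9-tap equivalent filter
    fir_out = np.convolve(ref, collapsed, mode="same")

    # Away from the block borders the single filter reproduces the network
    # output exactly, at a small fraction of the per-sample multiply count.
    assert np.allclose(net_out[8:-8], fir_out[8:-8])
    print("equivalent interpolation filter taps:", np.round(collapsed, 3))

In this simplified setting, applying the derived filter is mathematically identical to running the layered network, which is the kind of equivalence a codec integration can exploit to keep decoding cost close to that of conventional interpolation filters.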

Authors: Noel E. O'Connor, Saverio Blasi, Marta Mrak, Luka Murn, Alan F. Smeaton

DOI: 10.1109/ICIP40778.2020.9191193

Subjects: Computing and Computer Technology; Electronic Technology Applications; Communications

Noel E. O'Connor, Saverio Blasi, Marta Mrak, Luka Murn, Alan F. Smeaton. Interpreting CNN for Low Complexity Learned Sub-pixel Motion Compensation in Video Coding [EB/OL]. (2020-06-11) [2025-08-18]. https://arxiv.org/abs/2006.06392.
