
RecConv: Efficient Recursive Convolutions for Multi-Frequency Representations

Source: arXiv
Abstract

Recent advances in vision transformers (ViTs) have demonstrated the advantage of global modeling capabilities, prompting widespread integration of large-kernel convolutions for enlarging the effective receptive field (ERF). However, the quadratic scaling of parameter count and computational complexity (FLOPs) with respect to kernel size poses significant efficiency and optimization challenges. This paper introduces RecConv, a recursive decomposition strategy that efficiently constructs multi-frequency representations using small-kernel convolutions. RecConv establishes a linear relationship between parameter growth and the number of decomposition levels, which determines the effective receptive field $k\times 2^\ell$ for a base kernel $k$ and $\ell$ levels of decomposition, while maintaining constant FLOPs regardless of the ERF expansion. Specifically, RecConv achieves a parameter expansion of only $\ell+2$ times and a maximum FLOPs increase of $5/3$ times, compared to the exponential growth ($4^\ell$) of standard and depthwise convolutions. RecNeXt-M3 outperforms RepViT-M1.1 by 1.9 $AP^{box}$ on COCO with similar FLOPs. This innovation provides a promising avenue for designing efficient and compact networks across various modalities. Code and models are available at https://github.com/suous/RecNeXt.
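
As a concrete reading of these formulas, a base kernel $k=3$ decomposed over $\ell=4$ levels yields an effective receptive field of $3\times 2^4 = 48$ at roughly $\ell+2 = 6$ times the parameters, whereas a dense $48\times 48$ depthwise kernel would cost $4^4 = 256$ times as much. The sketch below illustrates the recursive multi-frequency idea in PyTorch; it is not the authors' implementation (see the linked repository for the official code), and the module name RecConvSketch, the pooling/upsampling operators, and the additive fusion are assumptions made here for illustration only.

# Minimal sketch of a recursive multi-frequency convolution, loosely following
# the abstract above. NOT the official RecConv code (see
# https://github.com/suous/RecNeXt); pooling, upsampling, and fusion choices
# are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RecConvSketch(nn.Module):
    """Recursive small-kernel depthwise convolutions over a feature pyramid.

    Each level halves the spatial resolution and applies a small depthwise
    kernel, so parameters grow linearly with the number of levels, while
    level i runs on roughly 1/4^i of the pixels, keeping total FLOPs bounded.
    """

    def __init__(self, channels: int, kernel_size: int = 3, levels: int = 3):
        super().__init__()
        self.levels = levels
        # One small depthwise conv per decomposition level, plus the base level.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size,
                      padding=kernel_size // 2, groups=channels, bias=False)
            for _ in range(levels + 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, cur = [], x
        # Build the multi-frequency pyramid by repeated 2x downsampling.
        for _ in range(self.levels):
            feats.append(cur)
            cur = F.avg_pool2d(cur, kernel_size=2)
        # Convolve the coarsest level, then fuse upward level by level.
        out = self.convs[-1](cur)
        for level in reversed(range(self.levels)):
            out = F.interpolate(out, size=feats[level].shape[-2:], mode="nearest")
            out = out + self.convs[level](feats[level])
        return out


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    y = RecConvSketch(channels=64, kernel_size=3, levels=3)(x)
    print(y.shape)  # torch.Size([1, 64, 56, 56])

In this sketch, the conv applied at the coarsest level covers $k\times 2^\ell$ input pixels once mapped back to full resolution, which mirrors the ERF formula quoted in the abstract; the per-level cost forms a geometric series, which is why FLOPs stay nearly constant as levels are added.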

Mingshu Zhao, Yi Luo, Yong Ouyang

Subject: Computing Technology, Computer Technology

Mingshu Zhao, Yi Luo, Yong Ouyang. RecConv: Efficient Recursive Convolutions for Multi-Frequency Representations [EB/OL]. (2025-06-28) [2025-07-16]. https://arxiv.org/abs/2412.19628.
