Efficient Column-Wise N:M Pruning on RISC-V CPU
In deep learning frameworks, weight pruning is a widely used technique for improving computational efficiency by reducing the size of large models. This is especially critical for convolutional operators, which often act as performance bottlenecks in convolutional neural networks (CNNs). However, the effectiveness of pruning heavily depends on how it is implemented, as different methods can significantly affect both computational performance and memory footprint. In this work, we propose a column-wise N:M pruning strategy applied at the tile level and modify XNNPACK to enable efficient execution of pruned models on the RISC-V vector architecture. Additionally, we propose fusing the im2col and data-packing operations to minimize redundant memory accesses and memory overhead. To further optimize performance, we incorporate AITemplate's profiling technique to identify the optimal implementation for each convolutional operator. Our proposed approach increases ResNet inference throughput by as much as 4.0x while preserving ImageNet top-1 accuracy within 2.1% of the dense baseline.
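For readers unfamiliar with the sparsity pattern, the following is a minimal sketch of what column-wise N:M pruning at the tile level can look like, assuming a 2:4 pattern and an 8-row tile; the function name, tile size, and N:M values are illustrative assumptions and not taken from the paper.

    # Illustrative sketch: column-wise N:M pruning applied per tile.
    # The 2:4 pattern and 8-row tile size are assumed for illustration only.
    import numpy as np

    def prune_column_wise_nm(weights, n=2, m=4, tile_rows=8):
        """For each tile of `tile_rows` rows, keep the n largest-magnitude
        weights in every group of m consecutive entries along a column."""
        pruned = weights.copy()
        rows, _ = pruned.shape
        for r0 in range(0, rows, tile_rows):
            tile = pruned[r0:r0 + tile_rows]      # view into `pruned`
            for c in range(tile.shape[1]):
                col = tile[:, c]
                for g0 in range(0, len(col), m):
                    group = col[g0:g0 + m]
                    # zero out all but the n largest-magnitude entries
                    drop = np.argsort(np.abs(group))[:max(len(group) - n, 0)]
                    group[drop] = 0.0
        return pruned

    W = np.random.randn(16, 8).astype(np.float32)
    W_pruned = prune_column_wise_nm(W)  # 2 nonzeros per 4-entry column group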
Chi-Wei Chu, Ding-Yong Hong, Jan-Jan Wu
Subject: Computing Technology, Computer Technology
Chi-Wei Chu, Ding-Yong Hong, Jan-Jan Wu. Efficient Column-Wise N:M Pruning on RISC-V CPU [EB/OL]. (2025-07-23) [2025-08-10]. https://arxiv.org/abs/2507.17301.